Capabilities for Open Source Innovation: Background

Over the past decade, I’ve been researching open source and technology innovation, partly through employment at several companies that engage in open source, and partly through academic work toward completing a Master’s degree and soon starting a PhD. The heart of this research is what makes companies successful at open source and at technology innovation. It turns out the two have a great deal in common.

Organizational Capabilities

One of the lines of research that’s relevant to this topic explores organizational capabilities. These capabilities are the knowledge that individuals have at a company, but they’re also the abilities the company as a whole has built into its processes, its business, and its employees. Capabilities can be learned over time, so a company isn’t frozen at a fixed set of capabilities that can never change, but it does take time to build up new capabilities or strengthen existing ones. If a company doesn’t have the capabilities to tackle a totally new strategic area or a totally new project, then deciding to enter that new area is only the first step of a learning process. Organizational capabilities also affect strategic outcomes: if two companies make the same strategic business decision—no matter how good that decision is in the context of their target market—one company may succeed because it has the organizational capabilities required to achieve that strategic goal, while the other fails because it lacks the necessary capabilities.

Open Innovation

Another relevant line of research explores open innovation, first proposed by Henry Chesbrough in the early 2000s, though it has been through a number of iterations in the decade-plus since then. In some ways, open innovation is similar to open source, but it’s not quite open source, and the two are often confused. The fundamental concept of open innovation is that innovation in general—whether technology innovation or business innovation—can be accelerated if companies are willing to go outside their boundaries and either share innovative ideas externally or assimilate external innovative ideas internally. An idea that a company shares outward may be one it can’t profit from directly, but another company may be able to help it profit, so together the two can make more profit than if the idea had simply died internally. Being open to assimilating ideas that other companies created means having capabilities (processes and knowledge) for taking that innovation from outside and building it into your own innovation internally.

Open innovation is related to open source in the sense that open source does include both concepts of sharing your internal innovation externally and bringing external innovation internally. But it goes beyond just sharing ideas and knowledge, to actually sharing source code externally and bringing external source code inside the company. If your company is successful at open innovation, you have a set of capabilities towards being successful at open source. You’re not all the way there, but it’s a good boost to getting there.

Much of the change a company makes to become capable in open source has to do with thinking slightly differently about how you create and capture value for customers. It’s no longer a matter of creating all value yourself and capturing all value yourself. Perhaps you created some code, someone else created some more code, and yet another someone created even more code, and you’re getting much more value out of the code you all develop together than you’re putting into it. Which means you can actually be more effective at driving innovation by not doing all the work yourself. A big part of the change is getting comfortable doing development in a way that is not purely under your own direction. Work isn’t driven within one company from someone on high, down through a hierarchy, to a set of individual developers. Instead, the work involves relationships across company boundaries.

Levels of Engagement

There are different kinds of relationships across company boundaries. Some relevant research into these relationships is around companies’ levels of engagement in open source and how that impacts their success, their failure, the capabilities they need to succeed, and ultimately their effectiveness in open source and in delivering value to their customers from open source.

Level 1: The lowest degree of engagement is often called Inner Source. A company at this level has taken some ideas from open source, but isn’t consuming external code or sharing code externally; it is simply collaborating internally a little more like an open source project.

Level 2: The next step up is to use open source, but purely as a consumer, bringing in external code.

Level 3: Companies at this level deliver open source to customers, either straight up, combined with other open source software, or combined with proprietary software.

Level 4: Companies at this level lead an open source project that they created. They release their code as open source, but they don’t really have an active external community.

Level 5: Companies at this level participate in external projects by contributing code, sponsoring, or supporting the project in other ways, but the relationship is somewhat one-directional or distant. Their participation isn’t highly active, but they still benefit from contributing patches and other support, and getting back the source code.

Level 6: The highest degree of effective open source participation is companies who participate as co-leaders. These companies don’t just throw patches over the wall at the open source project, but take the time to actively share their needs with the project, to actively advocate for the directions they think the project needs to go, and to contribute developer time to the project. That doesn’t mean they control the project; in fact, there’s a great deal of value in having many perspectives contributing ideas and together negotiating the direction of the project, for the benefit of everyone involved. At this level, you find companies participating as equals with individual open source developers and respecting them for their technical expertise and role in the project. There’s some nuance here, in terms of how to be effective as a co-leader in an open source project, but the biggest thing to understand is that companies who take that step forward are more effective and get more value out of open source projects.

Other Relationships Across Company Boundaries

Some other relevant research is around more traditional collaborative innovation approaches like strategic alliances, where companies get together under a strong NDA and binding contracts to do some development. These kinds of alliances generally produce proprietary software rather than open source, but the way companies participate in strategic alliances is similar to the way they participate in open source, so the research is relevant.

Research into standards bodies with patent pools is also relevant. The patent pools give the participating companies some degree of protection from each other, which makes them all more comfortable sharing in the strategic direction and the advancement of the technology. Open source foundations often play a similar role when they include some aspects of patent calming within their contribution and licensing terms, so the participating companies don’t have to be afraid of each other. They can set aside defensive tactics and focus on what’s best for the technology, what’s best for innovation, and what’s going to give all the companies the best value together.

Internal and outsourced R&D also have many similarities to open source. The way an internal R&D department works, or the way a company relates to an outsourcing partner, is in fact very similar to the way a company relates to an open source project, or the way companies relate to each other within an open source project. Both open source and more traditional R&D approaches are examples of development happening in a different group, and they require building the same kinds of relationships across the different groups.

Another relevant piece of research is licensing as acquisition. Licensing technology or components as part of a larger innovation effort is an incredibly common pattern. When one company needs an innovative technology that another company has already developed, it negotiates a license to use that technology rather than spending the time and money to develop it in-house. The difference between this form of proprietary licensing and open source licensing is that open source makes the whole process of licensing external innovative technology much easier, because there’s no long negotiation before you get to the point where you can license the technology. Open source technology is freely available under its open source license: you can pick it up, try it, and choose whether or not to integrate it. This is one of the concrete ways that open source accelerates innovation, because it makes it faster and easier for companies to acquire innovative technologies and components through licensing.

Technology Innovation and Open Source

The overall focus of this background study was looking at existing research into technology innovation and existing research into open source, and exploring the similarities between the two sets of research. I didn’t start off with specific assumptions on what I might find, just an idea of cross-referencing between the two, since the people who research the two topics don’t seem to talk to each other, read each other’s research, or attend the same conferences. What I found is that the organizational capabilities required to succeed at technology innovation are a very near match to the organizational capabilities required to succeed at open source. I mapped out more than 100 shared characteristics, but I’ll summarize the top few.

Collaborate with external communities: You will innovate faster by taking advantage of available knowledge and resources out in the open than by trying to do it all yourself internally. Source code is one form of external knowledge and resources that you can use to help your company innovate faster.

Share ideas outward: Tightly grasping every idea you have internally is generally not the path of greatest advantage. You can help accelerate the entire pool of innovation if you’re willing to share ideas outward, and see if other people are willing to contribute to successfully implementing those ideas.

Organizational learning, assimilate ideas inward: Observing external technology innovation or open source isn’t enough; you have to have capabilities in place to assimilate external knowledge and external code.

Efficiency of reuse/modification: In both open source and more general technology innovation, you get an acceleration of innovation from the reuse or modification of existing innovation. There are slightly different mechanics around that in proprietary technology innovation and open source, but the organizational capabilities are basically the same.

Strategic approach to customer value: One set of authors (Morgan & Finnegan) called this “strategic open source”. It’s about taking an approach to customer value where you’re not blindly assuming that the only way to produce customer value is to do everything yourself in-house. Instead, you consciously examine your customers’ needs, the capabilities you have, and the technology you have, and strategically plan out which pieces you should outsource, which pieces you should build in-house, and which pieces would benefit from inviting customers to participate in creating that value, because it will serve their needs better if they’re involved.

Low barrier to entry: In open source this is partly related to the licensing, which makes it easy to pick up the source code and use it. More generally, it has to do with the fact that innovation moves faster as a whole, across an industry, when we aren’t working in tightly constrained silos. If every company across the cloud industry were working in a silo, we would pretty much have Amazon leading the pack, a couple of other companies struggling to do something similar, and that’s it. The barrier to entry for any one company entering the industry would be far too high. When a number of companies are willing to share their ideas and work together, the combined brain power of all of those companies is orders of magnitude larger than any one company on its own.

Conclusion

If you only take away one thing from this post, make it this: open source is mostly just technology innovation. If your company has already developed the capabilities for technology innovation, then open source won’t be a big hurdle. There are a few new things to learn, but it’s not as radically different as you might think. If your company hasn’t really embraced technology innovation yet, you’ll find open source and any other approach to technology innovation challenging. The good news is, the capabilities you need to learn to effectively engage with open source are pretty much the same capabilities you need to learn to effectively innovate at technology anyway. And open source is a particularly easy way to learn those capabilities, because open source communities are generally eager to teach people how to participate and how to succeed with their project. Your competitors in proprietary technology innovation, on the other hand, probably consider every aspect of their technology a closely guarded secret, and are unlikely to have any interest in sharing or helping you learn from their successes and mistakes.

Further Reading

Chesbrough, H. (2003) Open Innovation: The New Imperative for Creating and Profiting from Technology, Harvard Business School Press, Boston, MA.

Chesbrough, H. & Brunswicker, S. (2014) ‘A Fad or a Phenomenon? The adoption of open innovation practices in large firms’, Research Technology Management, vol. 57, no. 2, pp. 16-25.

Ciesielska, M. & Westenholz, A. (2016) ‘Dilemmas within commercial involvement in open source software’, Journal of Organizational Change Management, vol. 29, no. 3, pp. 344-360.

Löfsten, H. (2016) ‘Organisational capabilities and the long-term survival of new technology-based firms’, European Business Review, vol. 28, no. 3, pp. 312-332.

Morgan, L. & Finnegan, P. (2014) ‘Beyond free software: An exploration of the business value of strategic open source’, The Journal of Strategic Information Systems, vol. 23, no. 3, pp. 226-238.

Pisano, G. (2016) ‘Towards a Prescriptive Theory of Dynamic Capabilities: Connecting Strategic Choice, Learning, and Competition’, Harvard Business School Technology and Operations Management Unit Working Paper, no. 16-146.

Westenholz, A. (Ed.) (2012) The Janus Face of Commercial Software Communities — An Investigation into Institutional (Non) Work by Interacting Institutional Actors, Copenhagen Business School Press, Frederiksberg.

(This post is loosely based on a talk I gave at the OpenStack Days Nordic event in October 2017, which was in turn loosely based on my Master’s thesis.)

Transitions in Open Source Initiative leadership

What we call the beginning is often the end
And to make an end is to make a beginning.
The end is where we start from.
— T.S. Eliot, “Little Gidding”

Serving as president of the Open Source Initiative over the past few years has been a joy and an honor, and if I write a memoir someday I’m sure these will stand out as some of the best and brightest years in a long and happy open source career. It has been a delight to collaborate closely with so many people I admire greatly, including Deb Bryant, Molly de Blanc, Richard Fontana, Leslie Hawthorn, Mike Milinkovich, Simon Phipps, Josh Simmons, Carol Smith, Paul Tagliamonte, Italo Vignoli, and Stefano Zacchiroli.

I’m incredibly proud of what the organization has accomplished in that time: continuing stewardship of the open source license list, and growing our individual membership and affiliate programs, which provide a path for the entire open source community to have a say in the governance of the OSI.

All good things must come to an end, and the time has come for me to pass along the president’s hat to the next volunteer. My work life has grown busier and busier in recent months, and I’m starting a PhD soon, so the time I have available to contribute to the OSI has become incredibly fractured. I’d rather empower someone else to do a great job as president than do a mediocre job of it myself for the rest of the year.

It gives me great pleasure to share the news that the OSI board has elected Simon Phipps as the next president. Having Simon at the helm will help make the transition particularly easy, since he served as OSI president before me. I’ve known Simon for many years, long before either of us was involved in the OSI, and one thing that has always impressed me is the way he consistently engages with new ideas, championing the relevance of open source in the ever-changing modern world. He also gave the best talk that I’ve ever seen explaining the four software freedoms and advocating for software freedom (at a conference in Oslo in 2011).

I’ll remain as a member of the OSI board, both to support a smooth transition to the new president, and to continue involvement in several active projects at the OSI. My hope is that handing off the administrative responsibility to Simon will enable me to focus my limited volunteer time on other things like improving the license review process.

I’ll close with an invitation: if you have a passion for open source and/or free software, consider running for the OSI board in one of our annual elections. Any individual member of the OSI can self-nominate as a candidate for the board (voted by the body of individual members), and active affiliate organizations of the OSI can nominate anyone as a candidate (voted by the body of affiliate organizations). Director terms are only 2-3 years, so serving on the board isn’t an overwhelming commitment, and is a great way to contribute your skills and experience to the open source and free software community. Who knows, maybe you’ll be the next president of the OSI after Simon.

(Re-posted from: https://opensource.org/node/902)

The Future of Open Source

William Gibson has a concept of the future in relation to science-fiction, which he’s mentioned on many occasions in slightly different forms. My favorite source is the earliest one I can find, which isn’t quite as quotable as the more polished form on Gibson’s Wikipedia page, but it has a more authentic ring of roughness to it, as if he articulated the idea on-the-fly during the interview:

The future is already here and it is very unevenly distributed and it arrives in bits and pieces constantly.
— William Gibson, “There’s no future in Sci-Fi”, London Sunday Express (April 2, 2000)

The future isn’t a binary switch, where suddenly, someday we’ll find ourselves “bang!”, in the future. The future is a progression over time, and while it’s impossible to predict the future with 100% accuracy, we see the “bits and pieces” of our future all around us. Sometimes it’s hard to figure out which bits will be significant, and which will simply fall by the wayside. But, significance develops over time too, so by looking back far enough we can trace which bits increase in significance and which decline, and the apparently random distribution begins to develop a pattern.

Lately I’ve been applying this idea to the industry that’s grown up around software, and especially free software/open source, sort of looking through the past to see the future. In the process, I’ve hit on an analogy from physics that provides a decent framework for exploring the history of software and the distribution of innovation over time. It’s based on the concept of an object traveling faster than the speed of sound. Similar principles apply for a particle traveling faster than the speed of light, but the speed of sound is generally easier for people to relate to.

First Age of Software

The early days of software, from about the 1940s to the 1970s, were like traveling at subsonic speeds. Sound naturally travels in waves, outward from the source of noise in all directions. To visualize this, you can picture a pebble dropped into a still pond, and how the water ripples outward. By analogy, innovation in software was initially slow and happened in scattered pockets. An innovation strike is the bit where “the future is here”, while the waves rippling out from the center are the uneven distribution of that “future” into the wider world.

[Illustration: first_age_slow]

In the First Age of software, the pace of innovation was slow and regular. The tech industry didn’t value software terribly highly, regarding it as a mere extension of hardware.

“There was little or no interest in protecting software technology separately, because patent protection for computer hardware adequately rewarded innovation. […] When computer programmers first approached the U.S. Copyright Office about protecting programs by copyright law in the mid-1960s, the Office expressed some doubts about whether machine-executable forms of programs could be protected by copyright law on account of their utilitarian nature.”
Software & Internet Law, 4th Edition, pages 31-32

All software was “free and open” in this age, because there was no legal restriction on copying, modifying, and redistributing it.

Back to our analogy: as an object accelerates, the pattern of sound waves changes. The ripples are no longer concentric circles, because each wave travels outward from the object at a specific point in time and space, which the object leaves behind as it travels onward. The fundamental dynamics of flight are the same across a range of subsonic speeds, but the “bunching” of sound waves in the direction of travel hints at what might come next. Similarly, toward the end of the First Age of software, several bits increased in significance, accelerating toward what was to come in the next age.
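
To put a number on that “bunching” (standard Doppler geometry, included only to sharpen the analogy): if an object emits wave crests every T seconds while moving at speed v, and sound travels at speed c, then the spacing between successive crests directly ahead of the object shrinks to:

    (c - v) · T

As v approaches c, the crest spacing ahead of the object approaches zero and the waves pile up; that pile-up is the first hint of the “sound barrier” to come.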

[Illustration: first_age]

The U.S. Copyright Office started to issue copyright registrations under the “rule of doubt” in the 1960s, but uptake was quite slow.

“Between 1966 and 1978, only about 1,200 copyright registration certificates were issued [for software].”
Software & Internet Law, 4th Edition, page 32

Public sentiment started to shift, and while sharing software freely was still perfectly legal, some people thought it shouldn’t be.

“As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?”
— Bill Gates, An Open Letter to Hobbyists (February 3, 1976)

In 1974, the U.S. Congress established the Commission on New Technological Uses of Copyrighted Works (CONTU) as part of the review for drafting the U.S. Copyright Act of 1976. CONTU’s final report in 1978 concluded that software should be copyrightable, but it came too late to be included in the 1976 Act, which was finalized October 19, 1976 and went into effect January 1, 1978.

“A majority of the CONTU Commissioners concluded that computer programs already were copyrighted under the Copyright Act of 1976, but it recommended some changes to this Act to make appropriate rules for programs.”
Software & Internet Law, 4th Edition, page 32

This recommendation can be regarded as the tipping point for software copyright, but apparently the tech industry took a “wait and see” approach to it, since no copyright infringement cases were filed for software until after U.S. copyright law was officially amended.

Middle Age of Software

A funny thing happens when an object accelerates to the speed of sound (also called Mach One). Because the sound waves are traveling at the exact same speed as the object, they keep pace with it, and the longer the object flies at Mach One, the more overlapping sound waves build up around it, creating a massively turbulent zone. For a long time, people thought it was utterly impossible to fly faster than the speed of sound. They called that leading edge of overlapping waves the “sound barrier”, and pilots died trying to break it. Traveling at Mach One is analogous to the rush to capitalize on software copyrights in the 1980s and 1990s, and the sound barrier corresponds to an area of disruption where business models based on licensing proprietary software can succeed. The calmer area behind the turbulent zone is the massive body of free and open software that persisted and grew, despite all predictions to the contrary.

[Illustration: middle_age_turbulent]

The defining moment of the second age of software was in one sense quite a tiny thing, no more than a few small changes (pages 14-15) to the lengthy Title 17 of the United States Code, enacted on December 12, 1980.

“In 1980, Congress expressly incorporated protection for computer programs into the Copyright Act.”
Intellectual Property in the New Technological Age, 6th Edition, page 433

But, those few words had a radical impact on the software industry. What followed was a period when companies eagerly and actively sought to enforce their new rights, through litigation or direct action.

“The first questions about copyright for computer programs involved the simplest form of copyright infringement — direct copying of the program code. […] the disputed issue in these early cases was whether computer programs could be protected by copyright at all. The first major case confronting this issue involved Apple Computer.”
Software & Internet Law, 4th Edition, page 33

Apple Computer, Inc. v. Franklin Computer Corp., filed on May 12, 1982, was initially ruled in favor of Franklin, but Apple appealed successfully, establishing case law precedent for operating systems as software subject to copyright law. In the original ruling in favor of Franklin, the district court concluded that Apple Computer failed to show irreparable harm, stating “It is also clear that Apple is better suited to withstand whatever injury it might sustain during litigation than is Franklin to withstand the effects of a preliminary injunction.” In hindsight, the district court had a reasonable point, and in light of Apple’s statements that they already “had annual sales of $335,000,000 for fiscal year 1981”, it seems fairly unlikely that the case had any significant impact on Apple’s success and future growth as a business, or that Franklin’s continued business would have detracted from their profits in any meaningful way.

Over the same time period, another story arc was unfolding, which was barely noticed at the time, but had a far greater impact on the course of software history. Richard Stallman, working at MIT’s AI Lab, was surprised by an encounter with changing attitudes toward software ownership, which inconveniently prevented him from porting a paper-jam notification fix from an old printer at the lab to a newer printer.

“In the course of looking up the Xerox laser-printer software, however, Stallman made a troubling discovery. The printer didn’t have any software, at least nothing Stallman or a fellow programmer could read. Until then, most companies had made it a form of courtesy to publish source-code files–readable text files that documented the individual software commands that told a machine what to do. Xerox, in this instance, had provided software files only in compiled, or binary, form.”
Free As in Freedom (2.0), page 4

Richard tried to gain access to the source code, reaching out to academic colleagues as he would have done in the past. He was unsuccessful, as those who had the source code declined his request for a copy, citing a contractual agreement with Xerox as the reason. As surprise gave way to thoughtful reflection, Richard began to shape the seed of an idea that would grow to become his life-long passion.

“I already had an idea that software should be shared, but I wasn’t sure how to think about that. My thoughts weren’t clear and organized to the point where I could express them in a concise fashion to the rest of the world. After this experience, I started to recognize what the issue was, and how big it was.”
— Richard Stallman, Free As in Freedom (2.0), page 9

Beginning quite informally with comments on “communal sharing” in a paper about Emacs, he fleshed out his ideas over the course of several years, leading to the announcement of the GNU Project in 1983; the publication of the GNU Manifesto, formation of the Free Software Foundation, and first release of the GNU Emacs License in 1985; and a generalized version of the Emacs license (and other variants for the GNU C compiler and debugger) dubbed the GNU General Public License in 1989.

In another parallel story arc, in 1977 UC Berkeley released a mixture of their own code and code from AT&T Unix, under the name Berkeley Software Distribution (BSD). They released it before software copyrights existed, so imagine their surprise 15 years later, when a subsidiary of AT&T filed a lawsuit against them for copyright infringement. The judge denied the injunction, and the case was settled out of court.

Notable milestones in proprietary software in the Middle Age include the first release of the Oracle database in 1979 (rewritten in C in 1983); the first release of MS-DOS in 1981; the first release of Norton Utilities in 1982; the first releases of IBM DB2, Lotus 1-2-3, and Microsoft Word in 1983; the first releases of Apple’s Macintosh operating system, MacPaint, MacWrite, MacDraft, AppleWorks, and Quicken in 1984; the first releases of Microsoft Windows, Aldus (later Adobe) PageMaker, and StarOffice in 1985; the first releases of Adobe Illustrator and OS/2 in 1987; the first release of NeXTSTEP in 1989; the first release of Adobe Photoshop in 1990; the first release of the NCSA Mosaic web browser in 1993; the first release of Netscape Navigator in 1994; the first releases of Microsoft Internet Explorer, Sun Java, and Netscape JavaScript, and the launch of Amazon.com in 1995; the first release of Macromedia (later Adobe) Flash in 1996; the launch of Google search and the first release of Netscape Communicator in 1997; and the first release of Mac OS X Server in 1999.

Notable milestones in free software in the Middle Age include the first release of LaTeX in 1984; the first releases of GCC and Perl, and the Artistic License in 1987; the BSD and MIT licenses, the first release of POSTGRES (later PostgreSQL), and commercial support for free software by Cygnus in 1989; the first release of CERN httpd in 1990; the first releases of the Linux kernel and Python in 1991; the first releases of Debian, FreeBSD, NetBSD, and Lua, and commercial sales of machines pre-installed with a free software operating system by VA Linux in 1993; the first release of Red Hat in 1994; the first releases of MySQL, the Apache HTTP Server, PHP, and Ruby in 1995; the first releases of GIMP and KDE in 1996; the first release of the Mozilla Application Suite (formerly Netscape Navigator & Communicator) in 1998; and the first releases of GNOME, Asterisk, and CUPS in 1999.

The Middle Age of software was characterized by proprietary software at the leading edge of software innovation, and free software playing catch-up, combined with a belief that the world would always be this way. No one considered the possibility that the introduction of software copyrights might have induced a unique set of conditions, sustained for a time but fading and not reproducible. In the latter half of the Middle Age, more and more new software innovations were built on free software or as free software, and yet conventional wisdom still held that a business profiting from free software was an oddity. Ultimately, the golden age of proprietary software lasted a grand total of about 15 years, a mere blip on the scale of software history.

Modern Age of Software

The most important thing to understand about the sound barrier is that it doesn’t actually exist. The turbulence experienced at Mach One is really no more than a convergence of transition effects between subsonic speeds and supersonic speeds. With advances in technology, “breaking the sound barrier” quickly shifted from deadly to merely dangerous, and eventually became so easy it wasn’t even a noticeable disruption in flight. Once an object accelerates past Mach One, it travels faster than the sound waves it creates, and so the waves fall behind the object, creating an outward spreading cone. The compressed sound waves at the outer edge of the cone form a zone called the “shock wave”, which is experienced by observers as a sonic boom (comparable to the noise level of firecrackers) as the edge passes by them. You can visualize this like a wake stretching out behind a fast boat. In software, the shock wave is the same disruptive zone we saw in software’s Middle Age, where business models based on proprietary software can succeed. But at a supersonic pace of innovation, this zone is much thinner, and presents no barrier to open solutions overtaking and surpassing proprietary solutions.
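
For readers who want the aeronautics spelled out, the geometry of that cone reduces to one textbook relation (standard physics, included here only to ground the analogy). If the object moves at speed v and sound travels at speed c, the Mach number is M = v/c, and the half-angle μ of the cone satisfies:

    sin μ = c/v = 1/M

At M = 1 the angle is 90 degrees, which is the piled-up wall of waves at the “sound barrier”; as M grows, the cone sweeps back ever narrower behind the object. In the analogy, a faster pace of innovation means a thinner disruptive zone trailing behind it.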

[Illustration: modern_age]

There was no single defining watershed moment at the start of the Modern Age of software, like there was for the start of the Middle Age, but a convergence of factors places the beginning sometime around the year 2000. That doesn’t mean the final outcome was obvious to everyone in the year 2000, but in hindsight we can approximately place the point at which the future was “here” in 2000, and characterize what followed as a process of evening out the distribution and accelerating the pace.

  • While it’s tempting to identify the open source movement as a catalyst for the Modern Age of software, it’s more accurate to say that events and conditions surrounding the start of the Modern Age were the catalysts for the open source movement, in much the same way that events and conditions surrounding the start of the Middle Age were the catalysts for the free software movement. What was a significant factor in the start of the Modern Age was that attitudes started to shift: corporations were more open to using free software, and many free software projects were more open to corporate involvement. A small group of people in favor of this shift chose the name “open source”, and it gained popularity surprisingly quickly. But under any other name (“collaborative software”, “shared software”, “modern software”, or simply continuing under “free software”), the outcome would have been nearly identical. The market forces driving the start of the Modern Age were broad, deep, and strong.
  • It was difficult to predict the outcome of Netscape’s underdog maneuver of releasing their Navigator web browser as open source through the Mozilla Organization in 1998. Mozilla’s announcement 7 months later that they’d be scrapping Netscape’s codebase further clouded the issue. But looking back, the beginning of the end was clear in Internet Explorer’s slowed growth from 1999-2001 and market share peak at 96% in 2002, followed by a decline to 10% today.
  • The burst of the dot-com bubble in 2000 and the subsequent evaporation of lush VC funding meant that paying for proprietary software licenses for programming languages, databases, web servers, etc. simply wasn’t a viable option for the vast majority of startups, especially in the face of stable, reliable free software/open source alternatives already used extensively in production environments.
  • Apple transitioned their flagship operating system to a free software/open source base over 1999-2001, just one indicator out of many of the shifting attitudes.
  • 20 years of experience with proprietary software was enough to see and begin articulating its practical disadvantages. Since the free software movement started at pretty much the same time as proprietary software, it could predict the failure modes, but its (impressively accurate) predictions were unfortunately often dismissed by those with high hopes for making big profits from software copyrights. The start of the Modern Age was littered with catchy aphorisms about collaborative, dynamic, faster-paced development models: “given enough eyeballs, all bugs are shallow”, “scratching a developer’s personal itch”, and “users as co-developers” for “rapid code improvement and effective debugging” (Eric Raymond, The Cathedral & the Bazaar). The subsequent 15 years have shown that the benefits of open development models aren’t automatic. It takes work to build a healthy collaborative community, and simply slapping a free software/open source license on a chunk of code and tossing it over the wall rarely succeeds. But the fact remains that proprietary development models exclude any possibility of collaborative benefits. The fundamental nature of proprietary software restricts its own potential for growth.
  • The rise of software patents from 1995 onward was regarded as a potential threat to free software and open source, and led to the addition of explicit patent license clauses in the Mozilla Public License 1.1 in 1999, the Apache License 2.0 in 2004, the Artistic License 2.0 in 2006, and the GNU General Public License 3.0 in 2007. Ironically, it turns out that one of the safest ways to drive forward rapid innovation in a space that’s heavily patented by multiple companies is to gather as many of those companies as possible, license their patents into a free software/open source project, and run along with collaborative development while effectively ignoring patents, on the strength of the fact that all the companies together own far more patents in the space than any single company outside the pool.

The period from 2000 to 2005 was marked by growing acceptance of a new way of looking at software. Fearful predictions that open source would undermine the software industry gave way to recognition of open source as a new model of decentralized economic production. The message of free software and open source didn’t change, but more and more people were repeating it. In 2006, Tim O’Reilly predicted that by 2010 open source would “be part of every business and every company’s strategy”. And indeed, 2010 saw the rise of the meme that “Open Source Won”.

After the celebrations died down, the open source movement suffered from anti-climax: If the true goal of open source was “enablement of a commercial industry”, then what’s left to do after you succeed? In recent years this has been followed by a form of low-grade disillusionment: If we won, why are we still repeatedly explaining how open source collaboration works? The answer to both questions is that corporate adoption was never the sole defining goal of the open source movement; it was only a significant milestone along the way. The true goal of the open source movement is encapsulated in the Open Source Definition, advocating for the freedom to use, modify, and redistribute software, and the benefits of exercising that freedom. If this sounds stunningly similar to the philosophy of the free software movement, it should come as no surprise: the Open Source Definition is a slightly modified copy of the Debian Free Software Guidelines, which were drafted as an explicit expression of the philosophy of free software. That doesn’t mean the two movements are identical, just very, very similar.

To a large extent, the free software movement was unaffected by the open source identity crisis. Free software’s focus from the beginning was on the freedom to use, modify, and redistribute software, and while it offered several ideas around profiting from free software using business models based on support or services, attracting corporations was never one of its goals. However, the free software movement hasn’t been excluded from the benefits of growing acceptance in the Modern Age of software. While it’s only one indicator out of many, an easily accessible form of public hard data for U.S. non-profits is their tax returns, which show that the Free Software Foundation saw an increase in annual revenue from $376k in 2001 to $1.25m in 2012. (Even though it’s really comparing apples to oranges, I feel obligated at this point to share the same public information for the Open Source Initiative, which saw an increase of annual revenue from $25k in 2001 to $117k in 2014.)

Some feared that open source “winning” would undermine the work of the free software movement, by tempting people to settle for the lesser goal of corporate adoption and lose sight of the ultimate goal of software freedom. I can’t find any evidence that this has turned into a significant trend. Anecdotally, over the past few years, I’ve seen quite a few open source developers do a bit of soul-searching and find their true roots in the free software movement, but I haven’t seen a single free software developer decide to give up on software freedom because free software and open source achieved corporate adoption.

In the period from 2010-2015 free software and open source continued to grow rapidly, without any concrete vision from the open source movement on what should happen next, and with the free software movement still firmly focused on a more distant future where all software is free software. This success is driven by a self-reinforcing cycle of economic necessity and acceleration of software innovation. The greater the body of stable, reliable free software and open source, the greater the pressure to use it and build on it. If your competitors are using and building on free software and open source, then avoiding it puts you at a competitive disadvantage, because they can innovate faster and cheaper through free reuse rather than wasting resources on reinventing wheels. Once the use of free software and open source is ubiquitous among a group of competitors, the game shifts and “use” becomes mere table stakes for entry. The new competitive edge is participation, because the companies who dedicate resources to fix bugs and add features in the projects they use, end up reducing their own cost of providing support and services for those projects, and by contributing changes back upstream they reduce their own cost of maintenance and integration of new releases. Greater contributions over time lead to more stable, reliable, and successful free software and open source solutions, which in turn increase the pressure for even more use, and even more participation.

That sounds like we’re done, right? Well, not yet, but we’re definitely headed in the right direction.

What Lies Ahead

Looking back over the past 70 years or so, some patterns emerge. The legal system is moving toward increasing restrictions on the intellectual property of software, with no sign of rescinding any significant portion of earlier restrictions or the accumulation of case law built on those restrictions, and every indication that new restrictions will continue to be added in the foreseeable future. The good news is that free software and open source have always found ways to overcome each increasing restriction, so the successive introduction of new intellectual property laws for software ownership hasn’t ultimately presented a serious obstacle to software freedom. In a surprising twist, the increasing restrictions have served to boost the growth of free software and open source, by making it more difficult to successfully innovate using a proprietary business model.

Over the next 20-50 years, we can expect to see an increasing number of technical innovations released initially as free software and open source. Of the technical innovations that are initially released as proprietary, we can expect an increasing number to be either undercut and sidelined by rapidly innovating open alternatives, or else released later as free software and open source by their creators to avoid being undercut and sidelined. We can’t say proprietary software is dead, and it’s likely to linger in one form or another for decades into the future. But the patterns of significant bits through history brand proprietary software as a less-than-healthy offshoot in the evolution of software business models, and the trend for proprietary software from the 1980s to today is one of slow decline and increasing dependence on free software and open source to survive. Some companies attribute their success to proprietary software, but a deeper analysis tends to reveal that their true success lies in some combination of other business models (support, services, integration, content, or hardware) that are compatible with free software and open source licensing, so the perceived effect of proprietary software licensing is a mirage.

We can expect to see an increasing number of companies who go beyond use, to participate in creating free software and open source. More and more of these companies understand the principles of software freedom, from the most junior developers to the most senior executives. Others are still only driven to adopt free software and open source out of economic necessity, and herein lies one of the great challenges of the next few decades. Inexperienced companies can cause a great deal of harm as they blunder around blindly in a collaborative project, throwing resources in ways that ultimately benefit no one, not even themselves. It is in our best interest as a community to actively engage with companies and teach them how to participate effectively, how to succeed at free software and open source. Their success feeds the success of free software and open source, which feeds the self-reinforcing cycle of accelerating software innovation.

We can expect to see increasing diversity in the individuals who participate in free software and open source, not just social diversity (culture, geography, ethnicity, gender, etc.) but also diversity of skills beyond development, in areas like graphic design, user experience, product design, and user support. This change is already happening, and we’re making good progress, but we can expect more growing pains in the coming years. Diversity isn’t a matter of us teaching them to fit in; it’s a matter of them teaching us what they need, and us giving them room to explore and find their own way to work with us. Lately, one of the places I’ve frequently seen the need for embracing diversity of skills is when multiple companies hire user experience and product people to work on the same free software/open source project, but the project doesn’t provide channels for them to collaborate across company boundaries, because they aren’t developers. The OpenStack project has started experimenting in collaboration for user experience and product skills, and I’m pleased with the results so far.

It’s looking promising that over the coming decades we’ll see increasing unification across the free software movement and the open source movement, under the banner of software freedom, or perhaps some other name. (Personally, I’ll gladly accept whatever name the two communities can agree on, and for the past decade that seems to be “software freedom”.) As free software and open source succeed and proprietary software continues to decline, the motivation for debating about tactics dissipates, leaving a growing motivation to collaborate around accelerating, solidifying, and celebrating our success.

(The illustrations of subsonic, Mach One, and supersonic pace of innovation are based on “Transonico” by Ignacio Icke, licensed under Creative Commons Attribution-ShareAlike 3.0 Unported.)

Engaging hidden influencers as active participants

Shared Intro: After the summit (#afterstack), a few of us compared notes and found a common theme in an underserved but critical part of the OpenStack community.  Sean Roberts, Allison Randal, and Rob Hirschfeld committed to expand our discussion to the broader community.  Instead of sharing a single post, we wanted to bring our individual perspectives to you and start a dialog. See Rob’s post and Sean’s post.

Historically, open source projects have focused on developers as individuals, applying a kind of social filter to their corporate affiliations. And historically, this was the right approach to take. Those of us who have been around a while have collected far too many old stories of companies that tried to control open source projects rather than beneficially participating. Downplaying the companies behind the developers was an effective way to channel their contributions through a sanity filter.

But, there’s another dimension that few open source projects acknowledge, which Rob Hirschfeld has aptly dubbed “hidden influencers”. I encountered it at Canonical, and now at HP: developers don’t operate in a vacuum, and their success or failure at contributing to open source projects depends heavily on the support (or lack of support) they receive from the management context they operate in. This is true even for independent developers — whatever day job they hold to pay the bills, that employer may not understand why they spend their nights and weekends on volunteer development. (Many of my employers have understood, and I have always felt lucky that they do.)

Support from management has an even greater impact on developers who work on open source as their primary job.

Some people might see that as a bad thing, but I don’t. I can tell you from experience at Canonical and HP that a manager or executive who understands open source (possibly even has experience doing open source development) is a powerful force for good in an open source project. Unlocking the power of those hidden influencers is a unique opportunity, and very few open source projects are effectively taking advantage of it.

As I’ve talked with managers at HP and other companies around OpenStack, it impresses me that the vast majority who have OpenStack core reviewers or PTLs on their team are actively enthusiastic about their developers’ work and success within the project. I saw a number of these managers at the Summit in Atlanta. But you probably didn’t see them or talk to them.

Are these managers hiding?  Not exactly.  They are all over:

  • Design Summit:  Those sessions are geared toward developers.
  • General session: Those sessions are geared toward users and operators.
  • Developer’s lounge
  • Hallways: This informal “track” is the main place where you’ll see managers interacting.

But there really isn’t any forum for managers to interact around the kinds of topics that fill their daily task lists.  So what would have been valuable in cross-company management communication?

  • Allocating developers’ time to deliver the features discussed.
  • Sharing workloads across developers in multiple companies, and avoiding duplicated efforts.
  • Balancing internal delivery commitments with external delivery commitments.
  • Growing new contributors, from existing employees and new hires.

I’m not talking about release management here; OpenStack has a highly effective release management program already in operation. What I’m mostly talking about is people management, though there’s also an element of business strategy around OpenStack.

I’d like to see OpenStack as a project more actively engaging with managers as participants, acknowledging their contributions and building collaboration structures for them. Done right, this could set a strong positive example for future open source projects, blazing a new path for guiding beneficial corporate contribution that goes above and beyond simply trying to ignore the companies involved. Let’s start by simply talking with managers of OpenStack contributors at a variety of companies, finding out where their pain points are and what kind of collaboration would be most beneficial to them.

Relativity, skepticism, and virtual worlds

Yesterday on Twitter I posted:

Has anyone applied Einstein’s theory of relativity to radical skepticism? i.e. spaces of reference in knowledge relative to each other.

Twitter is great for tossing out a quick idea, but on reflection, this one probably needs more explanation. To give credit where credit is due, the idea occurred to me while I was watching a Coursera lecture on Epistemology by Duncan Pritchard (University of Edinburgh) for an evening’s entertainment (what can I say, it’s more fun than most things on television). Though, ultimately, the idea is a distillation of several lines of thought that have been knocking around in my head for years now. (If you really, really want to get at the roots, it all goes back to Richard Jozsa, who was my professor in Quantum Computation at the University of Bristol before he moved on to Cambridge, and who set me on a whole new path of thinking.)

First, a bit of lightweight background, so everyone can follow along. In very rough terms, radical skepticism is a perspective on the fundamental nature of human knowledge, specifically that knowledge is impossible. Pritchard used the classic “brain in a vat” or “Matrix” illustration, which might be simply stated: If I were a brain in a vat, being fed fake experiences by a computer (far more advanced than any we currently have, but still, play along for the sake of argument), then everything I think I know about the universe would actually be fake. I wouldn’t really “know” anything. And the kicker is, there really isn’t any way to prove I’m not a brain in a vat, and therefore, at a very basic level there isn’t any way to prove that anything I know is true. (Apologies to academic philosophers who might happen to read this, I’m keeping the explanation as simple as possible.)

Now, relativity in physics (again, very much simplified) is a theory that examines motion within frames of reference. For an intuitive sense of what this means, go outside and throw a ball to a friend so they can catch it. Now, go ride on a train with the friend and throw a ball there (try not to hit the other passengers). The two situations have radically different external contexts, one is stationary on the sidewalk, the other is hurtling along the tracks. And yet, for you, the friend, and the ball, the motion is the same: if you throw the same way and catch the same way, the arc of the ball relative to the two of you would be the same. The train car forms a frame of reference, and you can meaningfully examine the laws of physics within that frame, while completely ignoring the motion outside the frame. And, of course, remember that even the “stationary” case is actually on a planet that’s spinning and hurtling through space around the sun, in a universe that’s also in constant motion.
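
The train example can be stated compactly as the textbook Galilean transformation (included here to make “frame of reference” precise). If the train moves at constant speed v along the x axis, coordinates in the train’s frame relate to coordinates on the ground by:

    x′ = x - vt,  y′ = y,  t′ = t

Velocities differ between the two frames by the constant offset v, but accelerations come out identical (a′ = a), which is why the thrown ball obeys exactly the same laws of motion for you on the train as on the sidewalk.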

(Digression: Years ago, I set out on a lark to memorise the entirety of an English translation of Einstein’s “Spezielle Relativitätstheorie” i.e. “The Theory of Special Relativity” (Doc. 71, Princeton Lectures). I still can’t recite the whole thing verbatim, but a funny thing happened along the way (as I took classes in quantum mechanics and astrophysics): at one point a lightbulb went off and I suddenly realized I wasn’t just memorizing anymore, I was beginning to understand what Einstein was talking about on a deep level.)

Okay, bringing it back around: the problem with radical skepticism is that if you declare that it’s impossible to know anything, then any study of the fundamental nature of human knowledge, philosophy, or ultimately our entire existence, is really rather meaningless. Simply saying “it doesn’t really matter if I’m a brain in a vat” is a shatteringly weak argument in the face of “everything you think you know is wrong”. And yet, the recorded history of humanity demonstrates that it’s possible to construct internally consistent systems of knowledge, and that there’s value in exploring the nature of that knowledge. What if, instead of trying to wave away radical skepticism in a puff of smoke, we accepted it as a fundamental truth, while at the same time systematizing a study of knowledge within frames of reference, analogous to Cartesian frames of geometry or relativity frames of motion? So, whether I am a brain in a vat, or a living, breathing physical organism experiencing a physical environment, I exist within a “frame of reference” (a computer-constructed or physical world). Within that frame it’s meaningful to study the system of knowledge, while abstracting away from details outside the frame. It’s even meaningful to study the nature of knowledge across different frames of reference; for example, as either a brain in a vat or a physical organism I might “know” that I have two hands, that one of them is grasping a cup of coffee, and so on. In either frame, I have a true belief, reached through the reasonable application of my cognitive abilities. My knowledge of my hand and my knowledge of the coffee cup are the same relative to each other, within both frames of reference. In a sense, we can say that the “laws of knowledge” are consistent across frames of reference.

Let’s take it one step further. When I inhabit a virtual world, whether it’s as vast and mutable as Minecraft, or as carefully controlled as a game in the Zelda series, I enter into a frame of reference of knowledge. I become a “brain in a vat” to that virtual world, because my dual existence as a physical organism in a physical world isn’t relevant within that frame. This is only becoming more true as technologies like Emotiv’s EPOC controller begin to make the brain-to-world connection more direct, and technologies like Oculus Rift begin to make the world-to-brain connection more direct. Within the frame of reference of a virtual world, I may have genuine knowledge that I have two hands, and that one of them is holding an object. As a slightly more abstract example, I may have knowledge of completely different physical laws in two frames of reference (one may not have gravity, or may permit me to fall from 1,000 feet without injury), and yet each has an internally consistent set of physical laws, and my knowledge of those laws is absolutely essential to my ability to function as an entity within that frame of reference. And, I can meaningfully compare the nature of my knowledge of physical laws across the two frames of reference, even though the physical laws themselves are different. So frames of reference in knowledge aren’t merely an academic abstraction for the sake of reconciling two apparently conflicting approaches to philosophy. They’re also potentially a valuable tool in the study of the fundamental nature of human knowledge, in much the same way that relativity was to physics.

BTW, if anyone knows of research or published articles heading in this general direction of thought, I’d be very interested to hear/read more about it.

Mythbusters – Why I (still) Love Perl

At the very beginning, I should probably make it clear that this post is not a declaration of exclusivity in my relationship to Perl. I love programming languages. I first learned to program about the same time I first learned to read (English) and first studied French. My love for programming languages is very much akin (and I swear linked to the same part of my brain) to my love for human languages: they are all unique and beautiful in their own way. I love Python, I love C, I love Smalltalk, I love Erlang, etc, etc.

But Perl has taken an entirely undeserved beating in recent years, and so, in karmic balance, it deserves a round of outspoken championship, far more than other languages need right now. In pondering why Perl’s current reputation is so completely disconnected from the reality of the language, I’ve boiled it down to three “Big Bang Theory”-esque ideas: “The Cookie Slap! Effect”, “The Awkward Adolescence Fallacy”, and “The Singularity Paradox”.

The Cookie Slap! Effect

There’s a “Perl is Dead” meme floating around. It’s been around for a while, long enough that it’s picked up steam and plows along despite all evidence to the contrary. It’s so common, I won’t bother providing links, because you’ve seen more of them than I can count. It’s so common, I even heard it recently from a young MBA graduate, who was so entirely non-technical that he didn’t even know what HTML and CSS were, while working in sales for a web startup. But, he knew what Perl was, and he “knew” it was dead. Weird, but that’s how memes work. So, how did this meme start?

There’s a well-known gaming scenario often called “King of the Hill”, where one player has special status, and the goal of all the other players is to knock that player out of the privileged position and take it for themselves. There was a time when Perl owned the web. It was the duct tape that built the “Web 1.0 City”. There are a number of reasons why Perl succeeded so wildly, but most of them boil down to being in the right place, at the right time, with a fresh, dynamic take on what “programming” should be like, and what “programmers” should be like. There are many things that C excels at, but manipulating massive, variable-length strings is certainly not one of them. And, at the end of the day, no matter what abstractions you layer on it, Web development is essentially about dynamically building and pushing out very lengthy strings of HTML and CSS (and JavaScript) to browsers that will decide how to render them. C sucked for that. Really sucked. Trust me, I’ve been there, done that.

So, Perl dropped into what was effectively an empty space, at a time when the demand for web services was sky-rocketing. Score! And established itself as the King of the Hill. Score! But then, as the dominant player, Perl also became “the one to beat”. Every upstart young programming language compared itself to Perl. “We’re better than Perl because…” And this is where the “Perl is dead” meme started. It was popular to put-down Perl, because Perl was “unbeatable”. Of course, it was never really unbeatable. The chances that any one language would continue to dominate the web are exceedingly tiny, effectively zero. The chances that any one language will ever again achieve the dominance Perl once had are equally tiny. Especially when you consider the fact that diversity is one of the single strongest cultural values of this miraculous, glorious, networked universe we now inhabit. A cultural value that it partially learned, BTW, from Perl’s TMTOWTDI (there’s more than one way to do it).

So, when you hear “Perl is dead”, remember this: the only reason the meme has strength is because Perl itself has strength. No one feels the need to loudly proclaim “Draco is dead”, because really, no one cares. And secondly, remember that at the root, the primary reason for declaring the untruthful “Perl is dead”, rather than the far simpler and truthful “Perl doesn’t have the domain dominance it once had”, is an insecurity about whatever language the speaker happens to love. Perl is an elder statesman in a closed system. The shocking truth about the TIOBE Index isn’t that Perl has drifted down over the years (languages wax and wane, just look at C), it’s that all of the languages in the top 20 are OLD in technology-years. It’s like looking at Forbes’ list of billionaires from year-to-year: they shift in position, but once someone’s on the list they have an advantage over all the players who aren’t on the list (massive capital to invest in one case, massive numbers of users and lines of working production code in the other), which means they’re likely to stay on the list.

And, if you’ve ever said “Perl is dead”, my advice is to learn a lesson of tolerance: “live and let live”. Perl won’t plow your favorite language under; it’s not a threat to you. But I also guarantee your favorite language won’t plow Perl under. The best you can hope for is to be accepted as a member of “The Fellowship of the Languages”, so grant other languages the same respect you’d like to receive from them.

The Awkward Adolescence Fallacy

Around Perl’s 13th birthday, which just happened to also be the fiery heart of the dot-com bust, the child prodigy suffered from a massive anxiety attack. As the Web 1.0 world went down in flames, the Perl community was tearing itself apart. There were a variety of factors, layers of complexity as in any human conflict, but one of the key factors was the nagging doubt that creeps into anyone’s head when things aren’t going as well as you’d hoped: “Maybe it’s my fault. Maybe I didn’t deserve success.” That line of thinking is generally not true, or at least wildly exaggerated, and generally not helpful.

One result of this early-life crisis was the birth of the Perl 6 idea, but I’ll punt that to the next section. Another more subtle (but also more powerful) result, was a growing obsession with things not being “good enough”. The calm confidence of Perl’s youth was replaced by fears that the syntax wasn’t good enough, the implementation wasn’t good enough, the community wasn’t good enough, the foundation wasn’t good enough, the license wasn’t good enough…

To a certain extent fear is a healthy thing: it drives you to push harder, to conquer the thing that you fear. And Perl did. The catastrophic flame-wars of the late ’90s were put to rest. The foundation was restructured and strengthened, and is now one of the most professionally run open source foundations I participate in, with solid, steady funding to run projects that are hugely beneficial to Perl. The Artistic 2 license was introduced as an improvement, but even the existing Artistic 1 license proved itself in court, in a way that no other open source license ever has, and in a way that benefited the entire open source community. The syntax, implementation, and libraries of Perl 5 have improved substantially, to the point that working with “Modern Perl” is really a very different experience than the Perl of 10 or 15 years ago, while still retaining the characteristics that make it a joyful language to code in.

But the bad side of that fear was an awkward, shy hesitation. Like a gawky teenager, Perl stood at the side of the room, afraid to dance because people might think he looked funny. So, I’ve got a news flash for you: no language is perfect, no syntax is perfect, no implementation is perfect, no community is perfect, no foundation is perfect, no license is perfect, nothing is perfect. Perl wasn’t perfect when it owned the web. Perfection is not the first step to success; it’s not even a milestone on the path. And if you really want to understand how irrelevant perfection is, pick any random language you admire, that you see as the pinnacle of success, and closely inspect its syntax, implementation, community, foundation, and license. You’ll find it’s flawed. Why? Because we’re all human, and we all produce things that are deeply creative, deeply wonderful, and yes, somewhat flawed.

It’s time for Perl to grow up. It’s not a teenager anymore. It’s time to accept what it is, accept what it isn’t, and walk on. And what it is, is pretty outrageously amazing. I’ve recently had the opportunity to help a wildly successful startup, in a domain that sorely needs the advantages of modern tech. Perl is the right tool for the job. If I explained the problem space you’d agree, even if Perl isn’t your favorite language. Perl is the right tool for a lot of jobs, all over the world, right now, stable and reliable, in production, with massive numbers of lines of code.

The Singularity Paradox

When Perl 6 was announced, it had a wonderful effect on the Perl community. It provided an “event horizon” to focus everyone’s attention on Perl’s future. It was an inspiration for new creativity, and a distraction from the flamewars to help kill them off. But over time, some things happened that we didn’t expect at all. The most obvious is that it has taken rather longer than we anticipated. I remember a time when “6 months” was a completely reasonable project estimate for the Perl 6 production release. (Not “By Christmas”, but a real, project-planning 6 months, where I could map out what needed to happen each month.) 13 years later, that clearly didn’t happen. But the time factor is actually a side-effect of other things we didn’t anticipate. The single biggest thing we didn’t anticipate is that the “community rewrite of Perl” has, in fact, turned out to be a community fork. Perl 6 is not like Python 3, which really is a continuation of Python 2, with the same developers, same users, and same community values. (Sometime I’ll write about my interest and contributions toward the Python 3 migration effort, with its own unique successes and challenges.) What grew out of the Perl 6 idea is a new community, a new group of developers, and even a new identity, “Rakudo” rather than Perl (with a phase of “Pugs” along the way). The core Perl developers still work on Perl 5, and have little or no interest in Rakudo. Some of the Rakudo developers have a background in Perl, but many of them have a background in PHP, Java, C#, or other languages.

Rakudo is not an “upgrade” from Perl. It’s revolutionary and exciting, just like Perl was in 1987, but it is not Perl. Please note that I’m not commenting on the similarity or difference of syntax between Perl and Rakudo. If you take a long view over the history of programming languages, syntax is about as relevant to the success of a language as the color of the bike shed. And if you really, really get down to the nuts and bolts, the syntax and functionality of Perl, Python, Ruby, PHP, and Lua are all fundamentally quite similar. That doesn’t make them the same language, and more importantly it doesn’t make them the same community.

So, we stepped into Perl 6 expecting the full power of the mighty Perl community pushing it forward. What we actually got is a tiny band of free-thinkers, re-imagining what “programming” should be like, and what “programmers” should be like. That’s not a bad thing. As new languages go, Rakudo is among the most exciting. But, it’s in that thinly-stretched startup mode where you only get to pick one of “quick, cheap, or good” and it’s optimizing for “good”. In the long-run, that focus will be crucial to Rakudo’s success.

Back to the impact on Perl. Ultimately, the wonderful distraction of Perl 6 has proven… well… distracting. What was once a very good thing for Perl is paradoxically now bad for Perl. I recently explained this to a friend as a story of two brothers, Perl and Rakudo Wall:


Perl Wall has finished his advanced graduate degree, and is out building
his career. He was hugely successful for a while, but lately something
strange has been happening. When he goes on interviews, for some reason
people keep pulling up his younger brother’s resume by mistake, and then
tell him “Sorry kid, you don’t have the experience for this job”. But
really, he’s perfect for the job, if only they’d look at *his* resume,
instead of looking at his brother’s.

Rakudo Wall is still a teenager, and marches to the beat of a different
drummer. He’s smart, but he does things his own way. Sometimes
he takes a little longer than the other kids, and sometimes he
leap-frogs past them with a brilliant insight even the teachers don’t
understand. People keep telling him that he should be just like his
older brother. But he’s not, and he doesn’t *want* to be exactly like
his brother. He wants to be himself. Someday, he’ll be awesome, even
outshine his brother. But he’ll get there in his own time, and his own way.

Right now, Perl and Rakudo are getting in each other’s way. They’re like conjoined twins, trying to live separate lives, but always anchored to their brother. That doesn’t mean I love Rakudo any less than I love Perl. I love them both, and want them both to succeed. But their paths are very, very different, and they each need the freedom to walk their own path. The way to grant that freedom is stunningly simple: accept that it is what it is, and let each go its own way, with its own chosen identity. Let Perl be Perl and let Rakudo be Rakudo.

I sincerely hope to see Perl 7 released quite soon. No fuss, no bother, no long list of “blocking features”. Just BAM! ship the next version of Perl (5) as Perl 7. And I sincerely hope the greatest success for Rakudo. I don’t even care if it takes another 13 years to release, it’ll be worth the wait.

The King is dead. Long live the King!

UDS-R Architecture Preview

The 13.04 “Raring Ringtail” release of Ubuntu falls at the mid-point between the 12.04 and 14.04 LTS (long-term support) releases. This is the time in a development cycle when the balance starts to tip from innovation toward consolidation, when conversations form around what pieces need to be in place today to ensure a solid “checkmate” two releases down the road.

With that context in mind, it’s no surprise that Ubuntu Foundations–the central core behind the many faces of Ubuntu–plays a starring role in this release, both in sessions here at the Ubuntu Developer Summit in Copenhagen, and in the upcoming 6 months of development work. Look for sessions on release roles and responsibilities, release planning including Edubuntu, Lubuntu, Xubuntu, and Kubuntu, archive maintenance and improvements to archive admin tools, reverting package regressions and immutable archive snapshots, cross-compilation, user access to UEFI setup and plans for secure boot, xz compression for packages, image creation tools for Flavors, auto-generated AppArmor profiles, PowerPC bootloaders, OAuth for Python 3, “prototype” archives for new hardware, Android ROMs, user experience in distro upgrades, build daemon resources, boot time on ARM, and installation tools on ARM. Also look for training sessions on the error (crash) tracker, Python 3 porting, and how to contribute to Upstart.

On the Cloud front, the big topics continue to center around OpenStack (integrating Grizzly, QA, packaging improvements), Juju (the Charm Store, Charm developer tools, contributor onramps, application servers like Ruby on Rails/Django, development process), and Ubuntu Cloud images (testing and roundtable). Meanwhile, the broader Ubuntu Server discussions range over Xen, LXC, libvirt, QEMU, Ceph, MySQL, Nginx, Node.js, MongoDB, Query2, bigdata filesystem support, and Power architecture virtualization.

The Client side is a harmonic chorus, with sessions on Ubuntu TV, mobile devices and installing Ubuntu on a Nexus 7, plus multiple sessions on Ubuntu as a gaming platform. Also look for the usual sorts of nuts and bolts that go into building a beautiful client experience, like accessibility, battery life, connectivity, config sync, choice of file managers, and consistent typography.

Don’t miss the Design Theatre on Wednesday, where all are welcome to participate and learn about design thinking, solving real-world design problems for apps submitted by the audience.

I can’t wait for tomorrow!

UDS-Q Architecture Preview

This week in Oakland is the Ubuntu Developer Summit, a time for Ubuntu Developers from around the world to gather and plan the next release, version 12.10 codenamed “Quantal Quetzal”.

I’ve shuffled and reshuffled the sessions several times, looking for the “governing dynamic”, the thematic structure that holds the Quetzal together. I’ve settled, appropriately, on “quantization”. In general terms, quantization is a process of separating a continuous stream into significant values or “quanta”, such as image pixels from the continuous colors of real life, or discrete atomic energy levels. The theme applies on multiple levels. First, there’s the process attendees are going through right now (in person or remotely), surfing the sea of sessions, determining how to divide their time for maximum value.

From a historical perspective, there was another UDS here in California not too long ago, where I recall the schedule was dominated by the desktop. We’re in a different world today, and what struck me reading through blueprints for Quantal is the segmentation of topics. Ubuntu has grown up, and while shipping a gorgeous desktop will always be important, other forms of hardware, both smaller and larger, have an equal (and sometimes greater) influence on Ubuntu’s direction into the future. How do you choose between cloud, metal, TV, and phones, when they’re all so interesting, and have so much potential as game-changers for Ubuntu (and Linux in general)? These different domains of use also lead to differentiation in design, development, and integration. Some significant quanta to watch are cloud, metal, TV, and phones.

And like an atom that retains its fundamental structure at multiple energy levels, Ubuntu is still Ubuntu, unified at the core as a distribution and as a community, even across multiple “product” targets. Since this is the first release after an LTS, there’s more room than usual to re-examine the core at a fundamental level, with an eye to where we want to be by the next LTS.

And those are only the highlights. 🙂 It’s going to be a great week, and a great cycle!

Open Source Enlightenment

(My thanks to Audrey Tang for this lyrical transcript of my talk at OSDC.tw, to Macpaul Lin for the video, and to Chia-liang Kao for proofreading the Chinese translations in my slides.)

Over the years, I’ve started thinking that participating in the open source community is like traveling on a path, toward becoming not only better programmers, but also becoming better people by working together.

You might think of it as a path toward enlightenment, growing ourselves as human beings. So what follows is really my personal philosophy which I’d like to share with you.

The first thing is this: The most important part of every open source project is the people. While code is important, the center is always the people.

There are different kinds of people involved in a project: People who code, who write documentation, who write tests. People who use your software, too, are just as important for a project.

Also there are people who work on the software that your project uses — you’re likely using projects from other people in the upstream, and you might want to send them a patch from time to time.

Or maybe you’re writing a library or a module, and so other people will be using your software, and communicating with you as their upstream as well.

So why do people work on open source software? This is a very important question to ask, in order to understand how open source works.

For people’s day jobs, they may be working with software already. And why would they take the extra effort to work on open source? Part of it is that it involves working on exciting things and new technologies.

Sharing is also a large part of it; as we share with each other, we increase the amount of fun for everyone working together on an open source project.

People also work on open source in a spirit of giving to others; in doing that we’re reaching out as human beings, and this is a very important part of being human.

There are many rewards, too. A big one is respect: As we create something new, draw people in, and share software with them that they can work on too, they recognize who you are and what you are capable of, which gives you a sense of accomplishment.

Conversely, it means that we want to make sure that we show respect to people joining our projects in any way we can, because it helps them to stay involved.

Another important aspect is appreciation; as people publish their work, talk with them. Even just a simple thank-you email message saying “this meant a lot to me” helps bring about a culture that keeps everybody motivated.

Credit is also important. As you are presenting a project, be sure to mention other people around you, saying “this person did such a wonderful thing”, so we can build a feeling of community together.

One of the things that keeps people interested in open source is that, as we work together, we become stronger and can do more.

Part of it is simple math: 2x the people make at least 2x the code, and 3x the people make 3x the code, although there is much more to it than that.

When we work together, we can make each other stronger and better — part of that is encouraging each other; as you see people working on a very difficult problem, you can encourage them saying “you are doing great, and I see you will do great in the future”.

You can empower people just by talking and sharing with them.

And then also there’s the fact that, when you have many people together, they’ll have different sets of skills. When you are working together, maybe you know the five things the project needs, and they know the other five things, and so you have the complete set of skills to finish the project, which wouldn’t be possible if either of you worked alone.

So the effect is not only a linear increase in productivity; there’s a multiplication effect when people start working together.

Encouraging each other to look beyond, to look into the future, is also important — We can all inspire others to solve interesting problems. Sometimes just saying “I have an idea” is enough for someone else to make it into reality.

Sometimes you’d look at what someone else is doing — you have not done all the work, but you have the critical idea they needed, and so with that idea they can reach out and go much further.

The key thing about working on open source is that we’re not just standing alone. When you are working with other people, the main thing you’d want to improve is your communication skills.

We communicate about the plans we have: How we want to make the software, personal plans such as a feature you want to work on, and so on.

One of the things I observed in open source communities is this: People often have good plans to create software, but they sometimes clash and fail to communicate with each other about plans. If you work on one plan alone, without communication, you may end up hurting people working on other plans.

So it’s like a hive of bees — a constant buzz keeps us all functioning.

We’ll also often communicate about possible futures: What’s the best way to solve a technical problem? When this happens, you may communicate in a way that’s contentious and angry, making it very hard to make actual progress.

One of the things we’re learning in our process is how to embrace all possibilities. Keep working on the possibility you’ve imagined, but remain fully open to other possibilities other people may have.

And as you make progress, you’ll also be communicating constantly about what you have done — There’s email, there’s twitter… there are many ways to let people know about your progress.

Sometimes we may feel shy, or not want to be seen as bragging. But that’s not what it is! It’s good for the project, and for the people as well, because they can learn from what you have done.

Another aspect of communication skills is the ability to ask questions. The advantage of having a community is that some people might have solved your problem before, and asking a question on a forum or IRC may save you days of work.

In the same way, when others are learning, you can be responsive to them too, instead of putting them down by answering “RTFM” to simple questions.

It’s true that answering “RTFM” may save you a bit of time, but it is also teaching that person that they shouldn’t ask those questions in the first place. That is not what you want to teach people at all — you want to teach them to communicate with others.

Also, learn how to give answers that are helpful to people, and help them see that they can walk down the path as well, and take the path further in the future.

Sometimes you do have to criticize people; we should be open to many ways of doing things, but sometimes one technical solution really is more correct than others. However, the best way to get people to change their ways is to answer them kindly, so they can be open to learning from you.

You have to show some grace, even to people who do not respond very well. Some people may be harsh with you, but this is also part of the path. Sometimes it helps to have a thicker skin, and even in situations when other people should have said things nicer and better, maybe there’s a bit of truth in what they are saying, and you can still learn from that.

From this perspective, even if they speak in a way that is not polite, you can still respond politely.

The other half of communication is not talking, but listening. Instead of telling others what we think, sometimes all that’s needed is just sitting very quietly, and letting others talk.

It’s not just listening, though — it’s important to have empathy. As the saying goes, “If you really want to understand someone, you have to walk a mile in their shoes” — perhaps so you can get the blisters they have experienced.

Now, some people think you have to be a genius to work on open source software, but that is simply not true. There are people like Larry and Guido and Linus, yes, but there are also so many different kinds of talents that any project needs, too.

And no matter how smart you are, it’s important to stay humble. Because with humility, you will be open to other people, and see new ways of doing things. Humility lets you welcome other people into your project. Pride, on the other hand, is essentially telling people “I don’t need you; I can do things my way.”

By being humble, we also welcome people of different genders and cultures, creating a richness in open source by opening ourselves to different kinds of people.

The diversity also appears between different projects; it’s almost like the languages and cultures of different countries. For example, the communities around Linux, Perl, Ruby, and Python all communicate and collaborate differently.

And by being humble with each other, maybe we can see that our project is not the only way, and maybe we can appreciate the ways of other communities.

Now, open source is not all about fun — it’s fun, of course, but it’s also a responsibility. When you agree to participate in a project, you’re taking a weight on your shoulders, and it’s a good thing, as it teaches us to improve ourselves and become better humans.

But life can get in the way — significant others, parents, children, jobs — we may accept responsibility for a time, but there may also be a day where we can’t carry so much responsibility anymore.

So there is a cycle, where you start by assuming more and more of a role in a community, and as life goes on, you gradually take on less and less responsibility. This is entirely natural, and it’s bound to happen in a project’s life cycle.

So it’s worth keeping this question in your mind: “Who will continue my work when I no longer have the time?”

To make sure other people can continue our work, we can think of it as a continuous process: Teaching and sharing the knowledge we’ve learned, and at the same time learning more and more from other people — a continuous process of gaining and sharing knowledge.

Finally, as you work on open source, please be happy, with a smile on your face, and make other people happy! Because this happiness is what gives us the power to make great things.

Do you feel happier now? 🙂

Tody Task Manager

Failing to find any free software task manager I could live with, I created my own over the December holidays. I called it “Tody”. It’s a simple GUI app, focused on quick searching, editing, and tagging for tasklists. The file format it uses is identical to the plain text format used by Gina Trapani’s Todo.txt command-line tool and Android app, it even loads preferences from the Todo.txt config file. Since the file format is plain text, tasklists can be shared between machines (or users) over Ubuntu One or Dropbox.

I created it using Rick Spencer’s Quickly templates (GTK, Glade, and Python). I went for a streamlined workflow for the way I use tasklists, so I’m curious whether it will map well to how others work. It appears as a simple text file, with a search box at the top of the window. Clicking on a tag performs a search for that tag (these are similar to Twitter hashtags: any word that starts with “@” or “+”). The list sorts tasks by priority (marked with “A”, “B”, “C”, etc) and then alphabetically. When the list is limited to search results, the search terms are highlighted in the tasks.
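
To make the sorting and tagging concrete, here’s a minimal sketch in Python (an illustration of the idea, not Tody’s actual code), assuming the standard Todo.txt convention of a leading “(A) ” through “(Z) ” priority marker:

import re

def sort_key(task):
    # Illustration only, not Tody's actual code.
    # A leading "(A) " through "(Z) " marks the priority (Todo.txt style);
    # tasks without a priority sort last ("~" comes after "Z" in ASCII).
    m = re.match(r'^\(([A-Z])\) ', task)
    priority = m.group(1) if m else '~'
    return (priority, task.lower())

def tags(task):
    # Tags are any whitespace-separated words starting with "@" or "+".
    return re.findall(r'(?:^|\s)([@+]\S+)', task)

tasks = [
    "(B) split up search terms for +tody",
    "call the bank",
    "(A) reply to @email about the release",
]
for task in sorted(tasks, key=sort_key):
    print(task, tags(task))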

Clicking on the text of a task brings up an editor window, with a checkbox for “Done” tasks, a field to edit the task, and clickable palettes for task priorities and all the tags you’ve used previously in your tasklist. It’s streamlined with shortcuts, so typing Space, Enter marks a task as done, saves it, and closes the editor window.

I’ve started using Tody as my primary task manager, after dumping all my old tasks from other task managers into one text file. I’d like to tweak the search feature: right now it does a completely literal string search, but I’ll change it to split up search terms (so it’s not sensitive to the order of terms). Then the next step is to link it up with my Todo Lens, so the edit window for Tody pops up as the action for clicking on a task in the Lens.
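
That search tweak is small enough to sketch; hypothetically, it could be as simple as requiring every whitespace-separated term to appear somewhere in the task, in any order:

def matches(task, query):
    # Hypothetical sketch of order-insensitive search, not Tody's actual code.
    # Every search term must appear somewhere in the task, case-insensitively,
    # so "release @email" and "@email release" find the same tasks.
    haystack = task.lower()
    return all(term in haystack for term in query.lower().split())

print(matches("(A) reply to @email about the release", "release @email"))  # True
print(matches("call the bank", "bank statement"))                          # False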

The Tody app is up on my PPA, let me know if you try it out and have any requests for features that fit your workflow:

https://launchpad.net/~allison/+archive/ppa