Engaging hidden influencers as active participants

Shared Intro: After the summit (#afterstack), a few of us compared notes and found a common theme in an underserved but critical part of the OpenStack community.  Sean Roberts, Allison Randal, and Rob Hirschfeld committed to expand our discussion to the broader community.  Instead of sharing a single post, we wanted to bring our individual perspectives to you and start a dialog. See Rob’s post and Sean’s post.

Historically, open source projects have focused on developers as individuals, applying a kind of social filter to their corporate affiliations. And historically, this was the right approach to take. Those of us who have been around a while have collected far too many old stories of companies that tried to control open source projects rather than beneficially participating. Downplaying the companies behind the developers was an effective way to channel their contributions through a sanity filter.

But, there’s another dimension that few open source projects acknowledge, which Rob Hirschfeld has aptly dubbed “hidden influencers”. I encountered it at Canonical, and now at HP: developers don’t operate in a vacuum, and their success or failure at contributing to open source projects depends heavily on the support (or lack of support) they receive from the management context they operate in. This is true for independent developers — whatever day job they have to pay the bills may not understand why they spend their nights and weekends on volunteer development. (Many of my employers have understood, and I have always felt lucky that they do.)

Support from management has an even greater impact on developers who work on open source as their primary job.

Some people might see that as a bad thing, but I don’t. I can tell you from experience at Canonical and HP that a manager or executive who understands open source (possibly even has experience doing open source development) is a powerful force for good in an open source project. Unlocking the power of those hidden influencers is a unique opportunity, and very few open source projects are effectively taking advantage of it.

As I’ve talked with managers at HP and other companies around OpenStack, it impresses me that the vast majority who have OpenStack core reviewers or PTLs on their team are actively enthusiastic about their developers’ work and success within the project. I saw a number of these managers at the Summit in Atlanta. But you probably didn’t see them or talk to them.

Are these managers hiding?  Not exactly.  They are all over:

  • Design Summit:  Those sessions are geared toward developers.
  • General session: Those sessions are geared toward users and operators.
  • Developer’s lounge
  • Hallways: This informal “track” is the main place where you’ll see managers interacting.

But there really isn’t any forum for managers to interact around the kinds of topics that fill their daily task lists.  So what would have been valuable in cross-company management communication?

  • Allocating developers’ time to deliver the features discussed.
  • Sharing workloads across developers in multiple companies, and avoiding duplicated efforts.
  • Balancing internal delivery commitments with external delivery commitments.
  • Growing new contributors, from existing employees and new hires.

I’m not talking about release management here; OpenStack already has a highly effective release management program in operation. What I’m mostly talking about is people management, though there’s also an element of business strategy around OpenStack.

I’d like to see OpenStack as a project more actively engaging with managers as participants, acknowledging their contributions and building collaboration structures for them. Done right, this could set a strong positive example for future open source projects and blaze a new path for guiding beneficial corporate contributions, above and beyond merely trying to ignore the companies involved. Let’s start by simply talking with managers of OpenStack contributors at a variety of companies, finding out where their pain points are and what kind of collaboration would be most beneficial to them.

Relativity, skepticism, and virtual worlds

Yesterday on Twitter I posted:

Has anyone applied Einstein’s theory of relativity to radical skepticism? i.e. spaces of reference in knowledge relative to each other.

Twitter is great for tossing out a quick idea, but on reflection, this one probably needs more explanation. To give credit where credit is due, the idea occurred to me while I was watching a Coursera lecture on Epistemology by Duncan Pritchard (University of Edinburgh) for an evening’s entertainment (what can I say, it’s more fun than most things on television). Though, ultimately, the idea is a distillation of several lines of thought that have been knocking around in my head for years now. (If you really, really want to get at the roots, it all goes back to Richard Jozsa, who was my professor in Quantum Computation at the University of Bristol before he moved on to Cambridge, and who set me on a whole new path of thinking.)

First, a bit of lightweight background, so everyone can follow along. In very rough terms, radical skepticism is a perspective on the fundamental nature of human knowledge, specifically that knowledge is impossible. Pritchard used the classic “brain in a vat” or “Matrix” illustration, which might be simply stated: If I were a brain in a vat, being fed fake experiences by a computer (far more advanced than any we currently have, but still, play along for the sake of argument), then everything I think I know about the universe would actually be fake. I wouldn’t really “know” anything. And the kicker is, there really isn’t any way to prove I’m not a brain in a vat, and therefore, at a very basic level there isn’t any way to prove that anything I know is true. (Apologies to academic philosophers who might happen to read this, I’m keeping the explanation as simple as possible.)

Now, relativity in physics (again, very much simplified) is a theory that examines motion within frames of reference. For an intuitive sense of what this means, go outside and throw a ball to a friend so they can catch it. Now, go ride on a train with the friend and throw a ball there (try not to hit the other passengers). The two situations have radically different external contexts: one is stationary on the sidewalk, the other is hurtling along the tracks. And yet, for you, the friend, and the ball, the motion is the same: if you throw the same way and catch the same way, the arc of the ball relative to the two of you would be the same. The train car forms a frame of reference, and you can meaningfully examine the laws of physics within that frame, while completely ignoring the motion outside the frame. And, of course, remember that even the “stationary” case is actually on a planet that’s spinning and hurtling through space around the sun, in a universe that’s also in constant motion.
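(If you’d like the train example in textbook notation, here’s a minimal sketch using standard Galilean relativity; the notation is mine, not Einstein’s. Say the train frame S′ moves at constant velocity v relative to the sidewalk frame S.)

    % Galilean transformation between the sidewalk frame S and the train frame S'
    \[ x' = x - vt, \qquad t' = t \]
    % The ball's position relative to you is identical in both frames:
    \[ x'_{\text{ball}} - x'_{\text{you}} = (x_{\text{ball}} - vt) - (x_{\text{you}} - vt) = x_{\text{ball}} - x_{\text{you}} \]
    % And since v is constant, accelerations (and so the laws of motion) are unchanged:
    \[ a' = \frac{d^2 x'}{dt^2} = \frac{d^2 x}{dt^2} = a \]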

(Digression: Years ago, I set out on a lark to memorize the entirety of an English translation of Einstein’s “Spezielle Relativitätstheorie” i.e. “The Theory of Special Relativity” (Doc. 71, Princeton Lectures). I still can’t recite the whole thing verbatim, but a funny thing happened along the way (as I took classes in quantum mechanics and astrophysics): at one point a lightbulb went off and I suddenly realized I wasn’t just memorizing anymore, I was beginning to understand what Einstein was talking about on a deep level.)

Okay, bringing it back around: the problem with radical skepticism is that if you declare that it’s impossible to know anything, then any study of the fundamental nature of human knowledge, philosophy, or ultimately our entire existence, is really rather meaningless. Simply saying “it doesn’t really matter if I’m a brain in a vat” is a shatteringly weak argument in the face of “everything you think you know is wrong”. And yet, the recorded history of humanity demonstrates that it’s possible to construct internally consistent systems of knowledge, and that there is value in exploring the nature of that knowledge. What if, instead of trying to wave away radical skepticism in a puff of smoke, we accepted it as a fundamental truth, while at the same time systematizing a study of knowledge within frames of reference, analogous to Cartesian frames in geometry or frames of motion in relativity?

So, whether I am a brain in a vat, or a living, breathing physical organism experiencing a physical environment, I exist within a “frame of reference” (a computer-constructed or physical world). Within that frame it’s meaningful to study the system of knowledge, while abstracting away from details outside the frame. It’s even meaningful to study the nature of knowledge across different frames of reference: for example, as either a brain in a vat or a physical organism, I might “know” that I have two hands, that one of them is grasping a cup of coffee, etc, etc. In either frame, I have a true belief, reached through the reasonable application of my cognitive abilities. My knowledge of my hand and my knowledge of the coffee cup are the same relative to each other, within both frames of reference. In a sense, we can say that the “laws of knowledge” are consistent across frames of reference.

Let’s take it one step further. When I inhabit a virtual world, whether it’s as vast and mutable as Minecraft, or as carefully controlled as a game in the Zelda series, I enter into a frame of reference of knowledge. I become a “brain in a vat” to that virtual world, because my dual existence as a physical organism in a physical world isn’t relevant within that frame. This is only becoming more true as technologies like Emotiv’s EPOC controller begin to make the brain-to-world connection more direct, and technologies like Oculus Rift begin to make the world-to-brain connection more direct. Within the frame of reference of a virtual world, I may have genuine knowledge that I have two hands, and that one of them is holding an object. As a slightly more abstract but still narrow example, I may have knowledge of completely different physical laws in two frames of reference (one may not have gravity, or may permit me to fall from 1,000 feet without injury), and yet both have an internally consistent set of physical laws, and my knowledge of those physical laws is absolutely essential to my ability to function as an entity within that frame of reference. And, I can meaningfully compare the nature of my knowledge of physical laws across the two frames of reference, even though the physical laws themselves are different. So frames of reference in knowledge aren’t merely an academic abstraction for the sake of reconciling two apparently conflicting approaches to philosophy. They’re also potentially a valuable tool in the study of the fundamental nature of human knowledge, in much the same way that relativity was to physics.

BTW, if anyone knows of research or published articles heading in this general direction of thought, I’d be very interested to hear/read more about it.

Mythbusters – Why I (still) Love Perl

At the very beginning, I should probably make it clear that this post is not a declaration of exclusivity in my relationship to Perl. I love programming languages. I first learned to program about the same time I first learned to read (English) and first studied French. My love for programming languages is very much akin (and I swear linked to the same part of my brain) to my love for human languages: they are all unique and beautiful in their own way. I love Python, I love C, I love Smalltalk, I love Erlang, etc, etc.

But Perl has taken an entirely undeserved beating in recent years, and so, in karmic balance, it deserves a round of outspoken championship, far more than other languages need right now. In pondering why Perl’s current reputation is so completely disconnected from the reality of the language, I’ve boiled it down to three “Big Bang Theory”-esque ideas: “The Cookie Slap! Effect”, “The Awkward Adolescence Fallacy”, and “The Singularity Paradox”.

The Cookie Slap! Effect

There’s a “Perl is Dead” meme floating around. It’s been around for a while, long enough that it’s picked up steam and plows along despite all evidence to the contrary. It’s so common, I won’t bother providing links, because you’ve seen more of them than I can count. It’s so common, I even heard it recently from a young MBA graduate, who was so entirely non-technical that he didn’t even know what HTML and CSS were, while working in sales for a web startup. But, he knew what Perl was, and he “knew” it was dead. Weird, but that’s how memes work. So, how did this meme start?

There’s a well-known gaming scenario often called “King of the Hill”, where one player has special status, and the goal of all the other players is to knock that player out of the privileged position and take it for themselves. There was a time when Perl owned the web. It was the duct tape that built the “Web 1.0 City”. There are a number of reasons why Perl succeeded so wildly, but most of them boil down to being in the right place, at the right time, with a fresh, dynamic take on what “programming” should be like, and what “programmers” should be like. There are many things that C excels at, but manipulating massive, variable-length strings is certainly not one of them. And, at the end of the day, no matter what abstractions you layer on it, Web development is essentially about dynamically building and pushing out very lengthy strings of HTML and CSS (and Javascript) to browsers that will decide how to render them. C sucked for that. Really sucked. Trust me, I’ve been-there-done-that.

So, Perl dropped into what was effectively an empty space, at a time when the demand for web services was sky-rocketing. Score! And established itself as the King of the Hill. Score! But then, as the dominant player, Perl also became “the one to beat”. Every upstart young programming language compared itself to Perl. “We’re better than Perl because…” And this is where the “Perl is dead” meme started. It was popular to put-down Perl, because Perl was “unbeatable”. Of course, it was never really unbeatable. The chances that any one language would continue to dominate the web are exceedingly tiny, effectively zero. The chances that any one language will ever again achieve the dominance Perl once had are equally tiny. Especially when you consider the fact that diversity is one of the single strongest cultural values of this miraculous, glorious, networked universe we now inhabit. A cultural value that it partially learned, BTW, from Perl’s TMTOWTDI (there’s more than one way to do it).

So, when you hear “Perl is dead”, remember this: the only reason the meme has strength is that Perl itself has strength. No one feels the need to loudly proclaim “Draco is dead”, because really, no one cares. And secondly, remember that at the root, the primary reason for declaring the untruthful “Perl is dead”, rather than the far simpler and truthful “Perl doesn’t have the domain dominance it once had”, is an insecurity about whatever language the speaker happens to love. Perl is an elder statesman in a closed system. The shocking truth about the Tiobe Index isn’t that Perl has drifted down over the years (languages wax and wane; just look at C), it’s that all of the languages in the top 20 are OLD in technology-years. It’s like looking at Forbes’ list of billionaires from year to year: they shift in position, but once someone’s on the list they have an advantage over all the players who aren’t on the list (massive capital to invest in one case, massive numbers of users and lines of working production code in the other), which means they’re likely to stay on the list.

And, if you’ve ever said “Perl is dead”, my advice is to learn a lesson of tolerance, “live and let live”. Perl won’t plow your favorite language under the carpet, it’s not a threat to you, but I also guarantee your favorite language won’t plow Perl under the carpet. The best you can hope for is to be accepted as a member of “The Fellowship of the Languages”, so grant other languages the same respect you’d like to receive from them.

The Awkward Adolescence Fallacy

Around Perl’s 13th birthday, which just happened to also be the fiery heart of the dot-com bust, the child-prodigy suffered from a massive anxiety attack. As the Web 1.0 world went down in flames, the Perl community was quite literally tearing itself apart. There were a variety of factors, layers of complexity as in any human conflict, but one of the key factors was the nagging doubt that creeps into anyone’s head when things aren’t going as well as you’d hoped: “Maybe it’s my fault. Maybe I didn’t deserve success.” That line of thinking is generally not true, or at least wildly exaggerated, and generally not helpful.

One result of this early-life crisis was the birth of the Perl 6 idea, but I’ll punt that to the next section. Another more subtle (but also more powerful) result, was a growing obsession with things not being “good enough”. The calm confidence of Perl’s youth was replaced by fears that the syntax wasn’t good enough, the implementation wasn’t good enough, the community wasn’t good enough, the foundation wasn’t good enough, the license wasn’t good enough…

To a certain extent fear is a healthy thing: it drives you to push harder, to conquer the thing that you fear. And Perl did. The catastrophic flame-wars of the late ’90s were put to rest. The foundation was restructured and strengthened, and is now one of the most professionally run open source foundations I participate in, with solid, steady funding to run projects that are hugely beneficial to Perl. The Artistic 2 license was introduced as an improvement, but even the existing Artistic 1 license proved itself in court, in a way that no other open source license ever has, and in a way that benefited the entire open source community. The syntax, implementation, and libraries of Perl 5 have improved substantially, to the point that working with “Modern Perl” is really a very different experience than the Perl of 10 or 15 years ago, while still retaining the characteristics that make it a joyful language to code in.

But the bad side of that fear was an awkward, shy hesitation. Like a gawky teenager, Perl stood at the side of the room, afraid to dance because people might think he looks funny. So, I’ve got a news flash for you: no language is perfect, no syntax is perfect, no implementation is perfect, no community is perfect, no foundation is perfect, no license is perfect, nothing is perfect. Perl wasn’t perfect when it owned the web. Perfection is not the first step to success, it’s not even a milestone on the path. And if you really want to understand how irrelevant perfection is, pick any random language you admire, that you see as the pinnacle of success, and closely inspect its syntax, implementation, community, foundation, and license. You’ll find it’s flawed. Why? Because we’re all human, and we all produce things that are deeply creative, deeply wonderful, and yes, somewhat flawed.

It’s time for Perl to grow up. It’s not a teenager anymore. It’s time to accept what it is, accept what it isn’t, and walk on. And what it is, is pretty outrageously amazing. I’ve recently had the opportunity to help a wildly successful startup, in a domain that sorely needs the advantages of modern tech. Perl is the right tool for the job. If I explained the problem space you’d agree, even if Perl isn’t your favorite language. Perl is the right tool for a lot of jobs, all over the world, right now, stable and reliable, in production, with massive numbers of lines of code.

The Singularity Paradox

When Perl 6 was announced, it had a wonderful effect on the Perl community. It provided an “event horizon” to focus everyone’s attention on Perl’s future. It was an inspiration for new creativity, and a distraction from the flamewars to help kill them off. But over time, some things happened that we didn’t expect at all. The most obvious is that it has taken rather longer than we anticipated. I remember a time when “6 months” was a completely reasonable project estimate for the Perl 6 production release. (Not “By Christmas”, but a real, project-planning 6 months, where I could map out what needed to happen each month.) 13 years later, that clearly didn’t happen. But the time factor is actually a side-effect of other things we didn’t anticipate. The single biggest thing we didn’t anticipate is that the “community rewrite of Perl” has, in fact, turned out to be a community fork. Perl 6 is not like Python 3, which really is a continuation of Python 2, with the same developers, same users, and same community values. (Sometime I’ll write about my interest and contributions toward the Python 3 migration effort, with its own unique successes and challenges.) What grew out of the Perl 6 idea is a new community, a new group of developers, and even a new identity, “Rakudo” rather than Perl (with a phase of “Pugs” along the way). The core Perl developers still work on Perl 5, and have little or no interest in Rakudo. Some of the Rakudo developers have a background in Perl, but many of them have a background in PHP, Java, C#, or other languages.

Rakudo is not an “upgrade” from Perl. It’s revolutionary and exciting, just like Perl was in 1987, but it is not Perl. Please note that I’m not commenting on the similarity or difference of syntax between Perl and Rakudo. If you take a long view over the history of programming languages, syntax is about as relevant to the success of a language as the color of the bike shed. And if you really, really get down to the nuts and bolts, the syntax and functionality of Perl, Python, Ruby, PHP, and Lua are all fundamentally quite similar. That doesn’t make them the same language, and more importantly it doesn’t make them the same community.

So, we stepped into Perl 6 expecting the full power of the mighty Perl community pushing it forward. What we actually got is a tiny band of free-thinkers, re-imagining what “programming” should be like, and what “programmers” should be like. That’s not a bad thing. As new languages go, Rakudo is among the most exciting. But, it’s in that thinly-stretched startup mode where you only get to pick one of “quick, cheap, or good” and it’s optimizing for “good”. In the long-run, that focus will be crucial to Rakudo’s success.

Back to the impact on Perl. Ultimately, the wonderful distraction of Perl 6 has proven… well… distracting. What was once a very good thing for Perl is paradoxically now bad for Perl. I recently explained this to a friend as a story of two brothers, Perl and Rakudo Wall:


Perl Wall has finished his advanced graduate degree, and is out building
his career. He was hugely successful for a while, but lately something
strange has been happening. When he goes on interviews, for some reason
people keep pulling up his younger brother’s resume by mistake, and then
tell him “Sorry kid, you don’t have the experience for this job”. But
really, he’s perfect for the job, if only they’d look at *his* resume,
instead of looking at his brother’s.

Rakudo Wall is still a teenager, and walks to the tune of a different
drummer. He’s smart, but he does things his own way. Sometimes
he takes a little longer than the other kids, and sometimes he
leap-frogs past them with a brilliant insight even the teachers don’t
understand. People keep telling him that he should be just like his
older brother. But he’s not, and he doesn’t *want* to be exactly like
his brother. He wants to be himself. Someday, he’ll be awesome, even
outshine his brother. But he’ll get there in his own time, and his own way.

Right now, Perl and Rakudo are getting in each other’s way. They’re like conjoined twins, trying to live separate lives, but always anchored to their brother. That doesn’t mean I love Rakudo any less than I love Perl. I love them both, and want them both to succeed. But their paths are very, very different, and they each need the freedom to walk their own path. The way to grant that freedom is stunningly simple: accept that it is what it is, and let each go its own way, with its own chosen identity. Let Perl be Perl and let Rakudo be Rakudo.

I sincerely hope to see Perl 7 released quite soon. No fuss, no bother, no long list of “blocking features”. Just BAM! ship the next version of Perl (5) as Perl 7. And I sincerely hope the greatest success for Rakudo. I don’t even care if it takes another 13 years to release, it’ll be worth the wait.

The King is dead. Long live the King!

UDS-R Architecture Preview

The 13.04 “Raring Ringtail” release of Ubuntu falls at the mid-point between the 12.04 and 14.04 LTS (long-term support) releases. This is the time in a development cycle when the balance starts to tip from innovation toward consolidation, when conversations form around what pieces need to be in place today to ensure a solid “checkmate” two releases down the road.

With that context in mind, it’s no surprise that Ubuntu Foundations–the central core behind the many faces of Ubuntu–plays a starring role in this release, both in sessions here at the Ubuntu Developer Summit in Copenhagen, and in the upcoming 6 months of development work. Look for sessions on release roles and responsibilities, release planning including Edubuntu, Lubuntu, Xubuntu, and Kubuntu, archive maintenance and improvements to archive admin tools, reverting package regressions and immutable archive snapshots, cross-compilation, user access to UEFI setup and plans for secure boot, xz compression for packages, image creation tools for Flavors, auto-generated apparmor profiles, PowerPC bootloaders, OAuth for Python 3, “prototype” archives for new hardware, Android ROMs, user experience in distro upgrades, build daemon resources, boot time on ARM, and installation tools on ARM. Also training sessions on the error (crash) tracker, Python 3 porting, and how to contribute to upstart.

On the Cloud front, the big topics continue to center around OpenStack (integrating Grizzly, QA, packaging improvements), Juju (the Charm Store, Charm developer tools, contributor onramps, application servers like Ruby on Rails/Django, development process), and Ubuntu Cloud images (testing and roundtable), while the broader Ubuntu Server discussions range over Xen, LXC, libvirt, QEMU, Ceph, MySQL, Nginx, Node.js, MongoDB, Query2, bigdata filesystem support, and Power architecture virtualization.

The Client side is a harmonic chorus, with sessions on Ubuntu TV, mobile devices and installing Ubuntu on a Nexus 7, plus multiple sessions on Ubuntu as a gaming platform. Also look for the usual sorts of nuts and bolts that go into building a beautiful client experience, like accessibility, battery life, connectivity, config sync, choice of file managers, and consistent typography.

Don’t miss the Design Theatre on Wednesday, where all are welcome to participate and learn about design thinking, solving real-world design problems for apps submitted by the audience.

I can’t wait for tomorrow!

UDS-Q Architecture Preview

This week in Oakland is the Ubuntu Developer Summit, a time for Ubuntu Developers from around the world to gather and plan the next release, version 12.10 codenamed “Quantal Quetzal”.

I’ve shuffled and reshuffled the sessions several times, looking for the “governing dynamic”, the thematic structure that holds the Quetzal together. I’ve settled, appropriately, on “quantization”. In general terms, quantization is a process of separating a continuous stream into significant values or “quanta”, such as image pixels from the continuous colors of real life, or discrete atomic energy levels. The theme applies on multiple levels. First, there’s the process attendees are going through right now (in person or remote), surfing the sea of sessions, determining how to divide their time for maximum value.

From a historical perspective, there was another UDS here in California not too long ago, where I recall the schedule was dominated by the desktop. We’re in a different world today, and what struck me reading through blueprints for Quantal is the segmentation of topics. Ubuntu has grown up, and while shipping a gorgeous desktop will always be important, other forms of hardware, both smaller and larger, have an equal (and sometimes greater) influence on Ubuntu’s direction into the future. How do you choose between cloud, metal, TV, and phones, when they’re all so interesting, and have so much potential as game-changers for Ubuntu (and Linux in general)? These different domains of use also lead to differentiation in design, development, and integration. Some significant quanta to watch are:

And like an atom that retains its fundamental structure at multiple energy levels, Ubuntu is still Ubuntu, unified at the core as a distribution and as a community, even across multiple “product” targets. Since this is the first release after an LTS, there’s more room than usual to re-examine the core at a fundamental level, with an eye to where we want to be by the next LTS.

And those are only the highlights. :) It’s going to be a great week, and a great cycle!

Open Source Enlightenment

(My thanks to Audrey Tang for this lyrical transcript of my talk at OSDC.tw, to Macpaul Lin for the video, and to Chia-liang Kao for proofreading the Chinese translations in my slides.)

Over the years, I’ve started thinking that participating in the open source community is like traveling on a path, toward becoming not only better programmers, but also becoming better people by working together.

You might think of it as a path toward enlightenment, growing ourselves as human beings. So what follows is really my personal philosophy which I’d like to share with you.

The first thing is this: The most important part of every open source project is the people. While code is important, the center is always the people.

There are different kinds of people involved in a project: People who code, who write documentation, who write tests. People who use your software, too, are just as important for a project.

Also there are people who work on the software that your project uses — you’re likely using projects from other people in the upstream, and you might want to send them a patch from time to time.

Or maybe you’re writing a library or a module, and so other people will be using your software, and communicating with you as their upstream as well.

So why do people work on open source software? This is a very important question to ask, in order to understand how open source works.

In their day jobs, people may be working with software already. So why would they take the extra effort to work on open source? Part of it is that it involves working on exciting things and new technologies.

Sharing is also a large part of it; as we share with each other, we increase the amount of fun for everyone working together on an open source project.

People also work on open source in a spirit of giving to others; in doing that we’re reaching out as human beings, and this is a very important part of being human.

There are many rewards, too. A big one is respect: as you create something new, draw people in, and share software with them that they can work on too, they recognize who you are and what you are capable of, which gives you a sense of accomplishment.

Conversely, it means that we want to make sure that we show respect to people joining our projects in any way we can, because it helps them to stay involved.

Another important aspect is appreciation; as people publish their work, talk with them. Even a simple thank-you email message saying “this meant a lot to me” helps bring about a culture that keeps everybody motivated.

Credit is also important. As you are presenting a project, be sure to mention other people around you, saying “this person did such a wonderful thing”, so we can build a feeling of community together.

One of the things that keeps people interested in open source is that, as we work together, we become stronger and can do more.

Part of it is simple math: 2x people makes at least 2x code, and 3x people makes 3x code, although there is much more to it than that.

When we work together, we can make each other stronger and better — part of that is encouraging each other; as you see people working on a very difficult problem, you can encourage them by saying “you are doing great, and I see you will do great in the future”.

You can empower people just by talking and sharing with them.

And then also there’s the fact that, when you have many people together, they’ll have different sets of skills. When you are working together, maybe you know the five things the project needs, and they know the other five things, and so you have the complete set of skills to finish the project, which wouldn’t be possible if either of you worked alone.

So the effect is not only a linear increase in productivity; there’s a multiplication effect when people start working together.

Encouraging each other to look beyond, to look into the future, is also important — We can all inspire others to solve interesting problems. Sometimes just saying “I have an idea” is enough for someone else to make it into reality.

Sometimes you’d look at what someone else is doing — you have not done all the work, but you have the critical idea they needed, and so with that idea they can reach out and go much further.

The key thing about working on open source is that we’re not just standing alone. When you are working with other people, the main thing you’d want to improve is your communication skills.

We communicate about the plans we have: How we want to make the software, personal plans such as a feature you want to work on, and so on.

One of the things I observed in open source communities is this: People often have good plans to create software, but they sometimes clash and fail to communicate with each other about plans. If you work on one plan alone, without communication, you may end up hurting people working on other plans.

So it’s like a hive of bees — a constant buzz keeps us all functioning.

We’ll also often communicate about possible futures: What’s the best way to solve a technical problem? When this happens, you may communicate in a way that’s contentious and angry, making it very hard to make actual progress.

One of the things we’re learning in our process is how to embrace all possibilities. Keep working on the possibility you’ve imagined, but remain fully open to other possibilities other people may have.

And as you make progress, you’ll also be communicating constantly about what you have done — there’s email, there’s Twitter… there are many ways to let people know about your progress.

Sometimes we may feel shy, or not want to be seen as bragging. But that’s not what it is! It’s good for the project, and for the people as well, because they can learn from what you have done.

Another aspect of communication skills is the ability to ask questions. The advantage of having a community is that some people might have solved your problem before, and asking a question on a forum or IRC may save you days of work.

In the same way, when others are learning, you can be responsive to them too, instead of putting them down by answering simple questions with “RTFM”.

It’s true that answering “RTFM” may save you a bit of time, but it is also teaching that person that they shouldn’t ask those questions in the first place. That is not what you want to teach people at all — you want to teach them to communicate with others.

Also, learn how to give answers that are helpful to people, and help them see that they can walk down the path as well, and take it further in the future.

Sometimes you do have to criticize people; we should be open to many ways of doing things, but sometimes one technical solution really is more correct than others. However, the best way to get people to change their ways is to answer them kindly, so they can be open to learning from you.

You have to show some grace, even to people who do not respond very well. Some people may be harsh with you, but this is also part of the path. Sometimes it helps to have a thicker skin, and even in situations when other people should have said things nicer and better, maybe there’s a bit of truth in what they are saying, and you can still learn from that.

From this perspective, even if they speak in a way that is not polite, you can still respond politely.

The other half of communication is not talking, but listening. Instead of telling others what we think, sometimes all that’s needed is just sitting very quietly and letting others talk.

It’s not just listening, though — it’s important to have empathy. As the saying goes, “If you really want to understand someone, you have to walk a mile in their shoes” — perhaps so you can get the blisters they have experienced.

Now, some people think you have to be a genius to work on open source software, but that is simply not true. There are people like Larry and Guido and Linus, yes, but there are also so many different kinds of talent that any project needs.

And no matter how smart you are, it’s important to stay humble. Because with humility, you will be open to other people, and see new ways of doing things. Humility lets you welcome other people into your project. Pride, on the other hand, is essentially telling people “I don’t need you; I can do things my way.”

By being humble, we also welcome people of different genders and cultures, creating a richness in open source by opening up to different kinds of people.

The diversity also appears between different projects; it’s almost like the languages and cultures of different countries. For example, the communities around Linux, Perl, Ruby, and Python all communicate and collaborate differently.

And by being humble with each other, maybe we can see that our project is not the only way, and maybe we can appreciate the ways of other communities.

Now, open source is not all about fun — it’s fun, of course, but it’s also a responsibility. When you agree to participate in a project, you’re taking a weight on your shoulders, and it’s a good thing, as it teaches us to improve ourselves and become better humans.

But life can get in the way — significant others, parents, children, jobs — we may accept responsibility for a time, but there may also be a day where we can’t carry so much responsibility anymore.

So there is a cycle, where you start by assuming more and more of a role in a community, and as life goes on, you gradually take on less and less responsibility. This is entirely natural, and it’s bound to happen in a project’s life cycle.

So it’s worth keeping this question in your mind: “Who will continue my work when I no longer have the time?”

To make sure other people can continue our work, we can think of it as a continuous process: Teaching and sharing the knowledge we’ve learned, and at the same time learning more and more from other people — a continuous process of gaining and sharing knowledge.

Finally, as you work on open source, please be happy, with a smile on your face, and make other people happy! Because this happiness is what gives us the power to make great things.

Do you feel happier now? :-)

Tody Task Manager

Failing to find any free software task manager I could live with, I created my own over the December holidays. I called it “Tody”. It’s a simple GUI app, focused on quick searching, editing, and tagging for tasklists. The file format it uses is identical to the plain text format used by Gina Trapani’s Todo.txt command-line tool and Android app, it even loads preferences from the Todo.txt config file. Since the file format is plain text, tasklists can be shared between machines (or users) over Ubuntu One or Dropbox.

I created it using Rick Spencer’s Quickly templates (GTK, Glade, and Python). I went for a streamlined workflow for the way I use tasklists, so I’m curious whether it will map well to how others work. It appears as a simple text file, with a search box at the top of the window. Clicking on a tag performs a search for that tag (tags are similar to Twitter tags: any word that starts with “@” or “+”). The list sorts tasks by priority (marked with “A”, “B”, “C”, etc) and then alphabetically. When the list is limited to search results, the search terms are highlighted in the tasks.
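For the curious, the tag and priority handling boils down to something like the following sketch (illustrative Python, not Tody’s actual code, though it follows the same Todo.txt conventions: a “(A)”-style priority prefix, and words starting with “@” or “+” treated as tags):

    import re

    PRIORITY_RE = re.compile(r'^\(([A-Z])\)\s+')  # e.g. "(A) send draft to editor"
    TAG_RE = re.compile(r'(?<!\S)([@+]\w+)')      # words starting with "@" or "+"

    def parse_task(line):
        """Split one Todo.txt-style line into priority, tags, and text."""
        match = PRIORITY_RE.match(line)
        priority = match.group(1) if match else None
        text = PRIORITY_RE.sub('', line).strip()
        return {'priority': priority, 'tags': TAG_RE.findall(text), 'text': text}

    def sort_key(task):
        # Prioritized tasks first (A before B before C...), then alphabetically.
        return (task['priority'] is None, task['priority'] or '', task['text'].lower())

    sample = [
        '(B) water the plants @home',
        'buy milk @errands',
        '(A) send draft to editor +book',
    ]
    for task in sorted((parse_task(line) for line in sample), key=sort_key):
        print(task['priority'], task['text'], task['tags'])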

Clicking on the text of a task brings up an editor window, with a checkbox for “Done” tasks, a field to edit the task, and clickable palettes for task priorities and all the tags you’ve used previously in your tasklist. It’s streamlined with shortcuts, so typing Space, Enter marks a task as done, saves it, and closes the editor window.

I’ve started using Tody as my primary task manager, after dumping all my old tasks from other task managers into one text file. I’d like to tweak the search feature: right now it does a completely literal string search, but I’ll change it to split up search terms (so it’s not sensitive to the order of terms). Then the next step is to link it up with my Todo Lens, so the edit window for Tody pops up as the action for clicking on a task in the Lens.
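Roughly, the order-insensitive matching I have in mind looks something like this (a sketch, not what’s in the code today):

    def matches(task_text, query):
        """Every search term must appear somewhere in the task, in any order."""
        text = task_text.lower()
        return all(term in text for term in query.lower().split())

    # e.g. matches('(A) send draft to editor +book', 'editor draft') is True,
    # while a literal substring search for 'editor draft' would miss it.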

The Tody app is up on my PPA, let me know if you try it out and have any requests for features that fit your workflow:

https://launchpad.net/~allison/+archive/ppa