User Acquisition: Cycle Time Matters

This is an extension to my original three post series on user acquisition.

Over the past few months I’ve been fortunate enough to give over a dozen talks at various events and companies about user acquisition, virality and mobile distribution.  One of the best parts of the experience is that, without fail, every talk yields a new set of questions and insights that help me learn and refine my own thinking on distribution & growth.

One of the most common questions I get is around the difference between my definition of “viral factor” and the semi-standard definition of “K Factor” that has been floating around for a few years.

What’s a K Factor?

Wikipedia offers a fairly concise definition of a K factor, a term borrowed from epidemiology.

i = number of invites sent by each customer
c = percent conversion of each invite
k = i * c
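Since the K factor is just the product of two funnel numbers, it takes only a line of arithmetic to compute. A quick sketch with hypothetical values (the invite count and conversion rate below are made up for illustration):

```python
# Hypothetical funnel numbers, purely for illustration
invites_per_customer = 5     # i: invites sent by each customer
invite_conversion = 0.12     # c: fraction of invites that convert

k = invites_per_customer * invite_conversion  # k = i * c
# k = 0.6 here: below 1.0, so growth from invites alone decays
```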

As the wikipedia article explains:

This usage is borrowed from the medical field of epidemiology in which a virus having a k-factor of 1 is in a “steady” state of neither growth nor decline, while a k-factor greater than 1 indicates exponential growth and a k-factor less than 1 indicates exponential decline. The k-factor in this context is itself a product of the rates of distribution and infection for an app (or virus). “Distribution” measures how many people, on average, a host will make contact with while still infectious and “infection” measures how likely a person is, on average, to also become infected after contact with a viral host.

What’s a Z Factor?

This blog post from Mixpanel in 2009 does a great job of walking through the standard definition of Z factor.  Hat tip to Dave McClure for his slide, which is included in the post.

Based on this framework, the Z factor is literally the percentage of users who accept a viral invitation that they receive.

The Problem with K & Z Factors

I met with a startup that proudly told me they had measured the viral factor of their new service, and that it was over 2.  My first question, of course, was:

“over what time period?”

In my blog post on viral factor basics, I define a viral factor as follows:

“Given that I get a new customer today, how many new customers will they bring in over the next N days?”

The key to understanding viral math is to remember a basic truth about rabbits.  Rabbits don’t have a lot of rabbits  because they have big litters.  Rabbits have a lot of rabbits because they breed frequently.

You’ll notice that, unlike the other popularized definitions, I focus on a new variable, “N”, the number of days it takes for your viral cycle to complete.  I do this for a simple reason: cycle time matters.   The path to success is typically the combination of a high branching factor combined with a fast cycle time. If you don’t think deeply about the channels you are using for viral distribution, you risk prioritizing the wrong features.
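To see why cycle time dominates, here is a minimal simulation (the 1,000-user seed and 0.8 viral factor are made-up numbers) comparing a 7-day cycle to a 1-day cycle over the same four weeks:

```python
def cumulative_users(seed, viral_factor, cycle_days, horizon_days):
    """Total users acquired by the horizon: the seed cohort plus
    each completed viral cycle's new cohort."""
    total, cohort = seed, seed
    for _ in range(horizon_days // cycle_days):
        cohort *= viral_factor   # each cohort recruits the next
        total += cohort
    return total

# Same viral factor, different cycle times, same 28-day window
slow = cumulative_users(1000, 0.8, cycle_days=7, horizon_days=28)
fast = cumulative_users(1000, 0.8, cycle_days=1, horizon_days=28)
# slow is roughly 3,362 users; fast is roughly 4,992
```

The faster cycle completes most of its compounding in the first week; the slower one is still months away from its asymptote.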

How Do You Pick the Right Cycle Time?

Once a growth team digs into the numbers, they quickly realize that there is no one “cycle time”.  So what number do you pick for analysis?

There is no right answer, but in general you tend to find a breakpoint in the data, a point by which the vast majority of viral events that are ever going to complete have completed.  For example, maybe with a viral email you’d see most responses happen in 24 hours, with 90% of total responses happening within 3 days.  If that’s the case, picking 3 days might be the right cycle time for your feature.  Once you pick a cycle time, the conversion rate gets built into your projections.
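One way to sketch that analysis, assuming you have logged the delay between each viral send and its completed response (the delay values below are invented):

```python
import math

def pick_cycle_time(delays_hours, coverage=0.9):
    """Smallest whole number of days that captures `coverage`
    of all viral responses that ever completed."""
    ordered = sorted(delays_hours)
    idx = math.ceil(len(ordered) * coverage) - 1  # index covering 90%
    return math.ceil(ordered[idx] / 24)

# Hypothetical response delays, in hours, for one viral email feature
delays = [2, 5, 8, 12, 20, 26, 30, 44, 60, 200]
# pick_cycle_time(delays) -> 3, since 90% of responses land within 3 days
```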

Cycle Time Matters

If you are already focused on the new user experience, distribution and virality, well then kudos to you and your team.  Too many consumer products to this day spend too little time focused on these problems.

But if you want to see clear, demonstrable progress from your growth team, make sure you include cycle time in your thinking about what viral features will be most effective for your product.

Now go out and make a lot of rabbits.

The Game Has Changed. Design for Passion.

One of the most exciting developments in software has been a resurgence in the focus and priority on design.  With the growing dominance of social platforms and mobile applications, more and more people are growing comfortable productively discussing and utilizing insights about human emotion in their work.

Google: The Era of Utility

The progress of the last five to seven years is really a significant breakout from the previous generations of software design.

For decades, software engineers and designers focused on utility:  value, productivity, speed, features or cost.

If it could be quantified, we optimized it.  But at a higher level, with few exceptions, we framed every problem around utility.  Even the field of human-computer interaction was obsessed with “ease of use.”  Very linear, with clear ranking.  How many clicks? How long does a task take?  What is the error rate?

In some ways, Google (circa 2005) represented the peak of this definition of progress.  Massive data.  Massive scalability. Incredible utility.  Every decision defined by quantifying and maximizing utility by various names.

But let’s face it, only computer scientists can really get passionate about the world’s biggest database.

Social: The Era of Emotion

Like any ecosystem, consumer technology is massively competitive.  Can you be faster, cheaper, bigger or more useful than Google?  It turns out, there is a more interesting question.

Social networks helped bring the language of emotion into software.  A focus on people starts with highly quantifiable attributes, but moves quickly into action and engagement.

What do people like? What do they hate? What do they love? What do they want?

In parallel, there have been several developments reflecting similar insights: on the web, in behavioral finance, and in the explosion of interest in game mechanics.

Human beings are not rational, but (to borrow from Dan Ariely) they are predictably irrational.  And now, thanks to scaling social platforms to over a billion people, we have literally petabytes of data to help us understand their behavior.

Passion Matters

Once you accept that you are designing and selling a product for humans, it seems obvious that passion matters.

We don’t evaluate the food we eat based on metrics (although we’d likely be healthier if we did).  Do I want it? Do I love it? How does it make me feel?

The PayPal mafia often joke that great social software triggers at least one of the seven deadly sins. (For the record, LinkedIn has two: vanity & greed).  Human beings haven’t changed that much in the past few thousand years, and the truth is the seven deadly sins are just a proxy for a deeper insight.  We are still driven by strong emotions & desires.

In my reflection on Steve Jobs, he talks about Apple making products that people “lust” for.  Not “the best products”, “the cheapest products”, “the most useful products” or “the easiest to use products.”

Metrics oriented product managers, engineers & designers quickly discover that designs that trigger passion outperform those based on utility by wide margins.

The Game Has Changed

One of the reasons a number of earlier web giants are struggling to compete now is that the game has changed.  Utility, as measured by functionality, time spent, and ease of use, is still important, but it is no longer sufficient to be competitive. Today, you also have to build products that trigger real emotion.  Products that people will like, will want, will love.

Mobile has greatly accelerated this change.  Smartphones are personal devices.  We touch them, they buzz for us. We keep them within three feet of us at all times.

Too often in product & design we focus on utility instead of passion.  To break out today, you need to move your efforts to the next level.  The questions you need to ask yourself are softer:

  • How do I feel when I use this?
  • Do I want that feeling again?
  • What powerful emotions surround this product?

Go beyond utility.  Design for passion.

User Acquisition: Mobile Applications and the Mobile Web

This is the third post in a three post series on user acquisition.

In the first two posts in this series, we covered the basics of the five sources of traffic to a web-based product and the fundamentals of viral factors.  This final post covers applying these insights to the current edge of product innovation: mobile applications and the mobile web.

Bar Fight: Native Apps vs. Mobile Web

For the last few years, the debate between building native applications vs. mobile web sites has raged.  (In Silicon Valley, bar fights break out over things like this.) Developers love the web as a platform.  As a community, we have spent the last fifteen years on standards, technologies, environments and processes to produce great web-based software.  A vast majority of developers don’t want to go back to the days of desktop application development.

Makes you wonder why we have more than a million native applications out there across platforms.

Native Apps Work

If you are religious about the web as a platform, the most upsetting thing about native applications is that they work.  The fact is, in almost every case, the product manager who pushes to launch a native application is rewarded with metrics that go up and to the right.  As long as that fact is true, we’re going to continue to see a growing number of native applications.

But why do they work?

There are actually quite a few aspects to the native application ecosystem that make it explosively more effective than the desktop application ecosystem of the 1990s.  Covering them all would be a blog post in itself.  But in the context of user acquisition, I’ll posit a dominant, simple insight:

Native applications generate organic traffic, at scale.

Yes, I know this sounds like a contradiction.  In my first blog post on the five sources of traffic, I wrote:

The problem with organic traffic is that no one really knows how to generate more of it.  Put a product manager in charge of “moving organic traffic up” and you’ll see the fear in their eyes.

That was true… until recently.  On the web, no one knows how to grow organic traffic in an effective, measurable way.  However, launch a native application, and suddenly you start seeing a large number of organic visits.  Organic traffic is often the most engaged traffic.  Organic traffic has strong intent.  On the web, they typed in your domain for a reason.  They want you to give them something to do.  They are open to suggestions.  They care about your service enough to engage voluntarily.  It’s not completely apples-to-apples, but from a metrics standpoint, the usage you get when someone taps your application icon behaves like organic traffic.

Giving a great product designer organic traffic on tap is like giving a hamster a little pedal that delivers pure bliss.  And the metrics don’t lie.

Revenge of the Web: Viral Distribution

OK. So despite fifteen years of innovation, we as a greater web community failed to deliver a mechanism that reliably generates the most engaged and valuable source of traffic to an application.  No need to despair and pack up quite yet, because the web community has delivered on something equally (if not more) valuable.

Viral distribution favors the web.

Web pages can be optimized across all screens – desktop, tablet, phone.  When there are viral loops that include the television, you can bet the web will work there too.

We describe content using URLs, and universally, when you open a URL it goes to the web.  We know how to carry metadata in links, allowing experiences to be optimized based on the content, the mechanism by which it was shared, who shared it, and who received it.  We can multivariate test it in ways that border on the supernatural.

To be honest, after years of conversations with different mobile platform providers, I’m still somewhat shocked that in 2012 the user experience for designing a seamless way for URLs to appropriately resolve to either the web or a native application is as poor as it is.  (Ironically, Apple solved this issue in 2007 for YouTube and Google Maps, and yet for some reason has failed to open up that registry of domains to the developer community.)  Facebook is taking the best crack at solving this problem today, but it’s limited to their channel.

The simple truth is that the people out there that you need to grow do not have your application.  They have the web.  That’s how you’re going to reach them at scale.

Focus on Experience, Not Technology

In the last blog post on viral factors, I pointed out that growth is based on features that let a user of your product reach out and connect with a non-user.

In the mobile world of 2012, that may largely look like highly engaged organic users (app) pushing content out that leads to a mobile web experience (links).

As a product designer, you need to think carefully about the end-to-end experience across your native application and the mobile web.  Most likely, a potential user’s first experience with your product or service will be a transactional web page, delivered through a viral channel.  They may open that URL on a desktop computer, a tablet, or a phone.  That will be your opportunity to convert them into an engaged user, in many cases by encouraging them to download your native application.

You need to design a delightful and optimized experience across that entire flow if you want to see maximized self-distribution of your product and service.

Think carefully about how Instagram exploded in such a short time period, and you can see the power of even just one optimized experience that cuts across a native application and a web-based vector.

Now go build a billion dollar company.

User Acquisition: Viral Factor Basics

This is the second post in a three post series on user acquisition.

In the first post in this series, we covered the basics of the five sources of traffic to a web-based product.  This next post covers one of the most important, albeit trendy, aspects of user acquisition: virality.


It’s About Users Touching Non-Users

Look at your product and ask yourself a simple question: which features actually let a user of your product reach out and connect with a non-user?   The answer might surprise you.

At LinkedIn, we did this simple evaluation and discovered that out of thousands of features on the site, only about a half-dozen would actually let a user create content that would reach a non-user. (In fact, only a couple of these were used in high volume.)

I continue to be surprised at how many sites and applications are launched without having given careful thought to this exact problem.  Virality cannot easily be grafted onto a service – outsized results tend to be reserved for products that design it into the core of the experience.

Useful questions to ask, from a product & design perspective:

  • How can a user create content that reaches another user?
  • How does a user’s experience get better the more people they are connected to on it?
  • How does a user benefit from reaching out to a non-user?

Understanding Viral Factors

One of the most useful types of metrics to come out of the last five years of social software is the viral factor.  Popularized by the boom of development on the Facebook platform in 2007, a viral factor is a number, typically between 0.0 and 1.0.  It describes a basic business problem that affects literally every business in the world:

“Given that I get a new customer today, how many new customers will they bring in over the next N days?”

“N” is a placeholder for a cycle time that makes sense for your business.  Some companies literally track this in hours, others 3 days, or even 30.  Let’s assume for now that 7 is a good number, since it tells you, given a new customer today, how many new customers they will bring in over the next week.

Basic Viral Math

The good news is, once you identify the specific product flows that allow users to reach non-users, it’s fairly easy to instrument and calculate a viral factor for a feature or even a site.  But what does the number really mean?

Let’s assume a viral factor of 0.5, and an N of 7.  If I get a new user today, then my user acquisition will look like this over the next few weeks:

1 + 0.5 + 0.25 + 0.125 ….

It’s an infinite series that adds up to 2.  By getting a new user, the virality of this feature will generate a second user over time.

Two obvious epiphanies here:

  • A viral factor is a multiplier for existing sources of user acquisition.  0.5 is a 2x, 0.66 is a 3x, etc.
  • Anything below 0.5 looks like a percentage multiplier at best.
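The multiplier falls straight out of the geometric series: for a viral factor v below 1.0, each directly-acquired user eventually yields 1/(1-v) total users. A quick sketch:

```python
def acquisition_multiplier(v):
    """Eventual users generated per directly-acquired user:
    the series 1 + v + v**2 + ..., which converges to 1/(1-v) for v < 1."""
    assert 0 <= v < 1, "only converges for viral factors below 1.0"
    return 1 / (1 - v)

# 0.5 doubles every acquired user; 0.66 roughly triples them;
# 0.25 yields only about a 1.33x lift, a modest percentage multiplier
```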

What about a viral factor of 1.1?

One of the memes that started to circulate broadly in 2008 was getting your viral factor to “1.1”.  This was just a proxy for saying that your product or service would explode.  If you do the math, you can easily see that any viral factor of 1.0 or higher will lead to exponential growth, resulting in quickly having every human on the planet on your service.

I don’t want to get into a Warp 10 debate, but products can in fact have viral factors above 1.0 for short periods of time, particularly when coming off a small base.

Learning from Rabbits

The key to understanding viral math is to remember a basic truth about rabbits.  Rabbits don’t have a lot of rabbits  because they have big litters.  Rabbits have a lot of rabbits because they breed frequently.

When trying to “spread” to other users, most developers just focus on branching factor – how many people they can get invited into their new system.  However, cycle time can be much more important than branching factor.

Think of a basic exponential equation: X to the Y power.

  • X is the branching factor: how many new people you spread to in each cycle.
  • Y is the number of cycles you can execute in a given time period.

If you have a cycle that spreads to 10 people, but takes 7 days to replicate, in 4 weeks you’ll have something that looks like 10^4.  However, if you have a cycle that takes a day to replicate, even with a branching factor of 3 you’ll have 3^28.  Which would you rather have?
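The arithmetic, counting only completed cycles in a 28-day window and ignoring decay:

```python
# Reach scales as branching_factor ** completed_cycles
wide_slow = 10 ** (28 // 7)    # big litters, slow breeding: 10,000
narrow_fast = 3 ** (28 // 1)   # small litters, fast breeding: ~2.3e13
# The small-litter, fast-cycle rabbit wins by nine orders of magnitude
```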

In real life, there is decay of different viral messages.  Branching factors can drop below 1.  The path to success is typically the combination of a high branching factor combined with a fast cycle time.

As per the last blog post, different platforms and traffic channels have different engagement patterns and implicit cycle times.  The fact that people check email and social feeds multiple times per day makes them excellent vectors for viral messages.  Unfortunately, the channels with the fastest cycle times also tend to have the fastest decay rates.  Fast cycle times plus temporary viral factors above 1 are how sites and features explode out of nowhere.

Executing on Product Virality

To design virality into your product, there is really a three-step process:

  1. Clearly articulate and design out the features where members can touch non-members.  Wireframes and flows are sufficient.  Personally, I also recommend producing a simple mathematical model with some assumptions at each conversion point to sanity check that your product will produce a strong viral factor, layered over other traffic sources (the multiplier).
  2. Instrument those flows with the detailed metrics necessary for each step of the viral cycle to match your model.
  3. Develop, release, measure, iterate.  You may hit success your first time, but it’s not unusual to have to iterate 6-8 times to really get a strong viral factor under the best of conditions.  This is the place where the length of your product cycles matter.  Release an iteration every 2 days, and you might have success in 2 weeks.  Take 3-4 weeks per iteration, and it could be half a year before you nail your cycle.  Speed matters.
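The “simple mathematical model” in step 1 can be as small as a product of assumed conversion rates at each step of the flow. Every number below is an assumption to sanity-check against your instrumentation, not a benchmark:

```python
# Assumed conversion rates for one viral flow, end to end
shares_per_new_user = 4.0   # invitations or shares created per new user
view_rate = 0.50            # recipients who actually see the message
click_rate = 0.30           # viewers who click through to the site
signup_rate = 0.40          # visitors who become new users

viral_factor = shares_per_new_user * view_rate * click_rate * signup_rate
# 4.0 * 0.5 * 0.3 * 0.4 = 0.24 under these assumptions
```

A model like this tells you before launch which conversion point has the most leverage when you start iterating.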

You don’t need hundreds of viral features to succeed.  In fact, most great social products only have a few that matter.

What about mobile?

Now that we’ve covered the five scalable sources of web traffic and the basics of viral factors, we’ll conclude next week with an analysis of what this framework implies for driving distribution for mobile web sites vs. native applications.

User Acquisition: The Five Sources of Traffic

This is the first post in a three post series on user acquisition.

The topic of this blog post may seem simplistic to those of you who have been in the trenches, working hard to grow visits and visitors to your site or application.  As basic as it sounds, however, it’s always surprising to me how valuable it is to think critically about exactly how people will discover your product.

In fact, it’s really quite simple.  There are only five ways that people will visit your site on the web.

The Five Sources of Traffic

With all due apologies to Michael Porter, knowing the five sources of traffic to your site will likely be more important to your survival than the traditional five forces.  They are:

  1. Organic
  2. Email
  3. Search (SEO)
  4. Ads / Partnerships (SEM)
  5. Social (Feeds)

That’s it.  If someone found your site, you can bet it happened in one of those five ways.

The fact that there are so few ways for traffic to reach your site at scale is both terrifying and exhilarating.  It’s terrifying because it makes you realize how few bullets there really are in your gun.  It’s exhilarating, however, because it can focus a small team on exactly which battles they need to win to win the war.

Organic Traffic

Organic traffic is generally the most valuable type of traffic you can acquire.  It is defined as visits that come straight to your site, with full intent.  Literally, people have bookmarked you or typed your domain into their browser.  That full intent comes through in almost every product metric.  They do more, click more, buy more, visit more, etc.  This traffic has the fewest dependencies on other sites or services.

The problem with organic traffic is that no one really knows how to generate more of it.  Put a product manager in charge of “moving organic traffic up” and you’ll see the fear in their eyes.  The truth is, organic traffic is a mix of brand, exposure, repetition, and precious space in the very limited space called “top of mind”.  I love word of mouth, and it’s amazing when it happens, but Don Draper has been convincing people that he knows how to generate it for half a century.

(I will note that native mobile applications have changed this dynamic, but will leave the detail for the third post in this series.)

Email Traffic

Everyone complains about the flood of email, but unfortunately, it seems unlikely to get better anytime soon.  Why?  Because it works.

One of the most scalable ways for traffic to find your site is through email.  Please note, I’m not talking about direct marketing emails.  I’m referring to product emails, email built into the interaction of a site.  A great example is the original “You’ve been outbid!” email that brought (and still brings) millions back to the eBay site every day.

Email scales, and it’s inherently personal in its best form.  It’s asynchronous, it can support rich content, and it can be rapidly A/B tested and optimized across an amazing number of dimensions.  The best product emails get excellent conversion rates; in fact, the social web has led to the discovery that person-to-person communication gets conversion rates over 10x higher than traditional product emails.  The Year In Review email at LinkedIn actually received clickthroughs so high, it was better described as clicks-per-email!

The problem with email traffic generally is that it’s highly transactional, so converting that visit into something more than a one-action stop is a significant challenge. However, because you control the user experience at the origination of the visit, you have a lot of opportunity to make it great.

Search Traffic

The realization that natural search can drive traffic to a website dates back to the 90s.  However, it really has been in the past decade in the shadow of Google that search engine optimization scaled to its massive current footprint.

Search clearly scales.  The problem really is that everyone figured this out a long time ago.  First, that means that you are competing with trillions of web pages across billions of queries.  You need to have unique, valuable content measured in the millions of pages to reach scale.  SEO has become a product and technical discipline all its own. Second, the platform you are optimizing for (Google, Microsoft) is unstable, as they are constantly in an arms race with the thousands of businesses trying to hijack that traffic. (I’m not even going to get into their own conflicts of interest.)

Search is big, and when you hit it, it will put an inflection point in your curve.  But there is rarely any such thing as “low hanging fruit” in this domain.

Advertising (SEM)

The fourth source of traffic is paid traffic, most commonly now ads purchased on Google or Facebook.  Companies spend billions every year on these ads, and those dollars drive billions of visits.  When I left eBay, they were spending nearly $250M a year on search advertising, so you can’t say it doesn’t scale.

The problem with advertising is really around two key economic negatives.  The first is cash flow.  In most cases, you’ll be forced to pay for your ads long before you realize the economic gains on your site.  Take something cash flow negative and scale it, and you will have problems.  The second is unproven economics.  Most sites conjure a “lifetime value of a user” long before they have definitive proof of that value, let alone evidence that users acquired through advertising will behave the same way. It’s a hyper-competitive market, armed with weapons of mass destruction.  A dangerous cocktail, indeed.
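The cash flow problem is easy to see in a toy model. The acquisition cost and revenue numbers below are invented for illustration:

```python
# Invented unit economics for a paid-acquisition channel
cac = 2.00                       # cost per acquired user, paid on day 0
monthly_revenue_per_user = 0.25  # realized slowly afterwards

def cumulative_cash_per_user(months):
    """Cash position for one acquired user, `months` after acquisition."""
    return -cac + monthly_revenue_per_user * months

# Each user is underwater for 8 months; scaling acquisition
# multiplies the financing gap before it multiplies the gains
```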

While ads are generally the wrong way to source traffic for a modern social service, there are exceptions when the economics are solid and a certain volume of traffic is needed in a short time span to catalyze a network effect.  Zynga exemplified this thinking best when it used Facebook ads to turbocharge adoption and virality of their earlier games like FarmVille.

Social Traffic

The newest source of scalable traffic, social platforms like Facebook, LinkedIn and Twitter can be a great way to reach users.  Each platform is different in content expectations, clickthrough and intent, but there is no question that social platforms are massively valuable as potential sources of traffic.

Social feeds have a number of elements in common with email, when done properly.  However, there are two key differences that make social still very difficult for most product teams to effectively use at scale.  The first is permission.  On social platforms, your application is always speaking through a user.  As a result, their intent, their voice, and their identity on the platform are incredibly important.  Unlike email, scaling social feed interactions means hitting a mixture of emotion and timing.  The second issue is one of conversion.  With email, you control an incredible number of variables: content, timing, frequency.  You also have relatively strong metrics around open rates, conversion, etc.  With social feeds, the dynamics around timing and graph density really matter, and in general it always feels harder to control.

The Power of Five

Eventually, at scale, your site will likely need to leverage all of the above traffic sources to hit its potential.  However, in the beginning, it’s often a thoughtful, deep success with just one of these that will represent your first inflection point.

The key to exponential, scalable distribution across these sources of traffic is often linked to virality, which is why that will be the topic of my next post.

Product Leaders: User Acquisition Series

I can be pedantic about user acquisition.  The truth is that consumer web and mobile applications are under increasing pressure to demonstrate explosive exponential traction.  Building a great product is no longer sufficient, lest you be left with the best product in the world that no one has discovered.

As an engineer and designer by training, I didn’t always put this level of focus on traffic acquisition.  It wasn’t until we tried to build an entirely new site under the eBay brand (eBay Express) that I was forced to focus our team’s efforts on one large fundamental challenge: traffic acquisition.

Those struggles, some successful (and some not) led me to appreciate how profoundly the social web changed the metrics of distribution.  When we founded the growth team at LinkedIn in 2008, we were able to structure our thinking around user acquisition, measure it, and bend the curve significantly for the site. 

A special thanks to Reid Hoffman and Elliot Shmukler, who both contributed significantly to my thinking on the subject.

History is Written by the Victors

History is written by the victors, and on the consumer web, victory is often defined by market distribution.  Growth does not just happen, it has to be designed into your product and service.

The following posts attempt to capture some of the fundamentals that I’ve personally found useful to structure thinking around social user acquisition, and extend those concepts from the web to mobile applications:

Remember, Product Leaders win games.  Now let’s get started.

Review: Quicken 2007 for Mac OS X Lion

This is going to be a short post, but given the attention and page views that my posts on Quicken 2007 received, I thought this update worthwhile.

Previous Posts

Quicken 2007 for Mac OS X Lion Arrives

Last week, Intuit announced the availability of an anachronism: Quicken 2007 for Mac OS X Lion.  It sounds odd at first, given that we should really be talking about Quicken 2013 right about now, but it’s not a misprint.  This is Quicken 2007, magically enabled to actually load and run on Mac OS X Lion.  It’s like Intuit cloned a Wooly Mammoth, and put it in the New York Zoo.

The good news is that the software works as advertised.  I have a huge file, with data going back to 1994.  However, not only did it operate on the file seamlessly, the speed improvement over running it on a Mac Mini running Mac OS X Snow Leopard is significant.  Granted, my 8-core iMac likely explains that difference (and more), but the end result is the same.  Quicken.  Fast.  Functional.  Finally.

There are small bugs.  For example, some dialogs seem to have lost the ability to resize, or columns cannot be modified.  But these are very small issues.

Where is it, anyway?

If you go to the Intuit website, you’ll have a very hard time finding this product:

  • It’s not listed on the homepage
  • It’s not listed on the products page
  • It’s not listed on the page for Quicken for Mac
  • It’s not listed in the customer support documents (to my knowledge)
  • It doesn’t come up in site search

However, if you want to pay $14.95 for this little piece of magic (and given the comments on my previous posts, quite a few people will), then you can find it here:

Goodbye, Mac Mini

I have it on good authority that Intuit is working on adding the relevant & required investment functionality to Quicken Essentials for Mac to make it a true personal finance solution.  There is a lot of energy on the Intuit consumer team these days thanks to the infusion of the Mint.com team, and I’m optimistic that we’ll see a true, fully featured personal finance client based on the Cocoa-native Quicken Essentials eventually.

Top 10 Product Leadership Lessons

On Sunday, I was fortunate enough to give a talk at the 9th annual Harvard Business School Entrepreneurship Conference.  I’m trying to be better about posting the slides from these talks as they happen.

Context & Caveats

This talk is based substantially on a lecture I gave at LinkedIn on August 31, 2011.  It focuses heavily on the unique product, strategy and organizational issues that you currently see in fast-moving, hyper-growth, consumer-focused software companies.

At the same time, many of the higher level business and management issues discussed are fairly universal, so hopefully there is something useful here for anyone who is passionate about building organizations that build great products.

So take a look, and I look forward to the comments.  FWIW, the Optimus Prime quotes are from this excellent list of Optimus Prime quotes for the workplace.

Be A Great Product Leader

Great Product Leaders Win Games

Being a great product leader is hard. Every organization and process is different, and in many cases you are responsible for the outcome without having the authority to enforce decisions. My recent blog post on Being a Great Product Leader was an attempt to capture the specifics of how to lead a great, cross-functional software team.

To scale a great team, however, you need more than just a list of roles and responsibilities. How you onboard new talent is as important for the long term health of your team as how you identify and hire them in the first place.

The Trials of Being a New Coach

When a sports team gets a new coach, there is some authority that comes with the role. You can immediately set standards for behavior & strategy – how the team is going to practice, what plays the team is going to run. That authority, however, tends to be short-lived. Before you know it, the team begins to focus on one thing: are we winning games?

Joining a new team as a product manager has the same dynamic. At most of the companies I’ve been a part of, there is this false sense of security that comes from process and organization. Sure, if you are technically fulfilling the role and responsibilities of a product manager, there is a certain amount of respect and authority initially. However, in the long term, teams want to win games, and in software that means products that people are proud of and products that move the needle.

So is there a pattern of behavior for new product managers that ensures long term success? I’ll argue yes, and for my new hires I boil it down to three phases: two weeks, two months, and two quarters.

Two Weeks

The first two weeks of a product manager are critical, because this is the window where a new leader can establish the most important aspect of the role: what game are we playing, and how do we keep score.

As a result, the first things I lay out for a new product manager are:

  • The company culture and organizational philosophy of the team. Why the company matters. Product/engineering partnership. Results oriented performance.
  • The current strategic frame for how their product fits into the overall strategy of the company.
  • The current metrics and milestones for the product they are taking over.
  • A set of frameworks for the roles & responsibilities of product managers. These include posts on being a great product leader, product prioritization, finding heat in design, etc.

In the first two weeks, a new product manager is expected to:

  • Thoroughly challenge and finalize the strategic frame for the area. Does the existing frame make sense, or is there a better game to be playing?
  • Thoroughly understand the existing product metrics, and identify new or different metrics needed to properly assess the success of the area (max: 3)
  • Reprioritize all existing and future ideas & concepts based on the above, a.k.a. the product roadmap.

In addition, the first two weeks are the time when a new product manager can physically sit down and meet all the other key product and engineering leaders in overlapping areas, both to gain context for their product and, more importantly, to establish communication channels with other key leaders across the company. Great product managers very often serve as efficient people routers, and knowing who to talk to is often as important as knowing what to do.

Two Months

Like medicine, theoretical knowledge will only get you so far as a product manager. At some point, you learn by doing. A team will tolerate theoretical discussion for a short while, but in the end, a new product manager needs to get their hands dirty.

Two months is too short a time to significantly move the needle, but it is enough time to run through a few release cycles. In the first two months, it’s crucial for a product manager to actually be responsible for something released to users. In addition, the first two months is the typical time frame for a new product manager to flesh out the “best idea” from the team on how to win.

Two months is enough time to:

  • Identify key outstanding bugs or minor feature fixes that matter.
  • Lead the design / specification of solutions to those issues, and see them go live.
  • Write their first product specification for a larger, more significant milestone for their area. This should be their highest priority project to “move the needle” as they’ve defined it for the team.

The first two months are crucial because they not only help the new team execute together and coalesce, but also let the new product manager put a stake in the ground on what the product’s next big evolution will be. By leading the effort to place that bet, a product manager sets the team up for the type of success that hopefully will provide long term momentum for that product team.

Two Quarters

Six months is the window to get a cross-functional team into the positive, reinforcing cycle of ongoing success. At this point, the team has released both small and large features, and has meaningfully “moved the needle.”

This doesn’t mean, by the way, that the product manager led the launch of a single, monolithic all-or-nothing feature. In fact, what it most likely means is that the team launched a combination of iterative efforts to test out their theories and push through changes that in the aggregate validated the strategy and prioritization that had been put in place.

Great Product Leaders Win Games

Once teams have victories under their belt, in hyper-growth companies they gain both the desire to win again, and the confidence to execute on that desire. Creating that momentum is one of the hardest, and yet most valuable elements of cross-functional leadership.

This pattern has proven reliably consistent in my own product leadership efforts, as well as in the long term success of the product managers I’ve hired and mentored.

In some ways, it’s really simple: great teams like winning, and great leaders reliably lead teams to great victories.

Now go out and win games.

Pinterest & LinkedIn: Identity of Taste vs. Expertise

It’s hard to go three feet in Silicon Valley these days without someone commenting on the phenomenal engagement and growth being seen from Pinterest and other curation-based social platforms.  What’s a bit surprising to me, however, is how many people refer to this demand as a growing interest and search for “expertise”.

As I have a passion for finding a more human understanding for what drives engagement in real life and then mapping it to online behavior, I think the use of the term “expertise” here is misleading.  Instead, I believe what we are seeing is an explosion of activity around an incredibly powerful form of identity and reputation: the identity of taste.

Expertise is Empirical

If you go to LinkedIn, you see a site that is rich with the identity of expertise.  LinkedIn has rich structured data around sources of expertise: degrees, schools, companies, titles, patents, published content, skills.  They also have rich sources of unstructured content about job responsibilities, specialties, questions & answers, group participation, status updates and comments.  There are even implicit indications of expertise related to other online identities (like Twitter) and relationships to other people with expertise (connections).

This expertise can be tapped by using LinkedIn’s incredibly powerful search engine, either on site or via API, or by browsing the talent graph displayed in catalog form on LinkedIn Skills.  Github has created a powerful identity for developers based on their actual interests and contributions in code.  Blogs, Tumblr, Quora and Twitter have helped people create identities based on the content they create and share.

The power of identity based on expertise is that it is concretely demonstrated.  Education, experience, content and relationships are all very structured and concrete methods for measuring and assessing expertise.  However, in some ways, expertise is limited by its literal nature.  Factual. Demonstrable. Empirical.

Taste is Inspiring

Pinterest, however, has unlocked an incredibly powerful form of reputation and identity that exists in the offline world – an identity of taste.  People don’t care about the expertise of people who are assembling pinboards.  They care about how those combinations make them feel – the concept, the aggregation, the flow of additions.  The pinboard graph begins for most people with their friends, but people quickly learn to hop from source to source, to people they don’t know, finding beautiful, interesting, intriguing or inspiring collections of images.

This isn’t an identity based on expertise, really.  It’s not even clear how closely related it is to a graph of interests. Curation-based social platforms evoke a different phenomenon, and with it, some very powerful emotions and social behaviors.

Taste is different than expertise.  Taste does not imply that you are a good person or a deep well of expertise on the domain.  Taste is not universal, although there are certainly those with a predilection for influencing and/or predicting the changes in taste for many.  But when we as human beings find people whose taste inspires us, it’s a powerful relationship.  We map positive attributes to them, ranging from kindness to intelligence to even authority.  Fame & taste are often intertwined.

You Are What You Curate

Curation-based social platforms are based on the interaction of three key factors:

  1. A rich, visual identity and reputation based on curated content
  2. An asymmetric graph based on not only following people, but specific feeds of curated content
  3. A rich, visual activity stream of curation activity

It’s the first item that I seem to see most under-appreciated.  Vanity, as one of the most common deadly sins in social software, drives an incredible amount of engagement and activity.  As people are inspired by those who create beautiful identities of curated content, they also become keenly aware of how their curated identity looks.  When people signal an appreciation for their taste, it triggers powerful social impulses, likely built up at an early age.

This, more than anything else, explains the major step function in engagement of this generation of curation platforms over previous attempts (anyone remember Amazon Lists?).

How Does Taste Factor into Your Experience?

I always like to translate these insights into actionable questions for product designers.  In this case, these are some good starting points:

  • How does taste factor into your experience?
  • Is the identity in your product better served by reputation based on taste or expertise?
  • Are the relationships in your product between users based on taste or expertise?
  • Are you creating an identity visually and emotionally powerful enough to trigger curation activity?
  • Are you flowing curation activity through your experience in a way that stimulates discovery and the creation of an identity of taste?

Don’t underestimate the power of good taste.

Be a Great Product Leader

People who know me professionally know that I’m passionate about Product Management.  I truly believe that, done properly, a strong product leader acts as a force multiplier that can help a cross-functional team of great technologists and designers do their best work.

Unfortunately, the job description of a product manager tends to either be overly vague (you are responsible for the product) or overly specific (you write product specifications).  Neither, as it turns out, is effective in helping people become great product managers.

I’ve spent a lot of time trying to figure out a way to communicate the value of a product manager in a way that both transparently tells cross-functional partners what they should expect (or demand) from their product leaders, and also communicates to new product managers what the actual expectations of their job are.  Over the years, I reduced that communication to just three sets of responsibilities: Strategy, Prioritization & Execution.

Responsibility #1: Product Strategy

They teach entire courses on strategy at top tier business schools.  I doubt, however, that you’ll hear Product Strategy discussed in this way in any of them.

Quite simply, it’s the product manager’s job to articulate two simple things:

  • What game are we playing?
  • How do we keep score?

Do these two things right, and all of a sudden a collection of brilliant individual contributors with talents in engineering, operations, quality, design and marketing will start running in the same direction.  Without it, no amount of prioritization or execution management will save you.  Building great software requires a variety of talents, and key innovative ideas can come from anywhere.  Clearly describing the game you’re playing and the metrics you use to judge success allows the team, independent of the product manager, to sort through different ideas and decide which ones are worth acting on.

Clearly defining what game you are playing includes your vision for the product, the value you provide your customer, and your differentiated advantage over competitors.  More importantly, however, is that it clearly articulates the way that your team is going to win in the market.  Assuming you pick your metrics appropriately, everyone on the team should have a clear idea of what winning means.

You should be able to ask any product manager who has been on the job for two weeks these two questions, and get not just a crisp answer, but a compelling one.

The result: aligned effort, better motivation, innovative ideas, and products that move the needle.

Responsibility #2: Prioritization

Once the team knows what game they are playing and how to keep score, it tends to make prioritization much easier.  This is the second set of responsibilities for a product manager – ensuring that their initial work on their strategy and metrics is carried through to the phasing of projects / features to work on.

At any company with great talent, there will be a surplus of good ideas.  This actually doesn’t get better with scale, because as you add more people to a company they tend to bring even more ideas about what is and isn’t possible.  As a result, brutal prioritization is a fact of life.

The question isn’t what is the best list of ideas you can come up with for the business – the question is what are the next three things the team is going to execute on and nail.

Phasing is a crucial part of any entrepreneurial endeavor – most products and companies fail not for lack of great ideas, but based on mistaking which ones are critical to execute on first, and which can wait until later.

Personally, I don’t believe linear prioritization is effective in the long term.  I’ve written a separate post on product prioritization called The Three Buckets that explains the process that I advocate.

You should be able to ask any product manager who has been on the job for two weeks for a prioritized list of the projects their team is working on, with a clear rationale for prioritization that the entire team understands and supports.

Responsibility #3: Execution

Product managers, in practice, actually do hundreds of different things.

In the end, product managers ship, and that means that product managers cover whatever gaps in the process that need to be covered.  Sometimes they author content.  Sometimes they cover holes in design.  Sometimes they are QA.  Sometimes they do PR.  Anything that needs to be done to make the product successful they do, within the limits of human capability.

However, there are parts of execution that are massively important to the team, and without them, execution becomes extremely inefficient:

  • Product specification – the necessary level of detail to ensure clarity about what the team is building.
  • Edge case decisions – very often, unexpected and complicated edge cases come up.  Typically, the product manager is on the hook to quickly triage those decisions for potential ramifications to other parts of the product.
  • Project management – there are always expectations for time / benefit trade-offs with any feature.  A lot of these calls end up being forced during a production cycle, and the product manager has to be a couple steps ahead of potential issues to ensure that the final product strikes the right balance of time to market and success in the market.
  • Analytics – in the end, the team largely depends on the product manager to have run the numbers, and have the detail on what pieces of the feature are critical to hitting the goals for the feature.  They also expect the product manager to have a deep understanding of the performance of existing features (and competitor features), if any.

Make Things Happen

In the end, great product managers make things happen.  Reliably, and without fail, you can always tell when you’ve added a great product manager to a team versus a mediocre one, because very quickly things start happening.  Bug fixes and feature fixes start shipping.  Crisp analysis of the data appears.  Projects are re-prioritized.  And within short order, the key numbers start moving up and to the right.

Be a great product leader.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Final Solution: Quicken 2007 & Mac OS X Lion

In July I wrote a blog post about a proposed solution for running Quicken 2007 with Mac OS X Lion (10.7).

Unfortunately, that solution didn’t actually work for me.  A few weeks ago, I made the leap to Lion, and experimented with a number of different solutions on how to successfully run Quicken 2007.  I finally came up with one that works incredibly well for me, so I thought I’d share it here for the small number of people out there who can’t imagine life without Quicken for Mac.  (BTW If you read the comments on that first blog post, you’ll see I’m not alone.)

Failure: Snow Leopard on VMware Fusion 4.0

There are quite a few blog posts and discussion boards on the web that explain how to hack VMware Fusion to run Mac OS X 10.6 Snow Leopard.  Unfortunately, I found that none of them were stable over time.

While you can hack some of the configuration files within the virtual image package to “trick” the machine into loading Mac OS X 10.6, it ends up resetting almost every time you quit the virtual machine.  I was hoping that VMware Fusion 4.0 would remove this limitation, since Apple now allows virtualization of Mac OS X 10.7, but apparently they are still enforcing the ban on virtualizing Snow Leopard.  (Personally, I believe VMware should have made this check easy to disable, so that expert users could “take the licensing risk” while not offending Apple.  But I digress.)

You can virtualize Snow Leopard Server, but if you try to buy a used copy on eBay, it’s still almost $200.00.  Add the $75.00 for VMware Fusion, and all of a sudden you have a very expensive solution.  Worse, VM performance is surprisingly bad for a Mac running on top of a Mac.  In the end, I gave up on this path.

Enter the Headless Mac Mini

For the longest time, you couldn’t actually run a Mac as a headless server.  By headless, I mean without a display.  It used to be that if you tried to boot a Mac without a display plugged in, it would stop in the middle of the boot process.

I’m happy to report that you can, in fact, now run a Mac Mini headless.

Here is what I did:

  • I commandeered a 2007-era Mac Mini from my grandmother. (It’s not as bad as it sounds – I upgraded her to a new iMac in the process.)
  • I did a clean install of Mac OS Snow Leopard 10.6, and then applied all updates to get to a clean 10.6.8
  • I installed Quicken 2007, and applied the R2 & R3 updates
  • I configured the machine to support file sharing and screen sharing, turned off the 802.11 network and Bluetooth, and set it to wake from sleep via Ethernet.  I also configured it to auto-reboot if there is a power outage or crash.
  • I then plugged it in to just power & gigabit ethernet, hiding it cleverly under my Apple Airport Extreme Base Station.  It’s exactly the same size, so it now just looks like I have a fatter base station.

I call the machine “Quicken Mac”, and it lives on my network.  Anytime I want to run Quicken 2007, I just use screen sharing from Lion to connect to “Quicken-Mac.local”, and I’m up and running.   Once connected on screen sharing, I configured the display preferences of the mac to 1650×1080, giving me a large window to run Quicken.
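For reference, most of that configuration can also be done from Terminal instead of System Preferences.  This is a rough sketch only, assuming a Snow Leopard install and the “Quicken-Mac.local” hostname above; the launchd plist path and the Bluetooth preference key are what I’d expect on 10.6, so verify them on your own machine:

```shell
# On the Mac Mini (Snow Leopard), in a local Terminal session:

# Enable Screen Sharing so the headless machine can be controlled remotely
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist

# Wake on network access, and restart automatically after a power failure or freeze
sudo systemsetup -setwakeonnetworkaccess on
sudo systemsetup -setrestartpowerfailure on
sudo systemsetup -setrestartfreeze on

# Turn off Bluetooth (takes effect after a reboot)
sudo defaults write /Library/Preferences/com.apple.Bluetooth ControllerPowerState -int 0

# Then, from the Lion machine, connect via Screen Sharing:
open vnc://Quicken-Mac.local
```

File sharing itself is easiest to leave as a checkbox in the Sharing preference pane.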

I keep my actual Quicken file on my Mac OS X Lion machine, so it’s backed up with Time Machine, etc.  Quicken Mac just mounts my document folder directly so it can access the file.

Quicken: End Game

This solution may seem like quite a bit of effort, but the truth is after the initial setup, everything has worked without a hitch.  I’m hoping that once Intuit upgrades Quicken Essentials for the Mac to handle investments properly, I’ll be able to sell the Mac Mini on eBay, making it effectively a low cost solution.

For the time being, this solution works.  Mac OS X 10.7 Lion & Quicken 2007.  It can be done.


Bug in iPhoto 11 with iCal Import for Calendars

This is one of those simple blog posts where I write about a frustrating problem, and how I worked around it.

The Culprit

iPhoto 11 and its Calendar feature.

The Issue

When you try to import iCal dates into a Calendar, it frustratingly deletes events if they “collide” on the same date.

Example

Let’s say you have two iCal calendars, one for your family’s birthdays and events, and one for your friends’ birthdays and events.  Let’s also say that your brother was born on April 11th, and a friend was born on April 11th.

When you import both iCal calendars into iPhoto, only one of the birthday events will show up.  This does not happen if both birthdays are in the same calendar – only if they are in two different calendars.

What’s worse is that this also affects the native support for holidays.  So any friends or family born on July 4th are definitely out of luck.

Solution / Workaround

It’s not perfect, but here is my solution:

  1. Uncheck the holidays checkbox on the calendar import.  This gets you one “clean” calendar import that won’t hit the bug.
  2. Go to iCal and export each of the calendars that you want to add to your iPhoto calendar.
  3. In iCal, create a new calendar called “2012 iPhoto Calendar” or something like that.
  4. In iCal, import each of the calendars you exported, in the order you want them to appear.  Add them to the new “2012 iPhoto Calendar” calendar.
  5. Once you are done, quit iPhoto.  It only detects iCal changes at launch.
  6. Launch iPhoto
  7. Import the new iCal calendar “2012 iPhoto Calendar”.  All your dates will appear, in the order you combined them.
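If you’re comfortable in Terminal, steps 2 through 4 can also be approximated by merging the exported .ics files directly, then importing the single merged file in step 7.  A rough sketch – the filenames and sample events here are hypothetical stand-ins for your actual exports:

```shell
# Two tiny sample exports (stand-ins for the calendars exported in step 2)
printf 'BEGIN:VCALENDAR\r\nBEGIN:VEVENT\r\nSUMMARY:Brother birthday\r\nDTSTART;VALUE=DATE:20120411\r\nEND:VEVENT\r\nEND:VCALENDAR\r\n' > family.ics
printf 'BEGIN:VCALENDAR\r\nBEGIN:VEVENT\r\nSUMMARY:Friend birthday\r\nDTSTART;VALUE=DATE:20120411\r\nEND:VEVENT\r\nEND:VCALENDAR\r\n' > friends.ics

# Copy every BEGIN:VEVENT ... END:VEVENT block into one combined calendar,
# in the order the files are listed
{
  printf 'BEGIN:VCALENDAR\r\nVERSION:2.0\r\nX-WR-CALNAME:2012 iPhoto Calendar\r\n'
  awk '/BEGIN:VEVENT/{e=1} e{print} /END:VEVENT/{e=0}' family.ics friends.ics
  printf 'END:VCALENDAR\r\n'
} > merged.ics
```

The result is one calendar file containing every event, which sidesteps the manual export/import shuffle in iCal.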

Hope this helps someone out there.  For my rather elaborate family calendar efforts (which involve five separate family calendars of birthdays, anniversaries, and key dates), this was an essential fix.

Proposed Solution: Quicken 2007 & Mac OS X Lion

Right away, you should know something about me.  I am a die-hard Quicken user.  I’ve been using Quicken on the Mac since 1994, which happens to be the point in time where I decided that controlling my personal finances was fundamentally important.  In fact, one of my most popular blog posts is about how to hack in and fix a rather arcane (but common) issue with Quicken 2007.

So it pains me to write this blog post, because the situation with Quicken for the Mac has become extremely dire.  Intuit has really backed themselves into a corner, and not surprisingly, Apple has no interest in bailing them out.  However, since I love the Mac, and I love Quicken, I’m desperately looking for a way out of this problem.

Problem: Mac OS X Lion (10.7) is imminent

Yesterday, I got this email from Intuit:

It links to this blog post on the Intuit site.  The options are not pretty:

  1. You can switch to Quicken Essentials for Mac.  It’s a great new application written from the ground up.  In their words, “this option is ideal if you do not track investment transactions and history, use online bill pay or rely on specific reports that might not be present in Quicken Essentials for Mac.” Um, sorry, who in their right mind doesn’t want to track “investment transactions”?  Turns out, at tax time, knowing the details of what you bought, at what price, and when are kind of important.  At least, the IRS thinks so.  And they can put you in jail and take everything you own.  So I’m going with them on this one.  No dice.
  2. You can switch to Mint.  I love Mint, and I’ve been using it for years.  But once again, “This option is ideal if maintaining your transaction history is not important to you.”  Yeesh.  For me, Mint is something I use in addition to Quicken.  Unfortunately, Mint is basically blind to anything it can’t integrate with online.  Which includes my 401k, for example.
  3. You can switch to Quicken for Windows.  Seriously? 1999 called and they want their advice back.  Switch to Windows?  Intuit would get a better response here if they just sent Mac users a picture of a huge middle finger.  By the way, to add insult to injury:  “You can easily convert your Quicken Mac data with the exception of Investment transaction history. You will need to either re-download your investment transactions or manually enter them.”

This is an epic disaster.  I’m not sure how many people are actually affected.  But the Trojan War involved tens of thousands of troops, so I’m going with Homer’s definition of “Epic”.

What’s the Problem?

There are really three issues at play here:

  1. Strike 1. Around 2000, Intuit made the mistake of abandoning the Mac.  Hey, they thought it was the prudent thing to do then.  After all, Apple was dying.  (The bar talk between Adobe & Intuit on this mistake must be really fun a few drinks into the evening.)  Whoops.  This led Intuit to massively under-invest in their Mac codebase, yielding a monstrosity that apparently no one in their right mind wants to touch.  From everything I hear, Quicken 2007 for the Mac might as well be written in Fortran and require punch cards to compile.  Untouchable.  Untouchable, unfortunately, means unfixable.
  2. Strike 2. Sometime in the past few years, someone decided that Quicken Essentials for the Mac didn’t need to track investment transactions properly.  I’ve spent more than a decade in software product management, so I have compassion for how hard that decision must have been.  But in the end, it was a very expensive decision, and even if it was necessary, it should have mandated a fast follow with that capability.  It’s a bizarre miss given that tracking investment transactions is a basic tax requirement.  (See note on the IRS above)
  3. Strike 3. Apple announced the move from PowerPC chips to Intel chips in June 2005.  Yes, that’s *six* years ago.  Fast forward to June 2011, and Apple announces that their latest operating system, Mac OS X Lion, will not support the backwards compatibility software that allows PowerPC applications to run on Intel Macs.

Uh oh.

This is Intuit’s Fault.

With all due respect to my good friends at Intuit, this problem is really Intuit’s fault.  Intuit had six years to make this migration, and to be honest, Apple is rarely the type of company to support long transitions like this.  You are talking about the company that killed the floppy drive in favor of USB with the original iMac in 1998, with no warning.  They dropped support for Mac OS Classic in just a few years.  It’s not like Apple was going back to PowerPC.

If you examine the three strikes, you see that Intuit made a couple of tactical & strategic mistakes here.  But in the end, they called several plays wrong, and now they are vulnerable.

Intuit would argue that Apple could still ship Rosetta on Mac OS X Lion.  Or maybe they could license Rosetta to Intuit to bundle with Quicken 2007.

Apple’s not going to do it.  They want to simplify the operating system (brutally).  They want to push software developers to new code, new user experience, and best-in-class applications.  They do not want to create zombie applications that necessitate bug-for-bug fixes over the long term.  Microsoft did too much of this with Windows over the past two decades, and it definitely held them back at an operating system level.

A Proposed Solution: VMware to the rescue

I believe there is a possible solution.  Apple has announced that Mac OS X Lion will include a change to the terms of service to allow for virtualization.  If this is true, this reflects a fundamental shift in Apple’s attitude toward this technology.

The answer:

  • Custom “headless” install of Mac OS X 10.6.8, stripped to just support the launch of Quicken 2007.
  • Quicken 2007 R4 installed / configured to run at launch
  • Distribution as VMware image

OK, this solution isn’t perfect, but it is plausible.  Many system utilities are distributed with stripped, headless versions of Mac OS X.  In fact, Apple’s install disks for Mac OS X have been built this way.  A VMware image allows Intuit to configure & test a standard release package, and ensure it works.  They can distribute new images as necessary.

The cost of VMware Fusion for the Mac is non-trivial, but actually roughly the same price as a new version of Quicken.  I’m guessing that Intuit & VMware might be able to work out a deal here, especially since Intuit would be promoting VMware to a large number of Mac users, and even subsidizing its adoption.

Will Apple Allow It?

This is always the $64,000 question, but theoretically, this doesn’t feel like much of a give on Apple’s part.  They are changing the virtualization terms for Mac OS X Lion, so why not change them for Snow Leopard too?

Can We Fix It? 

I’m a daily VMware Fusion user, which is how I use both Windows & Mac operating systems on my MacBook Pro.  If Intuit can’t work this out, I just might try to hack this solution myself.

In the end, I’m a loyal Intuit customer.  I buy TurboTax every year, and I use Quicken every week.  So I’m hoping we can all find a path here.

Feel free to comment if you have ideas.


Why LinkedIn Hackdays Work

Two weeks ago, we celebrated yet another great Hackday judging event at LinkedIn.  For the April 15th Hackday, over 50 employees submitted a combined total of 29 projects for the contest.  We saw incredible product concepts, developer tool innovations, internal corporate applications, and even a few ideas so good they’ll likely ship as products in the coming weeks.  At this point, it feels like every Hackday is better than the one before it.

Most of the engineers who work at LinkedIn have also worked at other great technology companies, and in the past year there has been an incredible swell of feedback from new and old employees alike that LinkedIn Hackdays have become something truly special.  Creating the LinkedIn Hackday has been an iterative, experimental process, so I thought it might be useful to capture some of the details on how LinkedIn Hackdays work, and more importantly, why we run them the way we do.

Origins

It’s funny to think about it now, but the original LinkedIn Hackday had an unlikely catalyst.  On December 14th, 2007, more than 100 LinkedIn employees moved into a brand new space on the first floor of 2029 Stierlin Court.  It was the first time that LinkedIn had designed a workspace from the ground up, and it included a large number of LCD TVs on the wall.  The goal was to immerse the product and engineering teams in real-time feedback and data from the LinkedIn community, and each of the TVs was driven by a small Mac Mini.

The “Pure Energy” contest kicked off right before Christmas, with a goal of using some of the seasonal downtime to produce cool, internal applications that we could effectively “hang on the wall”.  The prize?  Brand new iPhones for the winning team.  The only rules?  The application had to reflect real usage of LinkedIn, it had to run continuously (so it could be left up 24×7), it had to be designed for display on a 720p monitor (1366×768), and it had to run in either Safari or as a Mac OS X screensaver.

Five projects were submitted, and several became staples of our decoration in 2029 for all of 2008.  (Coincidentally, December 2007 was also the first time we pulled the live Twitter search for “LinkedIn” up on the wall for everyone in Product & Engineering to see at all times throughout the day.)  The winner of the “Pure Energy” contest, NewIn, still lives on in an upgraded form, in both the LinkedIn reception lobby as well as on LinkedIn Labs.

Key Ingredients

We’ve learned a lot in the past four years about how to make Hackdays successful at LinkedIn, but at a high level, there are ten key ingredients that make LinkedIn Hackdays work.

  1. For Engineers, By Engineers.  This may be obvious, but Hackdays are highly optimized around engineering culture.  There may be a lot of opinions about what counts as “fun” or “useful”, but in the end, Hackday is designed for engineers.  This affects everything from the timing and prizes to the venue and the communication around the event.

  2. Spirit of Exploration.  Hackdays have an opinionated culture, and one of those opinions is that with software it is infinitely better to learn by actually doing, rather than reading / talking.  It’s part of why people go into engineering in the first place.  This is one of the reasons that we celebrate hacks that are purely to learn a new language, environment, algorithm, or architecture.  This is not just a fun thing to do – it’s an incredibly effective way to expose talented engineers to new technology, and more importantly, set a tone that we should always be learning.

  3. Independence.  Hackdays are a day of true self-determination.  At LinkedIn, we believe that small, cross-functional teams build the best software.  Teams do a great job looking at product metrics, customer requests, and innovative ideas from the team, and then prioritizing what to work on.  Hackdays are a day to break free, and work on whatever you personally find interesting.  If you have a great idea, this is the day to help make it a reality.

  4. Company-wide Event. Hackdays may be optimized for engineers, but everyone is invited and included.  Some of the best Hackday projects come from an engineer, web developer and product manager working together.  We’ve had entries from almost every function, and from multiple offices.  Most importantly, hackday projects are shared with the entire company on the intranet, and Hackday Judging is an event that everyone is encouraged to attend.  Winners are announced to the whole company.  It’s incredibly important to cement hackdays as a part of company culture, rather than something that lives within the engineering function.

  5. Executive Attention.  Believe it or not, it wasn’t until 2010 that we stumbled upon an obvious truth.  Executive attention matters.  Actions speak louder than words, and when executives make a point to attend, reference, and discuss hackday projects, it makes a huge difference to the entire organization.  At every LinkedIn Hackday Judging event, you’ll now find at least three of LinkedIn’s senior executives on the panel.

  6. It’s a Contest, but Loosely Enforced.  LinkedIn Hackdays are thrown on Fridays, with project submissions due at 9am the following Monday.  Teams are limited to five people, and projects have to be presented live at Hackday Judging to be considered for prizes.  Having rules for hackdays is a delicate balance – if enforcement is too weak, people lose faith in “the system”, and you’ll get discontent from the people who follow the rules.  However, make the rules too tight, and you break the independent spirit of the event.

  7. Hackday Judging, or Hackday Idol?  Hackday Judging has morphed over the years into an “American Idol”-like event.  The hackdays themselves are relatively independent and quiet.  It’s the judging that is the main event.  Teams are given two minutes to demo their hacks.  The panel of celebrity judges is given a minute to ask questions, and then it’s on to the next project.  We serve lots of food & drink, and try to make it a fun event.  (Typically, I fill the role of Ryan Seacrest.  Yes, I know that my mom would be proud.)  There is a lot of laughing, a lot of cheering, and we try to make it a good time for everyone.  Most people who attend leave the event incredibly inspired by what their co-workers come up with.  More importantly, once people attend, they tend to come back again (or better yet, enter their own projects).  We now have everyone in the company help judge by tweeting out their favorite projects with the project name and a #inday hashtag.

  8. Lots of Prizes. We give prizes to every team that presents a project at Hackday, typically a reasonably sized Apple gift card.  Winning teams get larger dollar amounts.  We have 5-6 regular categories, so there are always multiple winners.  Sometimes, we give additional prizes for standout projects, but that’s up to the judges.  The reason for gift cards is logistics – giving out iPhones, iPods, Flip cameras, etc. sounds like a great idea, but too often you get winners who already have one, or who don’t want one.  (The Apple bias bugs some people, but the truth is we’ve experimented with a wide variety of prizes, and people on average seem to really prefer these.  We did notice that our college interns preferred Amazon gift certificates, however…)

  9. Path to Production.  Some hackday projects are so impressive, there is a natural desire to shout “SHIP IT!”  In reality, however, hackday projects can vary significantly in their technical and product appropriateness for a large-scale production environment.  At LinkedIn, we’ve now found multiple ways for people to share their hacks.  Some projects live on, hosted on internal machines and used by employees.  Some of our best internal tools have come from previous hackdays.  Other projects are built on the LinkedIn Platform, and can be launched to end users on LinkedIn Labs.  Some projects are actually extensions of our production codebase, and become live site features.  (Example: The 2010 Year In Review email began as a Hackday winner, as did the inline YouTube expansion in the LinkedIn feed.)

  10. Learn & Iterate.  We are big believers in continuous improvement, and I don’t think there has been a single hackday where we didn’t add some improvements.  We constantly try out new things, stick with the ones that work, and shed the ones that don’t.  The pace of innovation has dramatically quickened as hackdays have become more frequent, and as the company has grown larger.

Common Issues & Questions

It would be impossible to capture all the common questions about hackdays here, but I thought it was worthwhile to capture a few persistent questions that we’ve debated in our process of creating LinkedIn Hackdays.

  • I have a great hackday idea – how do I find engineers to build it?
    This is a really well-meaning question, typically from non-technical employees who are excited about the idea of hackday, but lack the means to implement their ideas themselves.  The most reliable way that people solve this problem is by talking about their idea broadly, and effectively evangelizing the idea of forming a hackday team around it.  In the past, we’ve tried throwing pre-hackday mixers, usually around a technical topic, to help people find teams, but they’ve had at best mixed success.

  • I want people to build features for XYZ – how do we get people to do it?
    This question typically comes from a product manager, executive, or business owner who sees hackdays as a massive amount of valuable potential engineering effort for their area.  In this case, the short answer is that hackdays are about independence – the more you try to get people to do what you want, the more energy (and innovation) you sap from the system.  That being said, we’ve seen quite a bit of success where teams sponsor “special prizes” for a specific category on a given hackday.  Example: an iPad 2 for the project voted best “developer tool”.  This approach seems to provide the best balance of independence and incentive to generate the desired result.

  • How do we get all hackday projects live to site?
    This question assumes that the goal of all hackday projects should be to go live to site.  However, given the education and innovation mandate of hackday, there are actually quite a few projects that are not intended to go live to site, and that’s not a bad thing.  The way that we’ve handled this question is by providing both a variety of mechanisms for projects to “go live”, as well as prize categories for projects that are not based on being a “shippable” feature.

  • How can we spare a day from our priorities for a Hackday?
    In some ways, this is the big leap of faith.  For anyone who has attended any of the recent LinkedIn Hackdays, it’s hard to imagine this being considered seriously at this point.  However, at small companies, there are always more things to do than time to do them.  The decision to have hackdays is largely based on the belief that giving people time to learn by doing and to pursue independent ideas will pay off in multiples, not just in the projects themselves, but in the attitude and energy it brings to the company overall.  In some ways, you can view it as an HR benefit that also has a measurable positive impact on culture, internal technology, and product innovation.

  • How do we get people to participate?
    The ten ingredients above reflect the system that we’ve devised, but the truth is it took time for hackdays to build into a culture fixture at LinkedIn.  In 2008, we threw two hackdays, and had about half a dozen teams enter each.  However, as the company celebrated each hackday winner, we saw demand pick up.  We had a major breakthrough in participation when we launched the “Hackday Idol” format for judging in early 2010, and since then we’ve seen incredible growth in the number of participants and projects.

What’s Next?

I’ve got a few new innovations ready for the May 20th hackday.  Not to spoil the surprise, but for the first time we’ll be rolling out a new “Hackday Masters” designation and category, for people who have won at least three hackdays.

Hopefully, the Wizard of In will smile down on us, and as always reward those who seek to bend code to their will.