
Four short links: 13 July 2018

O'Reilly Radar - Fri, 2018/07/13 - 03:30

Technology Change, Rebuild Warnings, Google Cloud Platform, and Vale Guido

  1. Five Things We Need to Know About Technological Change (Neil Postman) -- a 1998 talk that just nailed it. (1) culture always pays a price for technology; (2) the advantages and disadvantages of new technologies are never distributed evenly among the population; (3) every technology has a philosophy that is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards; (4) A new medium does not add something; it changes everything; (5) media tends to become mythic. (via Daniel G. Siegel)
  2. Five Red Flags Signaling Your Rebuild Will Fail -- No clear executive vision for the value of a rebuild; You’re going for the big cutover rewrite; The rebuild has slower feature velocity than the legacy system; You aren’t working with people who were experts in the old system; You’re planning to remove features because they’re hard.
  3. Good, Bad, and Ugly of Google Cloud Platform -- informative, and well-written—e.g., While GCP services exhibit strong consistency, I can’t always say the same thing for the documentation.
  4. Guido Takes "Permanent Vacation" as Python's BDFL -- prompted in part by a particularly contentious language change proposal. I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions. I would like to remove myself entirely from the decision process. [...] I'll still be here, but I'm trying to let you all figure something out for yourselves. I'm tired, and need a very long break. Thanks for your years of service, Guido.

Categories: Technology

How to decide what product to build

O'Reilly Radar - Thu, 2018/07/12 - 04:00

Techniques for defining a product and building and managing a team.

Design is a process of making dreams come true.


LET’S PLAY A GAME. (I’m imagining the computer voice from the movie WarGames. GREETINGS PROFESSOR FALKEN...SHALL WE PLAY A GAME? Alas, I digress.)1

How many people do you think are on the following product or feature teams?

  • Apple’s iMovie and iPhoto

  • Twitter

  • Instagram

  • Spotify

Hint: the number is definitely smaller than you think.

  • Apple’s iMovie and iPhoto: 3 and 5, respectively

  • Twitter: 5–7

  • Instagram: 13 when acquired for $1 billion by Facebook

  • Spotify: 8

We also know that the team that created the first iPhone prototypes was “shockingly small.”6 Even Jony Ive’s design studio at Apple—the group responsible for the industrial design of every product, as well as projects like iOS 7—is only 19 people.7 And we can surmise that this group is broken up into smaller teams to work on their own individual projects.

Figuring out what product you’re going to build is an exercise in working through the research you’ve gathered, empathizing with your audience, and deciding on what you can uniquely create that’ll solve the problems you’ve found. But it’s also an exercise in deciding how big the team is and who’s on it.

Jeff Bezos of Amazon famously coined a term for teams of this size: the “two-pizza team.”8 In other words, if the number of people on a team can’t be fed by two pizzas, then it’s too big. Bezos initially conceived the rule to create “a decentralized, even disorganized company where independent ideas would prevail over groupthink,” and there’s some surprising science that explains why teams of this size are less prone to overconfidence, poor communication, and slow delivery. In actuality, that probably caps the team at or around six people.

Enter the work of the late Richard Hackman, a professor at Harvard University who studied organizational psychology. He discovered that “The larger a group, the more process problems members encounter in carrying out their collective work...worse, the vulnerability of a group to such difficulties increases sharply as size increases.”9

Hackman defined “process problems” as the links—or communication avenues—among the members of a team. As the number of members grows, the number of links grows quadratically. Using the formula n(n–1)/2—where n is group size—Hackman found that the links within a group get hefty very quickly (Figure 1-1).

Figure 1-1. The larger a group gets, the more “process problems” a group faces. This requires increased communication and can slow down decision making. (Source: Messick and Kramer, The Psychology of Leadership.)

Even though math wasn’t my favorite subject in school, let’s go through a few team size scenarios. Let’s start with Bezos’s recommended team size of six—assuming that two pizzas are appropriate for six people (although, I’ve been known to put away a whole pizza on my own from time to time):

  • Bezos’s preferred team size of 6 people has only 15 links to manage.

  • Increase that number to 10, and you already have 45 links to manage.

  • If you expand to the size of where I work every day, Tinder—70 people—the number of links grows to 2,415.
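Hackman’s link formula is easy to sanity-check. A few lines of Python (a throwaway sketch of the arithmetic, not anything from Hackman’s own work) reproduce the numbers above:

```python
def communication_links(n: int) -> int:
    # Hackman's formula: each of n members can pair with (n - 1)
    # others; divide by 2 so each link is counted only once.
    return n * (n - 1) // 2

for size in (6, 10, 70):
    print(f"{size} people -> {communication_links(size)} links")
```

Running it prints 15 links for 6 people, 45 for 10, and 2,415 for 70, matching the three scenarios above.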

But managing more communication links isn’t the only problem groups face when they increase in size.

Larger teams get overconfident. They believe they can get things done more quickly, and have a tendency “to increasingly underestimate task completion time as team size grows.” In 2010, organizational behavior researchers from the University of Pennsylvania, the University of North Carolina at Chapel Hill, and UCLA conducted a number of field studies confirming these findings.10 In one of their experiments, they observed teams tasked with building LEGO kits. Teams with two people took 36 minutes to complete the kit, while four-person teams took over 44 percent longer (roughly 52 minutes).

But the four-person teams believed that they could complete the LEGO set faster than the two-person team.

That’s why the notion of the two-pizza team is so powerful. It’s a simple concept that’s easily understood by anybody within your organization, and it can be used to combat the “let’s throw more bodies at the problem” mentality that some organizations fall back on.

OK, so we’ve figured out how big your team should be. But who should be invited to the party?

Everybody loves to be in product meetings. Especially during the phase when you’re deciding what to build.

Even Steve Jobs loved being in the room during this phase. “He told me once,” said Glenn Reid, former director of engineering for consumer applications at Apple, “that part of the reason he wanted to be CEO was so that nobody could tell him that he wasn’t allowed to participate in the nitty-gritty of product design.”11

Treat this process like you’re the bouncer at Berghain nightclub in Berlin.12 (Hint: it’s practically impossible to get in if you don’t speak German. And even then, Sven the bouncer, “a post-apocalyptic bearded version of Wagner,” enforces an obscure dress code that nobody can seem to crack.)

So, who’s in the room together? How much do they know about the pains you’ve found? And how do you frame the discussion?

At this point, you should have everyone who’s going to be involved in the creation of the product on the team. An example of this could include:

  • The product designer or product manager (depending on how your organization is set up, and if you’ll be working with someone else who will be designing the product).

  • The engineer(s) with whom you’ll be working to build the product—typically frontend and backend.

  • A representative from the team that will be launching and promoting the product; this could be someone from marketing or public relations to create a feedback loop between what will be promised to your customers and what your product is actually capable of doing.

While at KISSMetrics, Hiten Shah structured these teams with

...a product manager, a designer, and an engineer. Sometimes it’s multiple designers, multiple engineers, and sometimes it’s an engineering manager.

At times it can even be, sometimes, someone from marketing, if that makes sense, or even someone from sales. I mean, we have tried different methods. I’d say for different things, small things, big product releases, a whole product, it’s going to be different and for the stage of the company it’s going to be different.

Party Like It’s 1991

Regis McKenna had something to say about this process. When he saw how fast technology was changing society in 1991, he realized—like our friend Neil McElroy at Procter & Gamble—that a new role would need to be formalized. This person would be “an integrator, both internally—synthesizing technological capability with market needs—and externally—bringing the customer into the company as a participant in the development and adaptation of goods and services.”13

If your eyes glazed over reading that, well, you should read it again. Because McKenna was responsible for launching some of the hallmarks of the computer age: the first microprocessor at Intel, Apple’s first PC, and The Byte Shop, the world’s first retail computer store. Oh, and one more thing: he was the guy behind the “startup in a garage” legend first made famous with Apple’s early days.

So, did you read it again? Did anything seem familiar?

Hey, he’s describing you!

You’re the product designer. The integrator. You’re the customer’s champion, their expert, their advocate.

This process requires you to lead your team through the research; to propose product ideas to eliminate your customer’s pain or find their joy effectively.

That, of course, means that everybody involved in building the product must be intimately familiar with the research that’s been conducted on your audience.

Take the opportunity as an “integrator” to build on your strengths as a team: what innovative technologies and design can you apply to the problem at hand? Even better, what can you and your team uniquely build for this audience?

I thought Josh Elman (Greylock Partners, Zazzle, LinkedIn, Facebook, Twitter) had a great insight on this part of the product creation process:

The first thing is you have to trust your team. I think that sounds obvious, but it’s much harder in practice. I think a lot of structures and processes are built on the fact that there isn’t innate [trust]. Get your team’s help in how to solve the problem. The team knows what they can build. The team knows how it can be developed. The designers know what kinds of things are designable and natural in the product and what kinds of things are not. All of this matters.

Don’t forget the Pain Matrix (Figure 1-2). What are the observations you made that fit into the upper-right quadrant where there is the most acute, frequent pain? How can you build your customers’ dream product? What are the pains that you’re uniquely capable of solving?

Figure 1-2. The Pain Matrix, a simple tool I created for myself. It’s intended to make sifting through and making sense of the research you’ve gathered much simpler.

The Pain Matrix is the perfect piece of collateral for when you’re hashing out what to build. This document becomes a communication device, an advocate for your customers. Everybody can see it and you can back it up with your data. Bonus points for direct quotes from your research.

“The thing to focus on is that yes, 100 percent of your users are humans,” Diogenes Brito, a product designer most recently at startup Slack, reminds us. “While technology is changing really, really rapidly, human motivations basically haven’t at all. Like Maslow’s hierarchy of needs, that’s still the same. Designing around that, the closer you are to the base level of what humans desire, the more timeless it’ll be.”

To reiterate: don’t lose sight of the actual, observed, tangible pains and joys that you’ve researched. Resist the temptation to delve into hopes and dreams. Just throwing an “MVP” out into the wild to “validate” something you’ve spent time building is a waste of time, money, and talent.

You’re better than that.

Now, all you have to do is keep everybody focused.

Keeping Everybody Focused

There’s always a big problem when the club-like euphoria from a product meeting starts to turn focus into chaos. How do you keep everybody on task and debating healthily?

I highly recommend a whiteboard for idea collection and harvesting. This serves three practical purposes:

  • It’s difficult to remember what was said. You don’t want good ideas getting lost simply because there were too many thrown around the room.

  • It allows you to be visual. Not all ideas can be verbally explained; a low-fidelity medium allows anybody to sketch the central core of the idea without unnecessary detail. This allows your team to get ideas out of their head on an equal playing field.

  • It lets you take advantage of the natural tendency for the group to forget which idea was contributed by whom. This naturally allows the best ideas to float to the top and the worst ones to sink to the bottom. It’s hugely beneficial, especially if the group has a lot of ideas. The key here is to avoid attaching names to ideas, so you can avoid hurt egos and the so-called not invented here syndrome. Called the Cauldron, this was a technique used by Apple—sometimes even with Steve Jobs in the room. According to Glenn Reid, the former director of engineering at Apple, the Cauldron “let us make a great soup, a great potion, without worrying about who had what idea. This was critically important, in retrospect, to decouple the CEO from the ideas. If an idea was good, we’d all eventually agree on it, and if it was bad, it just kind of sank to the bottom of the pot. We didn’t really remember whose ideas were which—it just didn’t matter.”14

There’s also the benefit of timed techniques, like one used at online publishing startup Medium. With the right group of people in the room, the problem that needs to be solved is defined and “you have two minutes to write down as many ideas as possible [to solve it],” director of product design and operations Jason Stirman told me. “Then you have five minutes to put the ideas on a whiteboard and explain them. Then you have another two minutes to add to ideas...the end result is you just get as many ideas as possible. So we do that a lot here. We brainstorm a lot.”

The “Working Backwards” Approach

There’s another technique used by Amazon that’s particularly powerful. Known as the “Working Backwards” approach, this technique calls upon the product owner to literally write a future press release for the product—as well as fake customer quotes, frequently asked questions, and a story that describes the customer’s experience using the product.

In your case, this could be a future blog post that you’d put out about your product or feature instead of a press release.

What’s particularly unique about this technique is that this document involves every part of your organization that’s required to make the product successful—not just product and engineering, but marketing, sales, support, and every other part of your company. In other words, it forces you to think about all of the aspects that can inform your product.

Werner Vogels, Amazon’s CTO, describes the rationale behind the process:

The product definition process works backwards in the following way: we start by writing the documents we’ll need at launch (the press release and the FAQ) and then work towards documents that are closer to the implementation.

The Working Backwards product definition process is all about fleshing out the concept and achieving clarity of thought about what we will ultimately go off and build.15

According to Vogels, there are four documents included in Working Backwards:

The press release

What the product does, and why it exists

The “frequently asked questions” document

Questions someone might have after reading the press release

A definition of the customer experience

A story of what the customer sees and feels when they use the product, as well as relevant mockups to aid the narrative

The user manual

What the customer would reference if they needed to learn how to use the product

This all might seem like a lot of frivolous upfront work, but the method’s been used at Amazon for over a decade. And if you use it in conjunction with the Sales Safari method outlined in Chapter 2, you’d be hard-pressed to find a more customer-centric approach to building products. That way, you’ll be working on ideas that have their foundation in what real people need, as opposed to coming up with ideas that you try to plug into an amorphous audience.

At the center of Working Backwards lies the press release. A document that should be no longer than a page and a half, it’s the guiding light and the touchstone of the product and something that can be referred to over the course of development.

“My rule of thumb is that if the press release is hard to write, then the product is probably going to suck,” writes Ian McAllister, a director at Amazon. “Keep working at it until the outline for each paragraph flows.”16

Amazon’s view is that a press release can be iterated upon at a much lower cost than the actual product. That’s because the document shines a harsh light on your answer to your customer’s pain. Solutions that aren’t compelling or are too lukewarm are easily identified. Nuke them and start over. All you’re working with at the moment is words.

“If the benefits listed don’t sound very interesting or exciting to customers, then perhaps they’re not (and shouldn’t be built),” McAllister writes. “Instead, the product manager should keep iterating on the press release until they’ve come up with benefits that actually sound like benefits.”

So what does this press release look like? Thanks to McAllister, we have a very specific outline of the documents Amazon uses in their product meetings:

  1. Heading: this is where you announce the name of the product. Will your target audience understand its meaning? Will they be compelled to learn more?

  2. Subheading: declare in one sentence who your product’s target market is, and how they’ll benefit from it.

  3. Summary: summarize the product and its benefits. McAllister cautions that there’s a large chance your reader will only make it this far, so “make this paragraph good.”

  4. Problem: this should be easy since it’s been the focus of your customer research. What’s the problem your product solves? What are the pains your customers are experiencing that justify this product’s existence?

  5. Solution: how does your product annihilate your customer’s pain in the most frictionless way possible?

  6. Quote from a Spokesperson in the Company

  7. How to Get Started: how does a customer take their first step into the larger world that’s your product? Describe your ideal first step that provides an immediate benefit.

  8. Customer Quote: what would your ideal customer say after they’d had their pains destroyed by your product?

  9. Closing, with a Call to Action
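If you want to keep a team honest about covering all nine sections, the outline is simple to encode. This Python sketch borrows its section names from McAllister’s outline above; the helper itself is a hypothetical illustration, not an Amazon tool. It refuses to render a press release with a hole in it:

```python
# Section order follows McAllister's press-release outline.
SECTIONS = [
    "Heading", "Subheading", "Summary", "Problem", "Solution",
    "Spokesperson Quote", "How to Get Started", "Customer Quote",
    "Call to Action",
]

def render_press_release(content: dict) -> str:
    # Fail loudly if any section is missing or empty, rather than
    # quietly producing an incomplete draft.
    missing = [s for s in SECTIONS if not content.get(s)]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    return "\n\n".join(f"{s}\n{content[s]}" for s in SECTIONS)
```

Iterating on the draft then becomes a matter of rewriting section text until none of it reads lukewarm, which is exactly the cheap loop Amazon is after.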

I think that this process—however laborious—endures at Amazon because it provides such clarity about what the product is going to be, and how it’s going to help your customers.

“Once we have gone through the process of creating the press release, FAQ, mockups, and user manuals, it is amazing how much clearer it is what you are planning to build,” writes Vogels. “We’ll have a suite of documents that we can use to explain the new product to other teams within Amazon. We know at that point that the whole team has a shared vision on what product we are going to build.”

Create a Product Guide

Before you finish defining what product it is you’ll be building, you’re going to want to leave with some paper in hand.

I love Cap Watkins’s approach. The vice president of design at Buzzfeed and former Etsy senior design manager keeps his team on task after the meeting by creating a key internal document. We’ll call this a product guide for the sake of discussion:17

At the end [of the product definition meeting], leave with:

What you’re doing.

Why you’re doing it (problems you’re trying to solve).

What success looks like (quantitatively and qualitatively).

The product guide helps you “keep yourself and your team focused and prevent design creep: if it doesn’t solve the problem or meet the goals, it doesn’t go into this version.”

The only two elements missing from this approach are who’s responsible for what pieces of the product, and when they’ll be done.

Pre-dating Steve Jobs, Apple created a rule for every project it undertakes: the “Directly Responsible Individual,” or DRI. It’s a simple yet effective rule. By placing one’s name next to a task-to-be-completed in front of the entire company, you can be sure that individual feels more responsibility to perform.18

In your own internal product guide, use this rule and make sure that every line item has a DRI. Assign someone (or recruit volunteers) to complete each task. Make sure this is in place before you break up the meeting. Include it in your guide’s circulation.

It’s not enough to put people’s names next to line items. You and your team need dates, too. Do you need to do research for technological or data considerations? Do you need to build a prototype to see if a particular interface component is possible? Did you misinterpret something in your audience analysis? What can be executed upon immediately to challenge any outlying assumptions? Who needs to see what and when (other product teams, clients, bosses)? At what fidelity?

By giving every task a deadline, you’ll maintain momentum even when the initial enthusiasm of creating something new dies down.
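Both rules (every line item gets a DRI, and every line item gets a date) are easy to make mechanical. Here is a hypothetical sketch; the Task structure and field names are mine, not from Apple’s or anyone else’s process:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Task:
    description: str
    dri: Optional[str] = None   # Directly Responsible Individual
    due: Optional[date] = None  # deadline

def unassigned(tasks: List[Task]) -> List[Task]:
    # Don't break up the meeting while this list is non-empty.
    return [t for t in tasks if t.dri is None or t.due is None]

guide = [
    Task("Prototype the onboarding flow", dri="Ana", due=date(2018, 8, 1)),
    Task("Validate data requirements"),  # no DRI, no date yet
]
for t in unassigned(guide):
    print("Needs an owner or a date:", t.description)
```

Circulate the guide only once `unassigned` comes back empty.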

But how should you build in accountability? How do you race against the clock to get something valuable out to your customers?

Customer retention company Intercom—whose clients include Shopify, InVision, and Rackspace—gives their product teams weekly goals to hit. “We believe you can achieve greatness in 1,000 small steps. Therefore we always optimise for shipping the fastest, smallest, simplest thing that will get us closer to our objective and help us learn what works. All our projects are scoped into small independent releases that add value to customers.”19

At AngelList, Graham Jenkin places a similar pressure on himself and his team to execute on a new product as quickly as possible—with a bias toward small chunks. “We’re always thinking about ‘How are we going to execute this by the end of the day?’ or ‘How are we going to get this done this week? Can we get it done this week?’ If we can’t, maybe it’s the wrong problem to try to solve.” His approach is more of a rolling set of needs, rather than standing, arbitrary deadlines.

And my current employer, Tinder, holds product update meetings every Monday, Wednesday, and Friday—but only if there’s something to discuss. These meetings typically consist of the product team gathering feedback from each other on their own projects and asking for critiques or help solving any design challenges. Every Monday is a roadmap meeting where engineering, product, customer support, and marketing leads get into a room and update each other on the progress of their projects, while alerting the group if something new needs to be addressed.

Whatever the setup is at your company, remember that you’re the integrator. So be the leader. Be the customer’s advocate. Don’t settle for hand-waving and bravado.

You’re better than that. More talented. And probably better looking.

Shareable Notes
  • Defining what to build starts first with who’s in the room.

  • When choosing who’s in the room, follow Jeff Bezos’s “two-pizza team” rule: your team should be small enough that two pizzas could feed them. Typically, this comes out to about six people.

  • Only allow team members through the door if they’re educated on the research you’ve conducted on your audience. Use the Pain Matrix liberally.

  • Whiteboards are your friend. They help you remember what was said, allow you to be visual (sketching together brings teams together), and disassociate ideas from their inventors. This allows the best ideas to win without any regard for who invented them.

  • Amazon’s “Working Backwards” approach can help you pinpoint the product ideas that’ll solve your customer’s pains, versus creating something that’s too flashy or uninspired.

  • Leave this meeting with a key document: the product guide, which outlines what you’re building and who’s responsible for doing it.

Do This Now
  • Re-examine the knowledge your team has of your audience. Encourage them to study up on the research you’ve done so they can make more informed product decisions.

  • Think about how your product can really make your customers happy. Are you really bringing them joy? Are you truly able to alleviate their pains and satiate their needs?

  • Take stock of how your team conducts meetings and makes decisions. See if any of the techniques mentioned in this chapter can help you make more realistic decisions in a smaller time period.

Interview: Sahil Lavingia

Sahil Lavingia was on the original Pinterest team, where he helped invent the famous pinboard design, as well as the Pin It button and the Pinmarklet—before even finishing college at the University of Southern California. He then left Pinterest to design’s first iOS app, and started Gumroad shortly thereafter. At Gumroad, as founder and CEO, he’s making it easy for artists to sell any good—digital or physical—to customers around the world.

So I noticed this distinct thread that runs through your thinking of everything you work on. You want to create something that solves a problem simply that gets in the hands of other people. What kind of philosophical steps along the road did you take to have this epiphany?

Yeah, I think the first thing that I figured out pretty quickly was that it’s really hard to predict what people want, like it’s hard to sort of guess in a year how your product is going to look.

And two, it’s just you’re forced to make simple stuff because you’re not going to work on it for more than a weekend, right? And like now I think it’s sort of obvious like, yeah, release something MVP, iterate, etc. But I think I was just like, “I want to build 50 things over the next year,” and the only way to do that was to release MVPs. So that sort of built itself into the cycle.

How did you get started on this journey?

I was at [the University of Southern California] in the fall of 2010 with the intention of getting a degree; I was not really even considering leaving. But I started publishing a lot of my work online and I was like, “Wait, I’m finally in the U.S., I’m finally in California, I should start trying to get in touch with a lot of these people that I had followed for a long time.” And that got me into doing contract work with these startups, that got me full-time job offers, and I sort of realized I could do this full-time; I didn’t need a degree to do what I wanted to do. That led to me leaving USC after a semester, four months.

So I joined Pinterest, I was 18 at the time. I started contracting while I was doing school so I wanted to at least finish up the semester and end it cleanly.

Then I was there for a year, working on all sorts of stuff—design, frontend, backend. The mobile app was sort of my primary baby, but I worked on sort of everything. I joined the same day as another guy so I was number two and a half, three, or whatever, or we were both two.

And I was the most design-y frontend person at the time, so it was like, “You have to do that.” So I learned a lot through that, then I left; I did some contract work for this startup called based out in New York. I built their mobile app, designed it, and then I started to go on my own and that was already two years of stuff that happened really quickly. But [during the course of those two years] I think the things that got me excited—there are two big things—one was this constant emphasis on value creation.

I think Ben [Silbermann, cofounder and CEO of Pinterest] is really good about talking about it, especially now. But we were never really a super well-known Valley-like startup; no one really knew what we were. TechCrunch didn’t really give a shit what Pinterest was for a long time. But we had these really core engaged users, even though we didn’t have a lot of them—the people that used us were really psyched about us.

And so there was always this focus on building stuff that will make these people’s lives better, which I think is the ultimate goal anyways—but you forget when there’s all these other things that are going on, like raising money and trying to recruit people and pitching a different story there and press and things like that.

The other thing that I think really helped was just the volume of sort of feedback that I got. Like I had built these [different products] and sure, I had maybe a few hundred thousand users, maybe 1,000,000 combined over all the things I had built myself.

But with Pinterest I could launch something, I could try out a new thing, I would be able to get this massive amount of feedback. And we never ran A/B tests or anything at Pinterest, at least not when I was there. But with most of the things we worked on, it was very easy to figure out very quickly what was working, what wasn’t working—and typically they’re the same things. Simpler is better. Making things intuitive rather than complicated. Things like that. They’re obvious to say out loud, but they definitely influenced a lot of the design that we did at Pinterest and a lot of the interactions that we built that really I don’t think were very common before them and now are a lot more common. So I do think there was some amount of nonobvious stuff that we built to solve the needs of our users and that expanded from there.

Tell me about how you used the invitation-only system in the early days to spur growth.

Typically if you talk to someone in tech [about this technique], they’re like, “this is stupid.” First of all, no one actually thinks that this is a secret and closed beta. You get 1,000,000 emails a day, right? So another one just actually hurts you—but if you look at the normal person, even today, at least the normal people, the first people that used Pinterest and probably still sign up today, they didn’t get a lot of email. Now, if you look at their inboxes, it’s typically Facebook and Pinterest.

So, in that case, it’s actually great that we said they loved the emails. They love getting invites to secret stuff, they’re like, “Holy shit, I have this amazing secret new service and I can only tell five friends about it. So I’m going to take the time to really invite the best people I can invite because I only have five of them.” They don’t know that they don’t expire, that there’s actually an infinite amount of them or whatever. Yeah, those things still work.

That seems to have been one of the strengths of Pinterest’s early growth—that Silicon Valley discounted you.

Yeah, it’s pretty funny. I remember this one specific moment where I think Ben [Silbermann] said, “I was just in the meeting with somebody and they’re like, ‘You guys are building a site for women, right? Are you guys scared that your site is only going to be used by women?,’” or whatever. And, he jokingly replied, “Yeah, I’m really scared that we’re only building a thing for 50 percent of the world’s population.”

And it’s true. More women use Facebook than men I think by far—I think the engagement on females is typically a lot higher. And Instagram I’m sure is majority women, I think; Snapchat is probably the same. But yeah, things like that were unsexy in the Valley and I think a lot of people probably didn’t focus on problems like the ones we were focused on. Because it was unsexy or uncool or not the hot hip thing to do.

But that’s kind of a common thread between what you get at Pinterest and what you’re doing now: you’re going after people that are hard to get to, they’re under-served, and if you give them some tools they’ll run with it.

Yeah, and I like how you said “tools.” [At Pinterest] we always thought about that piece of it too—we were always like, “How do we build tools to help our users solve their problems?” I think a lot of people focus on the network. They focus on the big picture—but you only get to the big picture if individually everyone is gaining value out of your service.

Most of the people—our competitors, you could argue—at least, that’s how I always considered it: we stole users away from the Internet Explorer bookmarks folder. That was our competitor; we stopped more people from using that than any other service ever. And when you think of it like that, you’re like, “We need to build a better feature set that lets people bookmark stuff, scrapbook stuff, collect things, share things, and so on.”

They just want to find a better way to organize the 5,000 different things they might want to buy for their new house. And it doesn’t matter how many recommendations or how many freaking gamification badges or leaderboards you provide; it’s really just, “I need a better way to bookmark stuff.”

This interview has been edited for length, and you’re missing out on thousands of words of insights. To read the interview in its entirety, go to

Continue reading How to decide what product to build.

Categories: Technology

Four short links: 12 July 2018

O'Reilly Radar - Thu, 2018/07/12 - 03:10

Debugging, Just Code, Causal Inference, and Infosec

  1. Why Isn't Debugging Treated as a First-Class Activity? (Robert O'Callahan) -- Another of my theories is that many developers have abandoned interactive debuggers because they're a very poor fit for many debugging problems (e.g., multiprocess, time-sensitive, and remote workloads—especially cloud and mobile applications). Debugging isn't really taught at schools, either. It's an odd forensic science. What are your favourite debugging tutorials, papers, or books? Let me know: @gnat.
  2. Just Code Challenge -- I'm a little late, but it's still a good idea. The idea is for you to make one program (or app) a week throughout the summer. These apps don’t have to do anything fancy, although they should do something that is at least a little bit useful or fun. Any type of app counts—desktop, iOS, or web.
  3. Causal Inference Book -- The book is divided in three parts of increasing difficulty: causal inference without models, causal inference with models, and causal inference from complex longitudinal data.
  4. -- A collection of information security essays and links to help growing teams manage risks.

Continue reading Four short links: 12 July 2018.

Categories: Technology

What machine learning means for software development

O'Reilly Radar - Wed, 2018/07/11 - 04:00

“Human in the loop” software development will be a big part of the future.

Machine learning is poised to change the nature of software development in fundamental ways, perhaps for the first time since the invention of FORTRAN and LISP. It presents the first real challenge to our decades-old paradigms for programming. What will these changes mean for the millions of people who are now practicing software development? Will we see job losses and layoffs, or will we see programming evolve into something different—perhaps even something more focused on satisfying users?

We’ve built software more or less the same way since the 1970s. We’ve had high-level languages, low-level languages, scripting languages, and tools for building and testing software, but what those tools let us do hasn’t changed much. Our languages and tools are much better than they were 50 years ago, but they’re essentially the same. We still have editors. They’re fancier: they have color highlighting, name completion, and they can sometimes help with tasks like refactoring, but they’re still the descendants of emacs and vi. Object orientation represents a different programming style, rather than anything fundamentally new—and, of course, functional programming goes all the way back to the 50s (except we didn’t know it was called that). Can we do better?

We will focus on machine learning rather than artificial intelligence. Machine learning has been called “the part of AI that works,” but more important, the label “machine learning” steers clear of notions like general intelligence. We’re not discussing systems that can find a problem to be solved, design a solution, and implement that solution on their own. Such systems don’t exist, and may never exist. Humans are needed for that. Machine learning may be little more than pattern recognition, but we’ve already seen that pattern recognition can accomplish a lot. Indeed, hand-coded pattern recognition is at the heart of our current toolset: that’s really all a modern optimizing compiler is doing.

We also need to set expectations. McKinsey estimates that “fewer than 5% of occupations can be entirely automated using current technology. However, about 60% of occupations could have 30% or more of their constituent activities automated.” Software development and data science aren’t going to be among the occupations that are completely automated. But good software developers have always sought to automate tedious, repetitive tasks; that’s what computers are for. It should be no surprise that software development itself will increasingly be automated.

This isn’t a radical new vision. It isn’t as if we haven’t been working on automated tools for the past half-century. Compilers automated the process of writing machine code. Scripting languages automate many mundane tasks by gluing together larger, more complex programs. Software testing tools, automated deployment tools, containers, and container orchestration systems are all tools for automating the process of developing, deploying, and managing software systems. None of these take advantage of machine learning, but that is certainly the next step.

Will machine learning eat software, as Pete Warden and Andrej Karpathy have argued? After all, “software eating the world” has been a process of ever-increasing abstraction and generalization. A laptop, phone, or smart watch can replace radios, televisions, newspapers, pinball machines, locks and keys, light switches, and many more items. All these technologies are possible because we came to see computers as general-purpose machines, not just number crunchers.

From this standpoint, it’s easy to imagine machine learning as the next level of abstraction, the most general problem solver that we’ve found yet. Certainly, neural networks have proven they can perform many specific tasks: almost any task for which it’s possible to build a set of training data. Karpathy is optimistic when he says that, for many tasks, it’s easier to collect the data than to explicitly write the program. He’s no doubt correct about some very interesting, and very difficult, programs: it’s easy to collect training data for Go or Chess (players of every level have been recording games for well over 150 years), but very hard to write an explicit program to play those games successfully. So, machine learning is an option when you don’t know how to write the software, but you can collect the data.

On the other hand, data collection isn’t always easy. We couldn’t even conceive of programs that automatically tagged pictures until sites like Flickr, Facebook, and Google assembled billions of images, many of which had already been tagged by humans. For tasks like face recognition, we don’t know how to write the software, and it has been difficult to collect the data. For other tasks, like billing, it’s easy to write a program based on a few simple business rules. It’s hard to imagine collecting the data you’d need to train a machine learning algorithm—but if you are able to collect data, the program you produce will be better at adapting to different situations and detecting anomalies, particularly if there’s a human in the loop.

Learning replacing code

Machine learning is already making code more efficient: Google’s Jeff Dean has reported that 500 lines of TensorFlow code have replaced 500,000 lines of code in Google Translate. Although lines of code is a questionable metric, a thousand-fold reduction is huge: both in programming effort and in the volume of code that has to be maintained. But what’s more significant is how this code works: rather than half a million lines of statistical code, it’s a neural network that has been trained to translate. As language changes and usage shifts, as biases and prejudices are discovered, the neural network can be revisited and retrained on new data. It doesn’t need to be rewritten. We shouldn’t understate the difficulty of training a neural network of any complexity, but neither should we underestimate the problem of managing and debugging a gigantic codebase.

We’ve seen research suggesting that neural networks can create new programs by combining existing modules. The system is trained using execution traces from other programs. While the programs constructed this way are simple, it’s significant that a single neural network can learn to perform several different tasks, each of which would normally require a separate program.

Pete Warden characterizes the future of programming as becoming a teacher: “the developer has to become a teacher, a curator of training data, and an analyst of results.” We find this characterization very suggestive. Software development doesn’t disappear; developers have to think of themselves in much different terms. How do you build a system that solves a general problem, then teach that system to solve a specific task? On one hand, this sounds like a risky, troublesome prospect, like pushing a rope. But on the other hand, it presumes that our systems will become more flexible, pliable, and adaptable. Warden envisions a future that is more about outcomes than about writing lines of code: training a generic system, and testing whether it meets your requirements, including issues like fairness.

Thinking more systematically, Peter Norvig has argued that machine learning can be used to generate short programs (but not long ones) from training data; to optimize small parts of larger programs, but not the entire program; and possibly to (with the help of humans) be better tutors to beginning programmers.

Data management and infrastructure

There are early indications that machine learning can outperform traditional database indexes: it can learn to predict where data is stored, or if that data exists. Machine learning appears to be significantly faster and require much less memory, but it is fairly limited: current tools based on machine learning do not cover multidimensional indexes, and assume that the database isn’t updated frequently. Retraining takes longer than rebuilding traditional database indexes. However, researchers are working on multidimensional learned indexes, query optimization, re-training performance, and other issues.
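The core idea behind a learned index can be sketched in a few lines: train a model to predict a key's position in sorted data, record the model's worst-case error on the training keys, and fall back to a bounded search inside that error window so lookups stay correct. The sketch below is illustrative only, using a toy linear model rather than the neural networks used in the research, and is nothing like a production index structure:

```python
# Toy "learned index" over a sorted array: a model predicts where a key
# lives, and a bounded search inside the recorded worst-case error window
# guarantees correctness. Linear least squares stands in for the model.
import bisect

class LearnedIndex:
    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # Fit pos ≈ a*key + b by least squares.
        mean_k = sum(sorted_keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(sorted_keys))
        var = sum((k - mean_k) ** 2 for k in sorted_keys)
        self.a = cov / var if var else 0.0
        self.b = mean_p - self.a * mean_k
        # Worst-case prediction error over the training keys.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(round(self.a * key + self.b))

    def lookup(self, key):
        guess = self._predict(key)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)  # search the window only
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None

idx = LearnedIndex(list(range(0, 1000, 2)))  # even keys 0..998
print(idx.lookup(500))  # 250
print(idx.lookup(501))  # None: key absent
```

When the key distribution is close to what the model can fit, the error window is tiny and lookups touch far less state than a full B-tree traversal, which is the intuition behind the speed and memory claims above.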

Machine learning is already making its way into other areas of data infrastructure. Data engineers are using machine learning to manage Hadoop, where it enables quicker response to problems such as running out of memory in a Hadoop cluster. Kafka engineers also report using machine learning to diagnose problems. And researchers have had success using machine learning to tune databases for performance, where it simplifies the problem of managing the many configuration settings that affect behavior. Data engineers and database administrators won’t become obsolete, but they may have to develop machine learning skills. And in turn, machine learning will help them to make difficult problems more manageable. Managing data infrastructure will be less about setting hundreds of different configuration parameters correctly than about training the system to perform well on your workload.

Making difficult problems manageable remains one of the most important issues for data science. Data engineers are responsible for maintaining the data pipeline: ingesting data, cleaning data, feature engineering, and model discovery. They are responsible for deploying software in very complex environments. Once all this infrastructure has been deployed, it needs to be monitored constantly to detect (or prevent) outages, and also to ensure that the model is still performing adequately. These are all tasks for which machine learning is well-suited, and we’re increasingly seeing software like MLFlow used to manage data pipelines.

Data science

Among the early manifestations of automated programming were tools designed to enable data analysts to perform more advanced analytic tasks. The Automatic Statistician is a more recent tool that automates exploratory data analysis and provides statistical models for time series data, accompanied by detailed explanations.

With the rise of deep learning, data scientists find themselves needing to search for the right neural network architectures and parameters. It’s also possible to automate the process of learning itself. After all, neural networks are nothing if not tools for automated learning: while building a neural network still requires a lot of human work, it would be impossible to hand-tune all the parameters that go into a model. One application is using machine learning to explore possible neural network architectures; as this post points out, a 10-layer network can easily have 10^10 possibilities. Other researchers have used reinforcement learning to make it easier to develop neural network architectures.
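As a toy illustration of automated architecture search, the sketch below runs a random search over a hypothetical space of network depths and widths. The `score` function is a made-up stand-in for the expensive step of actually training a candidate network and measuring validation accuracy; real systems replace random sampling with learned controllers or reinforcement learning:

```python
# Random search over a toy architecture space of (depth, width) pairs.
# `score` stands in for "train the candidate and measure validation
# accuracy"; higher is better.
import random

def score(arch):
    depth, width = arch
    # Made-up objective that happens to prefer depth 6 and width 128.
    return -abs(depth - 6) - abs(width - 128) / 32

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample a candidate architecture uniformly at random.
        arch = (rng.randint(1, 10), rng.choice([32, 64, 128, 256, 512]))
        s = score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch

best = random_search(500)
print(best, score(best))
```

Even this naive strategy is a surprisingly strong baseline; the research cited above is about making the search smarter than uniform sampling when each evaluation costs hours of GPU time.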

Taking this further: companies like DataRobot automate the entire process, including using multiple models and comparing results. This process is being called "automated machine learning"; Amazon’s SageMaker and Google’s AutoML provide cloud-based tools to automate the creation of machine learning models.

Model creation isn’t a one-time thing: data models need to be tested and re-tuned constantly. We are beginning to see tools for constant monitoring and tuning. These tools aren’t particularly new: bandit algorithms for A/B testing have been around for some time, and for many companies, bandit algorithms will be the first step toward reinforcement learning. Chatbase is a Google startup that monitors chat applications so developers can understand their performance. Do the applications understand the questions that users are asking? Are they able to resolve problems, or are users frequently asking for unsupported features? These are problems that could be solved by going through chat logs manually and flagging problems, but that’s difficult even with a single bot, and Chatbase envisions a future where many organizations have dozens or even hundreds of sophisticated bots for customer service, help desk support, and many other applications.
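A minimal epsilon-greedy bandit, the simplest of the algorithms mentioned above, can be sketched in a few lines. The conversion rates here are hypothetical, and a real deployment would add statistical safeguards, but the core idea is visible: instead of splitting traffic 50/50 for the whole experiment, traffic shifts toward the better variant as evidence accumulates:

```python
# Epsilon-greedy bandit: mostly show the variant that looks best so far,
# but keep exploring at rate epsilon.
import random

class EpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # times each variant was shown
        self.values = [0.0] * n_arms  # running mean reward per variant

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental running-mean update.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
true_rates = [0.05, 0.11]  # hypothetical conversion rates for variants A and B
bandit = EpsilonGreedy(n_arms=2)
for _ in range(10_000):
    arm = bandit.choose()
    bandit.update(arm, 1 if random.random() < true_rates[arm] else 0)

print(bandit.counts)  # most impressions end up on the better variant, B
```

The appeal for continuous monitoring is that the same loop keeps running in production: if a variant's performance drifts, the traffic allocation adjusts without anyone rerunning an experiment.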

It is also possible to use machine learning to look for vulnerabilities in software. There are systems that will go over the code and look for known flaws. These systems don’t necessarily fix the code, nor do they promise to find every potential problem. But they can easily highlight dangerous code, and they can allow developers working on a large codebase to ask questions like “are there other problems like this?”

Game developers are looking to machine learning in several ways. Can machine learning be used to make backgrounds and scenes that look more realistic? Drawing and modeling realistic scenes and images is very expensive and time consuming. Currently, everything a non-player character (NPC) does has to be programmed explicitly. Can machine learning be used to model the behavior of NPCs? If NPCs can learn behavior, we can expect game play that is more creative.

Envisioning the future

What does the future look like for software developers? Will software development take the same path that McKinsey forecasts for other industries? Will 30% of the activities involved in software development and data science be automated?

Perhaps, though that’s a simplistic reading of the situation. Machine learning will no doubt change software development in significant ways. And it wouldn’t be surprising if a large part of what we now consider “programming” is automated. That’s nothing new, though: compilers don’t do machine learning, but they transformed the software industry by automating the generation of machine code.

The important question is how software development and data science will change. One possibility—a certainty, really—is that software developers will put much more effort into data collection and preparation. Machine learning is nothing without training data. Developers will have to do more than just collect data; they’ll have to build data pipelines and the infrastructure to manage those pipelines. We’ve called this “data engineering.” In many cases, those pipelines themselves will use machine learning to monitor and optimize themselves.

We may see training machine learning algorithms become a distinct subspecialty; we may soon be talking about “training engineers” the way we currently talk about “data engineers.” In describing his book Machine Learning Yearning, Andrew Ng says, “This book is focused not on teaching you ML algorithms, but on how to make ML algorithms work.” There’s no coding, and no sophisticated math. The book focuses almost entirely on the training process, which, more than coding, is the essence of making machine learning work.

The ideas we’ve presented have all involved augmenting human capabilities: they enable humans to produce working products that are faster, more reliable, better. Developers will be able to spend more time on interesting, important problems rather than getting the basics right. What are those problems likely to be?

Discussing intelligence augmentation in "How to Become a Centaur," Nicky Case argues that computers are good at finding the best answer to a question. They are fundamentally computational tools. But they’re not very good at finding interesting questions to answer. That’s what humans do. So, what are some of the important questions we’ll need to ask?

We’re only starting to understand the importance of ethics in computing. Basic issues like fairness aren’t simple and need to be addressed. We’re only starting to think about better user interfaces, including conversational interfaces: how will they work? Even with the help of AI, our security problems are not going to go away. Regardless of the security issues, all of our devices are about to become “smart.” What does that mean? What do we want them to do? Humans won’t be writing as much low-level code. But because they won’t be writing that code, they’ll be free to think more about what that code should do, and how it should interact with people. There will be no shortage of problems to solve.

It’s very difficult to imagine a future in which humans no longer need to create software. But it’s very easy to imagine that “human in the loop” software development will be a big part of the future.

Related content:

Continue reading What machine learning means for software development.

Categories: Technology

Four short links: 11 July 2018

O'Reilly Radar - Wed, 2018/07/11 - 03:05

Metadata, AI Strategies, Program Synthesis, and Text-Based Browser

  1. You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information -- We demonstrate that through the application of a supervised learning algorithm, we are able to identify any user in a group of 10,000 with approximately 96.7% accuracy. Moreover, if we broaden the scope of our search and consider the 10 most likely candidates, we increase the accuracy of the model to 99.22%. We also found that data obfuscation is hard and ineffective for this type of data: even after perturbing 60% of the training data, it is still possible to classify users with an accuracy higher than 95%. (via Wired UK)
  2. Overview of National AI Strategies -- where each country is at, what their goals are, etc.
  3. Building a Program Synthesizer -- Build a program synthesis tool, to generate programs from specifications, in 20 lines of code using Rosette. I'm interested in work people are doing to automatically create software. Like this example, most packages are still in a math-like larval stage. It's going to be interesting once they cross from "looks like a 1980s AI course" to "looks like Gmail".
  4. Browsh -- a text-based browser that uses the Firefox engine underneath (but rendering to text).

Continue reading Four short links: 11 July 2018.

Categories: Technology

Four short links: 10 July 2018

O'Reilly Radar - Tue, 2018/07/10 - 04:40

Troubling Trends, Satellite Imagery, Management and Autonomy, and Brutalist Web Design

  1. Troubling Trends in Machine Learning Scholarship -- In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship: (i) Failure to distinguish between explanation and speculation. (ii) Failure to identify the sources of empirical gains—e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning. (iii) Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies—e.g., by confusing technical and non-technical concepts. (iv) Misuse of language—e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms.
  2. RoboSat -- mapbox open-sourced their machine learning system that does semantic segmentation on aerial and satellite imagery. Extracts features such as: buildings, parking lots, roads, water.
  3. On Management and Autonomy -- in our experience, too many managers err on the side of mistrust. They follow the basic premise that their people may operate completely autonomously, as long as they operate correctly. This amounts to no autonomy at all. The only freedom that has any meaning is the freedom to proceed differently from the way your manager would have proceeded. So true! (Parents: this applies to children, as well)
  4. Brutalist Web Design -- a manifesto.

Continue reading Four short links: 10 July 2018.

Categories: Technology

Doing good data science

O'Reilly Radar - Tue, 2018/07/10 - 04:00

Data scientists, data engineers, AI and ML developers, and other data professionals need to live ethical values, not just talk about them.

The hard thing about being an ethical data scientist isn’t understanding ethics. It’s the junction between ethical ideas and practice. It’s doing good data science.

There has been a lot of healthy discussion about data ethics lately. We want to be clear: that discussion is good, and necessary. But it’s also not the biggest problem we face. We already have good standards for data ethics. The ACM’s code of ethics, which dates back to 1993, is clear, concise, and surprisingly forward-thinking; 25 years later, it’s a great start for anyone thinking about ethics. The American Statistical Association has a good set of ethical guidelines for working with data. So, we’re not working in a vacuum.

And, while there are always exceptions, we believe that most people want to be fair. Data scientists and software developers don’t want to harm the people using their products. There are exceptions, of course; we call them criminals and con artists. Defining “fairness” is difficult, and perhaps impossible, given the many crosscutting layers of “fairness” that we might be concerned with. But we don’t have to solve that problem in advance, and it’s not going to be solved in a simple statement of ethical principles, anyway.

The problem we face is different: how do we put ethical principles into practice? We’re not talking about an abstract commitment to being fair. Ethical principles are worse than useless if we don’t allow them to change our practice, if they don’t have any effect on what we do day-to-day. For data scientists, whether you’re doing classical data analysis or leading-edge AI, that’s a big challenge. We need to understand how to build the software systems that implement fairness. That’s what we mean by doing good data science.

Any code of data ethics will tell you that you shouldn’t collect data from experimental subjects without informed consent. But that code won’t tell you how to implement “informed consent.” Informed consent is easy when you’re interviewing a few dozen people in person for a psychology experiment. Informed consent means something different when someone clicks on an item in an online catalog (hello, Amazon), and ads for that item start following them around ad infinitum. Do you use a pop-up to ask for permission to use their choice in targeted advertising? How many customers would you lose? Informed consent means something yet again when you’re asking someone to fill out a profile for a social site, and you might (or might not) use that data for any number of experimental purposes. Do you pop up a consent form in impenetrable legalese that basically says “we will use your data, but we don’t know for what”? Do you phrase this agreement as an opt-out, and hide it somewhere on the site where nobody will find it?

That’s the sort of question we need to answer. And we need to find ways to share best practices. After the ethical principle, we have to think about the implementation of the ethical principle. That isn’t easy; it encompasses everything from user experience design to data management. How do we design the user experience so that our concern for fairness and ethics doesn’t make an application unusable? Bad as it might be to show users a pop-up with thousands of words of legalese, laboriously guiding users through careful and lengthy explanations isn’t likely to meet with approval, either. How do we manage any sensitive data that we acquire? It’s easy to say that applications shouldn’t collect data about race, gender, disabilities, or other protected classes. But if you don’t gather that data, you will have trouble testing whether your applications are fair to minorities. Machine learning has proven to be very good at figuring out its own proxies for race and other classes. Your application wouldn’t be the first system that was unfair despite the best intentions of its developers. Do you keep the data you need to test for fairness in a separate database, with separate access controls?
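As one concrete illustration of such a test, the sketch below applies the "four-fifths rule" heuristic from U.S. employment law to a model's per-group selection rates. The data and group labels here are hypothetical, and a real fairness audit involves far more than one metric, but the shape of the check is representative:

```python
# Basic fairness check: compare a model's positive-outcome rates across
# groups using the four-fifths rule heuristic. In practice the group
# labels would live in a separate, access-controlled store, joined only
# for auditing.
def selection_rates(outcomes, groups):
    """outcomes: parallel list of 0/1 model decisions; groups: group labels."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    return rates

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact if any group's rate is below threshold * max rate."""
    best = max(rates.values())
    return all(r >= threshold * best for r in rates.values())

# Hypothetical audit data: 1 = approved, 0 = denied.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]
rates = selection_rates(outcomes, groups)
print(rates)                      # per-group approval rates
print(passes_four_fifths(rates))  # False: group b is approved far less often
```

The point is not the specific metric; it's that a check like this can only run if the sensitive attributes were retained somewhere, which is why access-controlled audit data is worth the design effort.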

To put ethical principles into practice, we need space to be ethical. We need the ability to have conversations about what ethics means, what it will cost, and what solutions to implement. As technologists, we frequently share best practices at conferences, write blog posts, and develop open source technologies—but we rarely discuss problems such as how to obtain informed consent.

There are several facets to this space that we need to think about.

First, we need corporate cultures in which discussions about fairness, about the proper use of data, and about the harm that can be done by inappropriate use of data can take place. In turn, this means that we can’t rush products out the door without thinking about how they’re used. We can’t allow “internet time” to mean ignoring the consequences. Indeed, computer security has shown us what ignoring the consequences costs: many companies that have never taken the time to implement good security practices and safeguards are now paying with damage to their reputations and their finances. We need to do the same when thinking about issues like fairness, accountability, and unintended consequences.

We particularly need to think about the unintended consequences of our use of data. It will never be possible to predict all the unintended consequences; we’re only human, and our ability to foresee the future is limited. But plenty of unintended consequences could easily have been foreseen: for example, Facebook’s “Year in Review” that reminded people of deaths and other painful events. Moving fast and breaking things is unacceptable if we don’t think about the things we are likely to break. And we need the space to do that thinking: space in project schedules, and space to tell management that a product needs to be rethought.

We also need space to stop the production line when something goes wrong. This idea goes back to Toyota’s andon cord: any assembly line worker can stop the line if they see something going wrong. The line doesn’t restart until the problem is fixed. Workers don’t have to fear consequences from management for stopping the line; they are trusted, and expected to behave responsibly. What would it mean if we could do this with product features? If anyone at Facebook could have said “wait, we’re getting complaints about Year in Review” and pulled it out of production until someone could investigate what was happening?

It’s easy to imagine the screams from management. But it’s not hard to imagine a Toyota-style “stop button” working. After all, Facebook is the poster child for continuous deployment, and they’ve often talked about how new employees push changes to production on their first day. Why not let employees pull features out of production? Where are the tools for instantaneous undeployment? They certainly exist; continuous deployment doesn’t make sense if you can’t roll back changes that didn’t work. Yes, Facebook is a big, complicated company, with a big complicated product. So is Toyota. It worked for them.

The issue lurking behind all of these concerns is, of course, corporate culture. Corporate environments can be hostile to anything other than short-term profitability. That’s a consequence of poor court decisions and economic doctrine, particularly in the U.S. But that inevitably leads us to the biggest issue: how to move the needle on corporate culture. Susan Etlinger has suggested that, in a time when public distrust and disenchantment is running high, ethics is a good investment. Upper-level management is only starting to see this; changes to corporate culture won’t happen quickly.

Users want to engage with companies and organizations they can trust not to take unfair advantage of them. Users want to deal with companies that will treat them and their data responsibly, not just as potential profit or engagement to be maximized. Those companies will be the ones that create space for ethics within their organizations. We, the data scientists, data engineers, AI and ML developers, and other data professionals, have to demand change. We can't leave it to people that "do" ethics. We can't expect management to hire trained ethicists and assign them to our teams. We need to live ethical values, not just talk about them. We need to think carefully about the consequences of our work. We must create space for ethics within our organizations. Cultural change may take time, but it will happen—if we are that change. That’s what it means to do good data science.

Continue reading Doing good data science.

Categories: Technology

Do you need a service mesh?

O'Reilly Radar - Tue, 2018/07/10 - 03:30

Learn why this new tool is a critical component in microservice-based architectures.

There’s been a lot of recent buzz around the service mesh as a necessary infrastructure solution for cloud-native applications. Despite its surge in popularity, there’s still some confusion about the precise value of adoption. Because the service mesh has proven itself a necessary building block when architecting robust microservice-based applications, it has received a lot of accolades, and the momentum behind its adoption has been wild. Beyond the hype, it’s necessary to understand what a service mesh is and what concrete problems it solves so you can decide whether you might need one.

A brief introduction to the service mesh

The service mesh is a dedicated infrastructure layer for handling service-to-service communication in order to make it visible, manageable, and controlled. The exact details of its architecture vary between implementations, but generally speaking, every service mesh is implemented as a series (or a “mesh”) of interconnected network proxies designed to better manage service traffic.

This type of solution has gained recent popularity with the rise of microservice-based architectures, which introduce a new breed of communication traffic that is often adopted without much forethought. The shift is sometimes described as the difference between north-south and east-west traffic patterns. Put simply, north-south traffic is server-to-client traffic, whereas east-west is server-to-server traffic. The naming convention comes from diagrams that “map” network traffic, which typically draw client-server traffic vertically and server-to-server traffic horizontally. In the world of server-to-server traffic, aside from considerations at the network and transport layers (L3/L4), there’s a critical difference at the session layer to account for.

In that new world, service-to-service communication becomes the fundamental determining factor for how your applications behave at runtime. Application functions that used to occur locally as part of the same runtime instead occur as remote procedure calls being transported over an unreliable network. That means that the success or failure of complex decision trees reflecting the needs of your business now require you to account for the reality of programming for distributed systems. For most, that’s a new realm of expertise that requires creating and then baking a lot of custom-built tooling right into your application code. The service mesh relieves app developers from that burden, decouples that tooling from your apps, and pushes that responsibility down into the infrastructure layer.

With a service mesh, each application endpoint (whether a container, a pod, or a host, and however these are set up in your deployments) is configured to route traffic to a local proxy (installed as a sidecar container, for example). That local proxy exposes primitives that can be used to manage things like retry logic, encryption mechanisms, custom routing rules, service discovery, and more. A collection of those proxies form a “mesh” of services that now share common network traffic management properties. Those proxies can be controlled from a centralized control plane where operators can compose policy that affects the behavior of the entire mesh.
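
The routing pattern described above can be sketched in a few lines. This is an illustrative sketch only: the proxy address, port, and use of the `Host` header to name the target service are assumptions for the example, not the configuration of any particular mesh implementation.

```python
import urllib.request

# In a sidecar deployment, the application never dials remote services
# directly. It sends every outbound request to a proxy on localhost; the
# proxy applies mesh policy (retries, encryption, routing rules) and
# forwards the request to a healthy remote endpoint.
SIDECAR_ADDR = "localhost:15001"  # assumed sidecar port for illustration

def sidecar_url(path: str) -> str:
    # All outbound traffic is addressed to the local proxy.
    return f"http://{SIDECAR_ADDR}/{path.lstrip('/')}"

def call_service(service_name: str, path: str) -> bytes:
    # The app refers to the target service by logical name; the proxy
    # handles service discovery and load balancing on its behalf.
    req = urllib.request.Request(sidecar_url(path),
                                 headers={"Host": service_name})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()
```

The point of the pattern is visible in `call_service`: the application knows only a logical service name and a local address, while everything operationally hard about reaching the real endpoint lives in the proxy.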

Because service-to-service communication is the fundamental determining factor for the runtime behavior of microservice-based applications, the most obvious place to derive value from the service mesh is management of messages used for remote procedure calls (or API calls). Inevitably, comparisons are then made between the service mesh and other message management solutions like messaging-oriented middleware, an enterprise service bus (ESB), enterprise application integration patterns (EAI), or API gateways. The service mesh may have minor feature overlap with some of those, but as a whole, it’s oriented around a larger problem set.

The service mesh is different because it’s implemented as infrastructure that lives outside of your applications. Your applications don’t require any code changes to use a service mesh. The value of a service mesh is primarily realized when examining management of RPCs (or messages), but its value extends to management of all inbound and outbound traffic. Rather than coding that remote communication management directly into your apps, the service mesh allows you to manage that logic across your entire distributed infrastructure more easily.

The problem space

At its core, the service mesh exists to solve the challenges inherent to managing distributed systems. This isn’t a new problem, but it is a problem that many more users now face because of the proliferation of microservices. Programmers who are accustomed to dealing with distributed systems will recognize the fallacies of distributed computing:

  • The network is reliable
  • Latency is zero
  • Bandwidth is infinite
  • The network is secure
  • Topology doesn’t change
  • There is one administrator
  • Transport cost is zero
  • The network is homogeneous

These mistaken assumptions present themselves when running at scale. By that point, it’s typically too late to turn back, and developers often find themselves scrambling to build solutions to these newly discovered landmines. But these are actually well-understood problems with several proven (if not entirely reusable) solutions that have been built over the years.

In the past, application developers have solved these problems by creating custom tools directly within their applications: open a socket, transmit data, retry for some specified period if it fails, close the socket when the transaction reaches some inevitable conclusion, and so on. The burden of programming distributed applications was placed directly on the shoulders of each developer, and the logic to do so was tightly coupled into every distributed application as a result.
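
A minimal sketch of that hand-rolled approach might look like the following. This is hypothetical code illustrating the pattern, not taken from any real application; the backoff values and retry counts are arbitrary choices for the example.

```python
import socket
import time

def send_with_retries(host: str, port: int, payload: bytes,
                      attempts: int = 3, timeout: float = 2.0) -> bool:
    # The kind of resiliency logic that used to be written inside every
    # distributed application: open a socket, transmit data, retry with
    # backoff on failure, give up after a fixed number of attempts.
    delay = 0.1  # initial backoff between attempts
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port),
                                          timeout=timeout) as sock:
                sock.sendall(payload)
                return True  # transaction reached its conclusion
        except OSError:
            if attempt == attempts - 1:
                return False  # out of retries; surface the failure
            time.sleep(delay)
            delay *= 2  # exponential backoff before the next try
    return False
```

Every service call in every application needed some version of this, which is exactly the logic a service mesh moves out of application code and into a proxy configured once for the whole fleet.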

As an incremental step toward a reusable solution, network resiliency libraries (e.g., Netflix’s Hystrix or Twitter’s Finagle) emerged. Include these libraries in your application code and you now have a set of pre-developed tools ready to go. While these solutions made incredible leaps forward, they were also of limited value for polyglot applications. Different programming languages require different libraries, and the challenge then shifts to managing integration among those libraries. Consistent management across different application endpoints is an inherent challenge in this model.

Enter the service mesh.

The service mesh is meant to solve the challenges of programming for distributed systems. In today’s world, that means the question you should first be asking yourself is, “Do I have a lot of services communicating with each other in my application infrastructure?”

If you primarily manage traditional monolithic applications (even if they’re inside a container for some odd reason), you might still realize some benefit from a service mesh, but its value for you is significantly smaller.

If you do manage a number of smaller (née, micro) services, then the reckoning that is dealing with the fallacies of distributed computing is coming for you (if you haven’t slammed into that wall already). As microservice applications evolve, new features are typically introduced as additional external services. As the distribution of your applications continues to grow, so will your need for a solution like the service mesh.

The service mesh exists to provide solutions to the challenges of ensuring reliability (retries, timeouts, mitigating cascading failures), troubleshooting (observability, monitoring, tracing, diagnostics), performance (throughput, latency, load balancing), security (managing secrets, ensuring encryption), dynamic topology (service discovery, custom routing), and other issues commonly encountered when managing microservices in production.

If you currently face these problems, or if you’ve adopted cloud-native and microservice-architecture design patterns, then the service mesh is a tool you should explore to determine if it will work for your environment. By focusing on why this type of tool exists and the specific types of problems it solves, you can avoid the hype and jump right into quantifying its value for you.

This post is a collaboration between O'Reilly and Buoyant. See our statement of editorial independence.

Continue reading Do you need a service mesh?.

Categories: Technology

Segueing into a product role

O'Reilly Radar - Mon, 2018/07/09 - 08:20

The clearest path to a product management role is at your current organization.

Continue reading Segueing into a product role.

Categories: Technology

How to design a product at a startup

O'Reilly Radar - Mon, 2018/07/09 - 08:15

It's all about building an MVP.

As far as the customer is concerned, the interface is the product.

[Raskin 2000, 5], Jeff Raskin, The Humane Interface

For someone browsing the web, Google is a text box and a page of results, and not the bots that crawl the web, the algorithm to rank pages, or the hundreds of thousands of servers in multiple data centers around the world. For someone who needs a ride, Uber is a button on their phone they can push to order a car, and not the real-time dispatch system, the payment processing systems, or all the effort that goes into recruiting drivers and fighting with regulators. And for someone using a smartphone, the iPhone consists of the parts they can see (e.g., the screen), hear (e.g., a caller’s voice), and touch (e.g., a button), and not the GSM, WiFi, and GPS radios, the multi-core CPU, the operating system, the supply chain that provides the parts, or the factories in China that assemble them. To a customer, the design of the product is all that matters.

Joel Spolsky called this "The Iceberg Secret." Just as the part of the iceberg that you can see above the water represents only 10% of its total size, the part of a product that you can see and touch—the user interface—represents only 10% of the work. The secret is that most people don’t understand this [Spolsky 2002]. When they see a user interface that looks crappy, they assume everything about the product is crappy. If you’re doing a demo to a potential customer, above all else you must have a polished presentation. You can’t ask them to imagine how it would look and just focus on the "functionality." If the pixels on the screen look terrible, the default assumption is that the product must be terrible, too.

You might think the Iceberg Secret doesn’t apply to programmers, but no one is immune. I prefer iPhone to Android, open source projects with beautiful documentation pages to those with plain-text readmes, and blog posts on Medium to those on Blogger. It’s almost as if we’re all hardwired to judge a book by its cover. But product design isn’t just the cover. It’s also how it’s printed, the title, the praise on the back, the typography, the layout, and even the text itself.

Most people make the mistake of thinking design is what it looks like. People think it’s this veneer—that the designers are handed this box and told, Make it look good! That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.

[Walker 2003], Steve Jobs

Design is how it works. Yes, the iPhone is prettier than most other smartphones, and that counts for something, but it’s more than just a question of style. The sharp screen, the fonts, and the layout make it easier to read the text. The buttons are big and easy to use. The touchscreen is precise and the UI is fast and responsive. The phone tries to anticipate your needs, dimming the screen automatically in response to the amount of ambient light, or shutting off the screen entirely when you hold the phone to your ear. You don’t have to think about it or fight with it—it "just works." And while other smartphones may have caught up in terms of features and price, they still can’t match the experience. This is why Apple invests so heavily in design, and not coincidentally, this is why it’s the most valuable company in the world.

Design is a useful skill even if you aren’t building a product and even if the word "designer" is not part of your job title. Everybody uses design all the time. You use design every time you create a slide deck for a presentation, format your résumé, build a personal home page, arrange the furniture in your living room, plan out the syllabus for a class, or come up with the architecture for a software system. Design is fundamentally about how to present information so other people can understand it and use it. Given that much of success in life comes down to how well you communicate, it’s remarkable just how little design training most people get as part of their education.

Due to my lack of training, I used to think of design and art ability as something you either had or you didn’t. My own artistic abilities were limited to drawing stick figures all over my notebooks, so clearly I didn’t have that ability. It took me a long time to realize that design and art are both skills that can be learned through an iterative process.

Design is iterative

A few years ago, I took an art class with my sister. The art teacher was a family friend, and he would come over to our house and have us paint still lifes and skylines using pencils and water colors. One day, I was working on a fruit still life and struggling to paint an orange. The best I could do was a bland blotch of orange paint in a vaguely circular shape. The teacher noticed I was frustrated and asked me, "What color is an orange?" Not sure if it was a trick question, I replied, "Orange?" The art teacher smiled and said, "What else?" I stared at the piece of fruit for a minute and said, "I guess there’s a little white and yellow where the light hits the skin." Still smiling, the art teacher said, "Good. And what else?" I stared at the fruit some more. "And nothing else. It’s a goddamn orange. All I see is orange and various shades of orange."

The art teacher leaned over, took the brush from my hands, and began making some changes to my painting. As he worked, he explained what he was doing. "A spherical shape will have a highlight, which we can paint in white and yellow, and some shadows, which can be red, brown, and green. The orange also casts a shadow to the side, so let’s use gray and blue for that, and add in some brown and sienna to separate the edge of the orange from the shadow it casts" (see Figure 1).

Figure 1. How to paint an orange (image courtesy of Charlene McGill)

I stared back and forth between the canvas and the actual fruit. An orange isn’t orange. It’s orange, yellow, white, red, brown, sienna, green, and blue. After a little while, I had a few realizations:

  • Art involves many concrete tools and techniques that can be learned. The art teacher knew all the ingredients of how to paint something spherical, almost like a recipe: you take several cups of base color, mix in a tablespoon of shadow, add a pinch of highlight, stir, and you have a sphere.

  • The representation of an orange in my head is different from what the orange looks like in the real world, but I’m unaware of all the missing details until I try to reproduce the image of an orange on canvas.

  • Similarly, the representation of an orange on a canvas is different from what an orange looks like in the real world. This difference is usually intentional because the goal of art is not to create a photocopy of something in the real world but to present it in a specific way that makes you think or feel something.1

I’m still not much of an artist, but understanding the mindset of an artist has made me realize that artistic talent is a skill that can be improved by practicing, by training your eye to deliberately observe why something looks the way it does, and by recognizing that the goal of art is to communicate something to the viewer. These same three principles apply to design:

  • Design is a skill that can be learned.

  • You have to train your eye to consciously recognize why some designs work and others do not.

  • The goal of design is to communicate something to the user.

I hope to convince you of all three points in this chapter. The most important thing to keep in mind about the first point is that design is an iterative, incremental process. Your first draft will be awful, but the only way to come up with a good design, and the only way to become a good designer, is to keep iterating. What makes this particularly challenging is that during your life, you’ve developed a sense of taste from having seen thousands of products created by professional designers, and there is no way your early design work will match up. It’s like a music lover who has spent years listening to the beautiful violin concertos of Mozart and Bach, who dreams of playing at Carnegie Hall, and who finally picks up the violin for the first time, excitedly draws the bow across the strings, and is horrified by the screeching sounds that come out. Everyone who does creative work goes through a phase where their work does not satisfy their own taste. This is completely normal and the only solution is to do more work. Keep playing the violin. Keep coming up with new designs. Keep iterating and eventually, perhaps after a long time, your skills will catch up to your taste, and you’ll finally produce work that makes you happy [Glass 2009]. But for now, just remember that done is better than perfect.

The second point is that to get better at design, you need to try to consciously understand why a particular design does or doesn’t work for you. Next time you marvel at how simple it is to use an iPad, pause and ask yourself why. What is it about its design that makes it simple enough that almost anyone can use it, from a non-tech-savvy grandparent to a two-year-old? Why is it that these same people can’t figure out how to use a desktop computer or a tablet that requires a stylus? We’ll discuss what your eye should look for in a design in Visual Design.

Finally, the third point—that the goal of design is to communicate with the user—means that although looking pretty is a valuable aspect of design, it’s even more important to recognize that design is about helping people achieve their goals. Therefore, every design needs to start with an understanding of the user, which we’ll cover next as part of user-centered design.

1There is a classic story of Pablo Picasso traveling on a train when a passenger recognizes him and asks, "Why do you distort reality in your art instead of painting people the way they actually are?" Picasso asks, "What do you mean by the way they actually are?" The passenger pulls out a photo of his wife from his wallet and says, "Well, this is what my wife actually looks like." Picasso looks at the image and says, "She’s rather small and flat, isn’t she?"

User-centered design

I remember sitting in a conference room at LinkedIn with several co-workers getting ready to kick off our next project. We knew what we wanted to build and we had broken down the work into a list of tasks. All that was left was to record these tasks somewhere so we could track our progress over time. We decided to try out the issue-tracking software that the rest of the company was using. It had all sorts of fancy features, including search, reporting tools, and colorful charts. There was just one problem: we couldn’t figure out how to use it.

We had seven professional programmers in that room. We knew what we wanted to do. We thought we knew how to do it, too, as we had all used issue-tracking software many times before, and we had all been using websites—nay, building websites—for several decades. I therefore find it hard to adequately capture just how frustrating it was to run into an issue-tracking website that utterly stumped everyone in the room. We spent several hours trying to figure out how to define a new project in the issue tracker, how to start a project once we defined it, how to move tickets between projects, how to use the 15 different view modes, why all the charts were empty after we finished a project, and what the 50 different text boxes on the issue-creation screen were for. It was maddening. After lots of frustration, we gave up and ended up using Post-it notes. The issue-tracking software was better than Post-it notes in every aspect, except in the one aspect that mattered the most: helping people achieve their goals.

Notice the emphasis on the words "people" and "goals." Design isn’t about buttons, or colorful charts, or features. It’s about people and goals. In the story just described, the people were software experts and the issue tracker miserably failed to help us accomplish our goal of tracking the work for our project. Worse yet, it failed at the most important design goal of all:

The number-one goal of all computer users is to not feel stupid.

[Cooper 2004, 25], Alan Cooper, The Inmates are Running the Asylum

In the past, my process for designing software—if you could really call it a process—consisted of the following steps:

  1. Sit down with the team and ask "What features would be cool for version 5.0 of our product?"

  2. Come up with a long list of features, argue over prioritization, and set an arbitrary deadline.

  3. Work furiously to get as many of the features as possible done before the deadline. Inevitably run out of time and start cutting the features that are taking too long.

  4. Cram whatever features were completed on time anywhere they would fit into the user interface.

  5. Release version 5.0 to users. Hope and pray that users like it.

  6. Repeat.

There are many things wrong with this process, but perhaps the biggest is that at no point do the goals of a real user come into the picture. I built things that were "cool" rather than what users actually needed. I had no idea how to figure out what users actually wanted (something I’ll cover as part of not available), but I knew how to add features, so that’s exactly what I did. I had a bad case of feature-itis and it took me a long time to find the cure.

The solution is to realize that you can’t bolt a "design" onto a product after the engineering and product work. Design is the product. It must be part of your process from day one. Here are five principles of user-centered design that you should incorporate into your product development process:

  • User stories

  • Personas

  • Emotional design

  • Simplicity

  • Usability testing

User stories

Doing design up front does not mean that you need to come up with a detailed 300-page spec, but before you dive into the code, you should be able to define a user story. A user story is a short description of what you’re building from the perspective of the user. It should answer three questions:

  • Who is the user?

  • What are they trying to accomplish?

  • Why do they need it?

The first question, "Who is the user?", requires you to understand people, which is surprisingly hard. Because you’re a person, you probably think you understand why people act the way they do, or at least your own motivations, but as you saw in the previous chapter, the vast majority of your behavior is controlled by the subconscious and you are often completely unaware of it (see not available).

If you’re a programmer, understanding your users is even harder. Every person forms a conceptual model in their head of how a product works. While a programmer’s model is usually very detailed—often at the level of interfaces, events, messages, APIs, network protocols, and data storage—the typical user’s model is usually less detailed, inaccurate, and incomplete (e.g., many users don’t differentiate between software and hardware or the monitor and the computer). This mismatch in conceptual models makes it difficult for a programmer to communicate with a user.

And therein is the catch: communication is what design is all about. You’re trying to present information to the user, to tell them what can be done, and to show them how to do it. Unfortunately, many programmers don’t realize this: they know so much about their software that they think of it in a completely different way than the user does. We can’t remember what it was like to be a novice. This is called the curse of knowledge, a cognitive effect beautifully demonstrated by a Stanford study:

In 1990, Elizabeth Newton earned a Ph.D. in psychology at Stanford by studying a simple game in which she assigned people to one of two roles: "tappers" or "listeners." Tappers received a list of twenty-five well-known songs, such as "Happy Birthday to You" and "The Star Spangled Banner." Each tapper was asked to pick a song and tap out the rhythm to a listener (by knocking on a table). The listener’s job was to guess the song, based on the rhythm being tapped. (By the way, this experiment is fun to try at home if there’s a good "listener" candidate nearby.)

The listener’s job in this game is quite difficult. Over the course of Newton’s experiment, 120 songs were tapped out. Listeners guessed only 2.5 percent of the songs: 3 out of 120.

But here’s what made the result worthy of a dissertation in psychology. Before the listeners guessed the name of the song, Newton asked the tappers to predict the odds that the listeners would guess correctly. They predicted that the odds were 50 percent. The tappers got their message across 1 time in 40, but they thought they were getting their message across 1 time in 2. Why?

When a tapper taps, she is hearing the song in her head. Go ahead and try it for yourself—tap out "The Star-Spangled Banner." It’s impossible to avoid hearing the tune in your head. Meanwhile, the listeners can’t hear that tune—all they can hear is a bunch of disconnected taps, like a kind of bizarre Morse Code.

[Heath and Heath 2007, 19], Chip Heath and Dan Heath, Made to Stick

As a programmer, when you’re designing your software, you’re always "hearing the tune in your head." Your user, however, doesn’t hear anything. All they have to work with is the user interface (UI) you designed. You can’t expect the user to know what you know and you can’t rely on filling the gaps with documentation or training—as Steve Krug said, "the main thing you need to know about instructions is that no one is going to read them" [Krug 2014, 51]—so your only option for building a successful product is to get great at design.

This might sound obvious, but it’s easy to forget it as a programmer because the tools you use to do your job are the pinnacle of bad design. In part, this is because most software designed for use by programmers is also designed for use by the computer, and the computer doesn’t care about usability. All day long, you are dealing with memorizing magical incantations (the old joke goes "I’ve been using Vim for about two years now, mostly because I can’t figure out how to exit it."), learning to parse esoteric formats like logfiles, core dumps, and XML (to be proficient at Java, you must also become fluent in a language called stack trace), and being treated like a worthless criminal by error messages ("illegal start of expression," "invalid syntax," "error code 33733321," "abort, retry, fail"). To be a successful programmer, you have to develop a high tolerance for terrible design, almost to the point where you don’t notice it any more. But if you want to build software that normal people can use, you have to have empathy and you have to silence many of your instincts as a programmer.

The process of programming subverts the process of making easy-to-use products for the simple reason that the goals of the programmer and the goals of the user are dramatically different. The programmer wants the construction process to be smooth and easy. The user wants the interaction with the program to be smooth and easy. These two objectives almost never result in the same program.

[Cooper 2004, 16], Alan Cooper, The Inmates are Running the Asylum

Even if you get past the hurdle of understanding your users, the second question, "What are they trying to accomplish?", still trips many people up. One of the most common design mistakes is to confuse a user’s goals (what they want to accomplish) with tasks (how they can accomplish it). A classic example comes from the Space Race during the Cold War. NASA scientists realized that a pen could not work in the microgravity of space, so they spent millions of dollars developing a pen with a pressurized ink cartridge that could write in zero gravity, upside down, underwater, and in a huge range of temperatures. The Soviets, meanwhile, used a pencil. This story is an urban legend,2 but it’s a wonderful illustration of what happens when you lose sight of the underlying goal and become overly focused on a particular way of doing things. As Abraham Maslow said, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail" [Maslow 1966, 15].

One of the best ways to tell tasks apart from goals is to use the "five whys" technique you saw in the previous chapter (see not available). The other is to follow Alan Cooper’s advice from The Inmates Are Running the Asylum:

There is an easy way to tell the difference between tasks and goals. Tasks change as technology changes, but goals have the pleasant property of remaining very stable. For example, to travel from St. Louis to San Francisco, my goals are speed, comfort, and safety. Heading for the California gold fields in 1850, I would have made the journey in my new, high-tech Conestoga wagon. In the interest of safety, I would have brought my Winchester rifle. Heading from St. Louis to the Silicon Valley in 1999, I would make the journey in a new, high-tech Boeing 777.

[Cooper 2004, 150], Alan Cooper, The Inmates are Running the Asylum

The point of the third question, "Why do they need it?", is to force you to justify why you’re building what you’re building. This is where the customer development process from the previous chapter comes into play (see not available). If this product or feature isn’t solving an important problem for a real user, you shouldn’t be wasting your time building it.

You should always take the time to answer all three user story questions in writing. The act of transforming your product ideas from the ephemeral and fuzzy form of thoughts in your head into concrete words and drawings on paper will reveal gaps in your understanding, and it’s cheaper to fix those when they are just a few scribbles on a piece of paper than after you’ve written thousands of lines of code. A few lines of text and some sketches in a readme, wiki, or Post-it note are enough to force you to walk through the end-to-end experience from the user’s perspective and ensure that you know what you’re building, who you’re doing it for, and why it’s worth doing.

Personas

Here’s another quick way to significantly improve your design skills: stop designing products for the "average person." The average person has one testicle and one fallopian tube [Burnham 2010], so if you’re designing for average, you are designing for no one.

The actual Average User is kept in a hermetically sealed vault at the International Bureau of Standards in Geneva.

[Krug 2014, 18], Steve Krug, Don't Make Me Think

A better idea is to design for personas. A persona is a fictional character that represents a real user of your product who has specific goals, traits, and desires. For example, I designed for the following personas:

  1. Mike: a 19-year-old undergraduate student studying computer science at UMass Amherst. Mike has been obsessed with technology most of his life, started coding in middle school, and spends a lot of his day browsing Reddit and Hacker News. Mike is starting to think about jobs after graduation. He’s interested in startups, but his parents are pushing him to join a well-known, established company, and he’s not sure what to do.

  2. Monica: a 28-year-old Senior Software Engineer working for Oracle. Monica got a computer science degree from MIT and then worked at several big software companies after college, finally landing at Oracle after several years. She is getting bored with the work and is looking for something that will challenge her more and allow her to make a bigger impact in the world. She has a couple of startup ideas, but she’s not sure what to do next.

  3. Mahesh: a 21-year-old programmer who dropped out of Stanford along with his roommate to start a company. Mahesh and his co-founder have been working on the company for six months, but they are struggling. They are not sure how to design the product, what technologies they should use to build it, how to get customers to use it, or where to find developers to help them.

Each persona should include a name, age, short bio, work history, and a set of skills, beliefs, goals, and any other details relevant to your business.3 To make it seem even more like a real person, it’s a good idea to add a photograph to each persona (preferably a photo you find on a stock photography website and not a photo of anyone you know in real life). Once you have defined personas for your product, never mention the "average user" again, either in user stories or in conversations. Don’t let your team argue over whether the "average user" would prefer feature X or Y, as everyone will have a different understanding of what is "average." Instead, only discuss whether your personas would want X or Y. For example, would the "average user" of this book’s companion site want a calculator to help them estimate the value of their stock options? I have no idea. Would Mike, Monica, or Mahesh want such a calculator? I can make an educated guess that Mike and Mahesh would find such a tool useful.
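One way to keep personas front and center on an engineering team is to encode them as structured data that design discussions can reference by name. The sketch below is purely illustrative: the `Persona` fields, the example goal strings, and the `who_wants` helper are my assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A fictional but specific user that design discussions refer to by name."""
    name: str
    age: int
    bio: str
    goals: list = field(default_factory=list)

# Hypothetical encodings of two of the personas described above.
mike = Persona(
    name="Mike", age=19,
    bio="CS undergrad at UMass Amherst, torn between startups and big companies",
    goals=["evaluate startup jobs vs. established companies"],
)
monica = Persona(
    name="Monica", age=28,
    bio="Senior Software Engineer at Oracle, bored and weighing startup ideas",
    goals=["find more challenging, higher-impact work"],
)

def who_wants(feature_goal, personas):
    """Replace 'would the average user want X?' with 'which personas want X?'"""
    return [p.name for p in personas if feature_goal in p.goals]

print(who_wants("evaluate startup jobs vs. established companies", [mike, monica]))
# prints ['Mike']
```

The point of the exercise is the question it forces: not "would the average user want this feature?" but "which named persona does this feature serve?"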

Personas should be based on your market research and customer interviews (see not available). Your goal is to identify a small number (typically 1–3) of primary personas whose goals must be fulfilled or else the entire product is a failure. For example, Mike, Monica, and Mahesh are the primary personas for this book’s companion site, so if they can’t find what they need, the product might as well not exist. Your goal is to make these primary personas as happy as possible by figuring out their goals and building a product that is exceptional at helping them achieve those goals, and nothing else (see Focus on the differentiators).

The broader a target you aim for, the more certainty you have of missing the bull’s-eye. If you want to achieve a product-satisfaction level of 50%, you cannot do it by making a large population 50% happy with your product. You can only accomplish it by singling out 50% of the people and striving to make them 100% happy. It goes further than that. You can create an even bigger success by targeting 10% of your market and working to make them 100% ecstatic. It might seem counterintuitive, but designing for a single user is the most effective way to satisfy a broad population.

[Cooper 2004, 126], Alan Cooper, The Inmates are Running the Asylum

The reason personas are such a powerful design tool is that they force you to think about real people and to take into account their wants, limitations, personalities, and perhaps most importantly, their emotions.

Emotional design

Studies have shown that people interact with computers and software much like they would with another human. Most people act politely toward computers, though occasionally things get hostile; they react differently to computers with female voices than those with male voices; and in the right scenario, people think of computers as team members or even friends [Reeves and Nass 2003]. Have you ever thrown a temper tantrum when your printer refuses to work? Have you ever found a piece of software that you simply love? Have you ever begged and pleaded with your computer, hoping it hadn’t lost your Word document after a crash? Whether you realize it or not, every piece of software makes you feel something. Most of your emotional reactions are automatic, and the parts of the brain that control them have not evolved enough to distinguish between a real person and an inanimate object that acts like a person.

This is why the best designs always have an aspect of humanity and emotion in them. For example, Google has many hidden Easter eggs (e.g., try Googling "recursion," "askew," or "Google in 1998"), April Fools’ jokes (e.g., look up PigeonRank and Gmail Paper), an "I’m Feeling Lucky" button, and on many days, they replace their logo with a Google Doodle to commemorate important events. Virgin America replaced the standard, boring flight safety video with an entertaining music video that now has more than 10 million views on YouTube. During the holiday season, Amazon adds a music player to the website that lets you listen to Christmas songs while you shop. On IMDb, the ratings for This Is Spinal Tap go up to 11 and they show parodies of famous movie quotes on their error pages, such as "404: Page not found? INCONCEIVABLE. - Vizzini, The Princess Bride." MailChimp includes its mascot, a monkey dressed as a mailman, on almost every page; Tumblr’s downtime page used to show magical "tumblebeasts" wreaking havoc in their server room; and Twitter’s downtime page shows a "fail whale" (see Figure 2).

Figure 2. MailChimp’s Freddie (top), Tumblr’s Tumblebeasts (bottom left), and Twitter’s fail whale (bottom right)

These might seem like little details, but they are a big deal, as the emotional aspects of a design are as important to users as the functional aspects.4

Think of your product as a person. What type of person do you want it to be? Polite? Stern? Forgiving? Strict? Funny? Deadpan? Serious? Loose? Do you want to come off as paranoid or trusting? As a know-it-all? Or modest and likable? Once you decide, always keep those personality traits in mind as the product is built. Use them to guide the copywriting, the interface, and the feature set. Whenever you make a change, ask yourself if that change fits your app’s personality. Your product has a voice—and it’s talking to your customers 24 hours a day.

[Fried, Hansson, and Linderman 2006, 123], Jason Fried, David Heinemeier Hansson, and Matthew Linderman, Getting Real

Whatever personality or voice you choose for your product, I recommend that politeness is part of it. If people think of your software as a person, then it’s a good idea to teach it some manners. Here are a few examples:

Be considerate

The program just doesn’t care about me and treats me like a stranger even though I’m the only human it knows.

[Cooper 2004, 163], Alan Cooper, The Inmates are Running the Asylum

Whenever possible, try to design software that acts like a considerate human being who remembers you. Remember the user’s preferences, what they were doing the last time they were using your software, and what they’ve searched for in the past, and try to use this information to predict what they will want to do in the future. For example, most web browsers remember the URLs you’ve typed in the past. Google Chrome takes this even further: as you start typing a URL, it not only autocompletes the address for you, but if it’s a URL you’ve typed many times before, it will start to fetch the page before you’ve hit Enter so that it loads faster. Google is also considerate with passwords. If you recently changed your password and you try to log in with the old one by accident, instead of the standard "invalid password" error message, Google shows you a reminder that "your password was changed 12 days ago."
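Chrome’s actual implementation is far more sophisticated, but the core "remember what the user did and predict what they’ll do next" idea can be sketched as a frequency-ranked prefix match over the user’s history. All names here are hypothetical, for illustration only:

```python
from collections import Counter

class UrlHistory:
    """Remembers typed URLs and suggests the most-visited match for a prefix."""

    def __init__(self):
        self.visits = Counter()

    def record(self, url):
        # Each visit makes this URL a stronger candidate for autocompletion.
        self.visits[url] += 1

    def suggest(self, prefix):
        # Rank candidate completions by how often the user visited them.
        matches = [u for u in self.visits if u.startswith(prefix)]
        return max(matches, key=lambda u: self.visits[u], default=None)

history = UrlHistory()
for _ in range(5):
    history.record("news.ycombinator.com")
history.record("news.bbc.co.uk")

print(history.suggest("news"))  # prints news.ycombinator.com
```

A real browser would also pre-fetch the top suggestion, as Chrome does, so the page is already loading before the user hits Enter.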

Be responsive

A good design is responsive to the user’s needs. For example, Apple laptops detect the amount of ambient light in the room and adjust the screen brightness and the keyboard backlight automatically. Of course, responsiveness doesn’t have to be fancy. One of the simplest design elements that is often overlooked is providing basic feedback. Did the user click a button? Show them an indication to confirm the click, such as changing the button appearance or making a sound. Will it take time to process the click? Show an indication that there is work going on in the background, such as a progress bar or an interstitial. Programmers often overlook this because in local testing, the processing happens on their own computer, so it’s almost instantaneous. In the real world, the processing may happen on a busy server thousands of miles away, with considerable lag. If the UI doesn’t show feedback, the user won’t know if the click went through, and will either jam the button down 10 more times or lose confidence and give up entirely.

Be forgiving

Human beings make mistakes. Constantly. Design your software to assume that the user will make a typo, click the wrong button, or forget some critical information. For example, when you try to send an email in Gmail, it scans the text you wrote for the word "attachment," and if you forgot to attach something, it’ll pop up a confirmation dialog to check if that was intentional. Also, after you click Send, Gmail gives you several seconds to "undo" the operation, in case you change your mind or forgot some important detail. I wish all software had an Undo button. Sometimes I wish life did, too.
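Gmail’s forgotten-attachment check can be approximated in a few lines: scan the draft for words that imply an attachment, and warn before sending if none is present. The keyword list and function names below are my assumptions for illustration, not Gmail’s actual logic:

```python
import re

# Words that suggest the sender intended to attach a file (hypothetical list).
ATTACHMENT_HINTS = re.compile(r"\b(attached|attachment|enclosed)\b", re.IGNORECASE)

def forgot_attachment(body, attachments):
    """Return True if the draft mentions an attachment but none is present."""
    return bool(ATTACHMENT_HINTS.search(body)) and not attachments

print(forgot_attachment("Please see the attached report.", attachments=[]))
# prints True -- a good moment to pop up a confirmation dialog
print(forgot_attachment("Please see the attached report.", ["report.pdf"]))
# prints False
```

The check is forgiving in both directions: it only asks a question ("Did you mean to attach a file?") rather than blocking the send, so a false positive costs the user one click.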

The last point on being forgiving of errors is so important that it’s worth discussing it a bit more.

Eliminate the term human error. Instead, talk about communication and interaction: what we call an error is usually bad communication or interaction. When people collaborate with one another, the word error is never used to characterize another person’s utterance. That’s because each person is trying to understand and respond to the other, and when something is not understood or seems inappropriate, it is questioned, clarified, and the collaboration continues. Why can’t the interaction between a person and a machine be thought of as collaboration?

[Norman 2013, 67], Don Norman, The Design of Everyday Things

No one likes error messages. No one wants to see "PC Load Letter." And most of all, no one wants to feel like the error is their fault. Online forms are often the worst offenders. You spend a long time filling out dozens of text boxes, click Submit, and when the page reloads, you get an obscure error message at the top of the page. Sometimes it’s not clear what you did wrong, and on some particularly rage-inducing websites, all the data you entered is gone. This is an indication that the designer did not think through the error states of the application. Here are some rules of thumb to avoid this mistake:

  • Instead of error messages, provide help and guidance [Norman 2013, 65]. For example, avoid words like "Error," "Failed," "Problem," "Invalid," and "Wrong." Instead, explain what kind of input you’re looking for and how the user’s input differs.

  • Check the user’s input while the user is typing (not after a page submission) and show feedback, both positive and negative, right next to where the user is looking (not at the top of the page).

  • Never lose the user’s work.

Twitter’s sign-up form is a great example. It gives you feedback while you type, either showing a green checkmark if your input is valid or a red X with a brief message explaining what is required instead, as shown in Figure 3. For instance, the password field has a small progress bar that fills up as you enter a more secure password; if you enter a username that’s already registered, you see suggestions for similar usernames that are available; and if you make a typo in your email address, a message shows up asking whether you meant a corrected address. It’s a great user experience that makes filling out a form feel less like doing paperwork and more like having a conversation with a person who is helpful and politely asks you for clarification when they don’t understand you.
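The rules above can be sketched as a validator that returns guidance text instead of a bare "Invalid input" flag, including a "did you mean" suggestion for likely typos. This is a hypothetical illustration of the technique, not Twitter’s real implementation; the domain list and function names are assumptions:

```python
import difflib

# A few domains to check typos against (an illustrative, incomplete list).
COMMON_DOMAINS = ["gmail.com", "yahoo.com", "hotmail.com", "outlook.com"]

def validate_email(address):
    """Return (ok, guidance): guidance explains what is needed, never 'Error'."""
    if "@" not in address:
        return False, "An email address needs an @, like name@example.com."
    local, domain = address.rsplit("@", 1)
    if domain not in COMMON_DOMAINS:
        # Suggest a close match for likely typos instead of rejecting outright.
        close = difflib.get_close_matches(domain, COMMON_DOMAINS, n=1)
        if close:
            return False, f"Did you mean {local}@{close[0]}?"
    return True, "Looks good."

print(validate_email("monica@gmial.com"))
# prints (False, 'Did you mean monica@gmail.com?')
```

Wired to an `oninput` handler, a check like this gives feedback while the user types, right next to the field, which is exactly the conversational feel the text describes.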

In addition to showing helpful messages, you should try to create a design that prevents mistakes in the first place. In Lean manufacturing, this is called a poka-yoke, which is a Japanese term for "mistake-proofing." For instance, when you’re typing a new question into Stack Overflow, it automatically searches for similar questions to discourage you from submitting duplicates and automatically warns you if your question is likely to be closed for being subjective (e.g., "what is the best X?"), as shown in Figure 4.
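Stack Overflow’s similar-question search is far more sophisticated, but the poka-yoke idea, surfacing likely duplicates before the user can submit one, can be sketched with simple word overlap. This is an assumed illustration, not their actual algorithm:

```python
import re

def similarity(a, b):
    """Jaccard overlap between the word sets of two question titles."""
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / len(wa | wb)

def likely_duplicates(new_title, existing_titles, threshold=0.5):
    """Warn about similar existing questions before the new one is submitted."""
    return [t for t in existing_titles if similarity(new_title, t) >= threshold]

existing = ["How do I parse JSON in Python?", "How to center a div in CSS"]
print(likely_duplicates("How do I parse JSON in Python 3?", existing))
# prints ['How do I parse JSON in Python?']
```

Showing this list under the title field as the user types turns a post-submission rejection ("closed as duplicate") into a gentle, pre-submission nudge.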

Figure 3. Twitter’s sign-up page does a great job of showing help and guidance

The gold standard is making errors more or less impossible. For example, PC motherboards are designed so that every component has a different type of connector, as shown in Figure 5. This ensures that there is no way to accidentally insert the CPU into a PCI slot or plug an ethernet cable into the VGA slot. Modern ATMs return your ATM card and force you to take it before you can get your cash so that you don’t forget your card. It’s slightly harder but still possible to do this in software, too. For example, with Microsoft Word, I always dreaded the possibility that my computer would crash before I had a chance to save my work. With Google Docs, this sort of error is effectively impossible because all changes are auto-saved almost instantly. An even simpler version is disabling the Submit button on a form immediately after a user clicks it to make it impossible to submit the form more than once.
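The disable-after-click idea is usually a line of JavaScript in the browser, but the underlying pattern, making the second click structurally harmless, can be sketched in a few lines (names here are hypothetical):

```python
class SubmitOnce:
    """Disable the action after the first click so duplicate submits are impossible."""

    def __init__(self, action):
        self.action = action
        self.submitted = False

    def click(self):
        if self.submitted:
            # The second and every later click can do no harm.
            return "already submitted"
        self.submitted = True
        return self.action()

order = SubmitOnce(lambda: "order placed")
print(order.click())  # prints order placed
print(order.click())  # prints already submitted
```

The same guard belongs on the server, too, since a user who jams the button ten times may race the UI’s disabled state.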

Figure 4. Stack Overflow tries to prevent errors

In addition to handling error states, you should also make sure your design handles a blank state: that is, what the application looks like the first time a user interacts with it, before she’s entered any data [Fried, Hansson, and Linderman 2006, 97]. A design for a new social network might look great when the user has connected with hundreds of friends and can see all of their updates and pictures in the newsfeed, but what does the design look like when the user first signs up? For example, Figure 6 shows the blank state Twitter previously used for brand-new users.

Figure 5. Some designs make errors nearly impossible (image from Wikipedia)

If you show a completely empty newsfeed, your new users will not have a great experience and are not likely to continue using your service. Figure 7 shows the new design for Twitter’s blank state, which immediately prompts the user to start following popular Twitter accounts.

Figure 6. The old design for Twitter’s blank state [Elman 2011]

Figure 7. The new design for Twitter’s blank state [Elman 2011]

Simplicity

Almost every creative discipline shares a single goal: simplicity. Isaac Newton, a mathematician and scientist, said "truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things" [Powell 2003, 29]. Steve McConnell, a programmer, wrote in his book Code Complete that "managing complexity is the most important technical topic in software development" [McConnell 2004, 78]. And Jonathan Ive, Apple’s chief designer, said that "there is a profound and enduring beauty in simplicity" [Etherington 2013]. Everyone strives for simplicity. The problem is that making things that are simple is not simple.

At first glance, this is unintuitive. We often think of "simple" as minimalistic and as having nothing extra. So if you start with a blank slate and only add a few things here and there, shouldn’t you end up with a simple design? If you’ve ever written an essay or a complicated piece of code, or tried to design a product, you’ll know that your first draft is always an overly complicated mess. It takes a tremendous amount of work to whittle that mess down into something simple.

I would have written a shorter letter, but I did not have the time.

Blaise Pascal

A better way to think about it is that projects don’t start with a blank slate, but with a vast amount of materials, knowledge, and ideas all mixed together. It’s a little bit like sculpting. You start with a huge block of marble and you need to chip away at it, day by day, until you’ve finally revealed the statue that was within the rock (and within your mind). As Antoine de Saint Exupéry said, "perfection is attained not when there is nothing more to add, but when there is nothing more to remove." You might have to remove extraneous features from a product, redundant words from an essay, or extra code from a piece of software. You keep removing things until all you are left with is the very core of the design, the differentiators, and nothing else (see Focus on the differentiators). That’s simplicity.

Simplicity is about asking: what is the one thing I must get done [Heath and Heath 2007, 27]? What is the one thing my product must do? What is the one thing my design must communicate to the user? Ask these questions regularly, and after you’ve come up with an answer, ask them again. Does the product I designed do that one thing? Or did I get lost in the implementation details and end up doing something else?

The opposite question is equally important: what things should my product not do? Every extra feature has a significant cost. With physical objects, this cost is relatively obvious. For example, imagine a Swiss Army knife with 10 tools crammed inside: a knife, a screwdriver, a can opener, tweezers, and so on. Now you’re considering adding a pair of scissors. Scissors take up a lot of room, so you’ll have to make the knife bigger or make all the existing tools more cramped. Either way, it’s clear that this will make the knife more unwieldy to use and more expensive to produce. Therefore, you either set the bar very high for adding a new tool, or you remove one of the original 10 tools to make room [Cooper 2004, 27].

The trade-offs in software are identical—every new feature makes the previous features harder to use and makes the software more expensive to produce—but it’s not nearly as obvious. In fact, most companies believe that the way to build a better product is to cram more and more features into it, release after release, until it can do everything. Except it can’t, because no one can figure out how to use it, as shown in Figure 8.

The companies that succeed at designing something simple are the ones that recognize that the number of features you can cram into software isn’t constrained by physical limitations, like the amount of space in a knife, but by the mental limitations of a human using it. Design needs to be simple not because simple is prettier, but because human memory can only process a small number of items at a time. If you cram too many things into a design, it will quickly exceed the limits of human memory, and the user will find the product overwhelming and unusable. This is why you have to limit the amount of information in any design (less text, fewer buttons, fewer settings) and the number of features in any product (see Focus on the differentiators).

People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.

[Gallo 2011], Steve Jobs

Most programmers love deleting code, especially as the result of finding a more concise solution to a problem. This usually requires that you understand the problem at a deeper level so that you can come up with an implementation that is more elegant. The same is true of design. You should enjoy the process of removing features and chopping out parts of a design, especially as the result of finding a more elegant solution. As with programming, to come up with a more elegant design, you need to develop a deeper understanding of the problem.

Figure 8. Simplicity (image courtesy of Eric Burke)

Sometimes you develop this understanding through user research, customer development, and techniques like the "five whys". But many interesting problems in the world are wicked problems, where you must build the solution before you can really understand what problem you were solving. That is, solving the problem gives you a clearer view of the problem, which will allow you to build an even better solution. Building the new solution will lead to an even better understanding of the problem, and the cycle can repeat again and again. Design is iterative, and it can take many iterations to come up with a simple solution for a hard problem.

Actually, making the solution simple isn’t the goal. The real goal, in the words of Apple’s design chief Jonathan Ive, is to solve problems so that, "you’re not aware really of the solution [and] you’re not aware of how hard the problem was that was eventually solved" [Richmond 2012]. What matters is not that the solution is simple, but that you make the user’s life simple. The iPhone is an incredibly complicated piece of technology, but using it is simple. The only way to find out if you’ve succeeded at making the user’s life simple is to observe them while they are using your product, a process formally known as usability testing.

Usability testing

In the previous chapter, I discussed the idea that, no matter how much thinking and validation you do, some of your assumptions will still be wrong. The solution was to test those assumptions by putting the product in front of real customers (see not available). The same logic applies to design. No matter how good you become at user-centered design, some of your design ideas will not work, and the only way to figure that out is to put the design in front of real users in the form of a usability test.

Don’t confuse usability testing with focus groups. The goal of a focus group is to see how people feel about an idea or a product. The goal of usability testing is to see how people use your actual product to accomplish specific tasks. While there are companies that can run official usability studies for you, these tend to be expensive and time consuming, and most startups can get by with their own simpler process. Here is a rough outline (see Don’t Make Me Think [Krug 2014, chap. 9] for a more thorough description):

  1. Bring a small number of users (3–5) into your office.

  2. Set up recording equipment (e.g., iPhone on a tripod).

  3. Record the users while they perform a list of tasks with your product.

  4. Have your team watch the recording.

  5. Decide what actions to take based on your learnings.

  6. Repeat every 3–4 weeks.

If you’ve never done usability testing before, you’ll soon learn that the first time you observe people outside of your company using your product is an eye-opening experience. It takes just a few hours per month, and in return, you will regularly learn things about your product design that you would never have found through any other means. The most important thing to remember is if you’re in the room with the users as a facilitator, you are there to observe and not interfere. You can encourage the users and answer logistical questions, but you cannot help them use the product, especially if they make a mistake. The user might get frustrated, but the whole point of usability testing is to find these mistakes and frustrations so you can fix them.

In addition to usability testing, there are a few other tools you can use to improve your designs. One option is to build a mechanism directly into your product that makes it easy to send you feedback, such as a feedback form on a web page. Only a small percentage of users will take the time to send feedback, but it’s usually valuable content when they do. A second option is to periodically conduct usability surveys. It’s a bit like sending a feedback form directly to each user’s inbox. There is an art to building usability surveys correctly, so it’s a good idea to use a dedicated usability product that will take care of the details for you.

2See its Snopes page.

3See for a complete guide.

4See for lots of great examples.

Visual Design

Let’s now turn our attention to the visual aspects of design. People have been doing visual design for thousands of years, so it’s a deep field. This section will only cover a "Hello, World" tutorial of visual design. When you’re getting started with a new programming language, your first goal is always to learn just enough that you can create a program in that language that prints "Hello, World" to the screen, which helps you build confidence by getting something simple working very quickly before you dive deeper and start learning how to build more complicated programs. Similarly, in this tutorial, my goal is to introduce you to the basic design skills you need to get something simple working so you can build your confidence before diving deeper and learning how to create more complicated designs.

The basic visual design skills and techniques are:

  • Copywriting

  • Design reuse

  • Layout

  • Typography

  • Contrast and repetition

  • Colors

In this tutorial, I’ll primarily focus on two examples: fixing the design of a résumé, which is a design task that almost everyone is familiar with, and designing a website from scratch (specifically, this book’s companion site), which follows a process that is a bit more like what is required for a typical startup. Pay attention not to the specific design decisions I make for the résumé and the website, as they won’t apply everywhere, but to the thought process behind those decisions.


Copywriting

Although many people think of colors, borders, pictures, and fancy animations as the primary tools of design, the real core of almost all software design is actually text. In fact, you could remove the colors, borders, pictures, and just about everything else from most applications, and as long as you left the text, it would probably still be minimally usable. This is not to say that the other elements don’t matter, but most of the information that a user needs in a software product is in the titles, headers, body paragraphs, menus, and links, so even when doing visual design, your first priority should always be copywriting.

Great interfaces are written. If you think every pixel, every icon, every typeface matters, then you also need to believe every letter matters.

[Fried, Hansson, and Linderman 2006, 101], Jason Fried, David Heinemeier Hansson, and Matthew Linderman, Getting Real

Take the time to think through what you’re going to say to the user and how you’re going to say it (see also Emotional design). A good title and headline—the elevator pitch—are especially important, as they are the first thing the user sees when they use your application, or when they see your application in search results, or when you’re pitching an idea to an investor. Have you noticed that when you’re flipping through a magazine, newspaper, or scientific journal, you only read some of the articles? Have you ever stopped to consider why you read those articles and not the other ones? Your headline must resonate with the personas you’re targeting, telling them not only what you do ("Our software can do X"), but also why the user should care ("Our software can do X so you can succeed at Y"). Knowing how to craft a clear message that explains your why—your mission—is one of the keys to success in all aspects of business (see not available and not available).

It’s usually a mistake to leave copywriting to the end, or even worse, to create a design that only contains placeholder text such as the standard Latin filler text "lorem ipsum." Doing so reduces the most important part of the design, the copywriting, to just the shape of the text, so you don’t see the variations that come with real-world data and you don’t focus on writing a message that resonates with your audience [Fried, Hansson, and Linderman 2006, 121]. The first thing I did when building this book’s companion site was to write down the information Mike, Monica, and Mahesh (see Personas) would want to see, as shown in Figure 9. I started with the outline of the basic sections—information about the book, the author, a way to buy the book, latest news, and startup resources—and then filled in the details. I ended up with a large amount of clean, semantic HTML. It’s not particularly attractive, but remember, design is iterative, and this is just the very first draft.

The first draft of the résumé, as shown in Figure 10, is loosely based on a style I’ve seen in hundreds of résumés over the years. It’s also a bit ugly, but the copywriting is in place so it’s a good starting point.

Figure 9. Copywriting for this book’s companion site

Design reuse

Good artists copy; great artists steal.

[Sen 2012], Steve Jobs

If you’re new to design, or almost any discipline, the best way to get started is to copy others. Don’t reinvent the wheel, especially if you aren’t an expert on wheels. In fact, even if you are an expert on wheels (that is, an experienced designer), you should still reuse existing designs as much as possible (the same logic applies to reusing code, as discussed in not available). When you copy others, you save time, you learn, and you get access to high-quality, battle-tested work. Copy and paste might seem like an unsatisfying way to learn design, but as we discussed in the last chapter, copy, transform, and combine are the basis of all creative work (see not available).

Figure 10. A résumé with many design problems

I start every project by browsing existing designs and seeing what I can reuse or adapt to my own needs. For example, there are thousands of templates for web, mobile, and email that you can use instead of coming up with a design from scratch. One of my favorites is Bootstrap, which is not just a template but an open source, responsive HTML/CSS/JavaScript framework that comes with a default set of styles, behaviors, plug-ins, and reusable components.

If you don’t want to jump into code right away, you can use a wireframing or prototyping tool, such as Balsamiq, UXPin, or Justinmind, that lets you put together a design by dragging and dropping from a library of UI elements. There are also hundreds of websites where you can find stock photos, graphics, and fonts, including free options (e.g., Wikimedia Commons, Google Fonts) and paid options (e.g., iStock, Adobe Typekit). Finally, you can also leverage the design community through websites such as Dribbble (a community where designers can share and discuss their work) and DesignCrowd (an online marketplace where you can quickly hire a freelancer to design a logo or a website). See this book’s companion site for the full list of design resources.

I loosely based the final design of this book’s companion site on a free Bootstrap template called Agency and the final design of the résumé on a template I found on Hloom.5 However, to help you train your designer’s eye, I won’t use these templates right away but instead will build up to them step by step so you can learn to recognize the different aspects of visual design, starting with layout.


Layout

A good layout arranges the elements on the screen so that you can infer a lot of information based on their positions relative to one another. One aspect of layout is proximity. The proximity between elements indicates whether they are logically related. Items that are logically connected should be closer together; items that are not connected should be further apart [Williams 2014, chap. 2]. Take a look at Figure 11, which shows the original résumé on the left and the exact same résumé, but with slightly better use of proximity, on the right.

Figure 11. The original résumé on the left, and the same résumé with better use of proximity on the right

I’ve put two new lines between the different sections (summary, experience, education), but only half a new line between a section header and the contents of that section (e.g., between "Summary" and "Programmer, writer, speaker, traveler"). I’ve pulled all the information for a single job closer together, but put a new line between different jobs so it’s clear where one starts and another one ends. I did a similar fix for the design, as shown in Figure 12.

In the design on the right, it’s clearer that "Buy Now," "Latest News," and "Startup Resources" are separate sections because I increased the spacing between them. You can also tell that "Webcast: A Guide to Hiring for your Startup" and the two lines below it are all part of one logical unit because I decreased the spacing between them.

Figure 12. The original design on the left, and the same design, but with better use of proximity on the right

Try to balance the close proximity of related elements with lots of whitespace between unrelated elements. The human mind is limited in how much information it can process at a time, so a key aspect of readability is putting lots of whitespace between elements so that you can focus on just one thing at a time. And I do mean lots of whitespace. Most beginners try to cram everything tightly together, so a good rule of thumb is "double your whitespace": put space between your lines, put space between your elements, and put space between your groups of elements [Kennedy 2014]. Medium, a blogging platform known for its beautiful design, is an inspiring example of just how much you can do with whitespace and typography, as shown in Figure 13.

Another critical aspect of layout is alignment. Alignment allows you to communicate that there is a relationship between elements, not by moving them closer together or further apart (as with proximity) but by positioning them along common lines. Here is the golden rule of alignment:

Nothing should be placed on the page arbitrarily. Every element should have some visual connection with another element on the page.

[Williams 2014, 13], Robin Williams, The Non-Designer's Design Book

Figure 13. A screenshot of Medium showing off their use of whitespace

Notice how the résumé in Figure 11 has many different and seemingly arbitrary alignments: the section headings are center-aligned, the job titles are left-aligned, the company name is center-aligned (but poorly, just using spaces and tabs), the dates are right-aligned (again, poorly), and the job descriptions are center-aligned. Figure 14 shows the same résumé, but with a single, stronger alignment.

Figure 14. The same résumé, but with a stronger use of alignment

Notice how this layout is easier to read, as everything is positioned along a strong vertical line between the section headers and the section contents. Of course, there isn’t actually a line there, but your mind inserts one, as shown in Figure 15.

Figure 15. The mind inserts an imaginary line that helps you make sense of the layout

These sorts of lines show up everywhere in design, so teach yourself to consciously notice them. For example, Figure 16 shows the design with a better use of alignment. Where is the line in this design?

Figure 16. The design with better use of alignment

As a rule of thumb, try to choose one strong line and align everything to it. In other words, don’t align some things to the left, some things to the right, and some things to the center. Also, use center alignment sparingly, as it doesn’t create an obvious strong line for the mind, and results in a design that looks more amateurish. This is just a rule of thumb, so you can certainly break it from time to time, but you should do so consciously.

Typography

Typography is the art and science of arranging text so that it is readable and beautiful. In this section, we’ll look at just a few of the most important aspects of typography: measure, leading, typeface, and style.

Measure is the length of each line. If lines are too short, the reader is interrupted too frequently by having to jump to the next line. If lines are too long, the reader will get impatient waiting to reach the end of a line. For example, which version is easier to read in Figure 17?

Figure 17. A comparison of measures: 140 characters (top left), 35 characters (top right), 70 characters (bottom left), and 70 characters with justified alignment (bottom right)

Most people find the bottom images easier to read than the top ones, and the bottom-right image the best looking overall, for two reasons. First, the justified alignment in the bottom-right image creates strong lines on the sides of each paragraph, which is helpful when reading large amounts of text (this is why this style is used in most books and newspapers). Second, the bottom images use the proper amount of measure, which is around 45–90 characters per line. As a rule of thumb, you want a measure that’s just long enough to fit all the letters of the alphabet laid out back to back, 2–3 times [Butterick 2015, sect. line length]:

abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklm
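
To make the rule concrete: 2 to 3 alphabets of 26 letters works out to roughly 52–78 characters, comfortably inside the 45–90 character guideline. A short illustrative sketch (the helper name is my own):

```python
# The measure rule of thumb above, as a quick check: a comfortable line
# holds the 26-letter alphabet back to back about 2-3 times.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def measure_ok(line: str) -> bool:
    """Return True if the line's length falls in the recommended range."""
    return 2 * len(ALPHABET) <= len(line) <= 3 * len(ALPHABET)

print(measure_ok("x" * 70))   # True: 70 characters is in the sweet spot
print(measure_ok("x" * 140))  # False: twice as long as it should be
```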

Leading is the amount of vertical space between lines. As with measure, if you have too little or too much leading, the text becomes hard to read. The sweet spot for leading tends to be 120%–145% of the font size [Butterick 2015, sect. line spacing], as shown in the bottom image of Figure 18.

Figure 18. A comparison of leading for a 16px font size: 13px line-height (top), 50px line-height (middle), and 24px line-height (bottom)
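
Butterick’s leading rule is simple arithmetic. A hypothetical helper that computes the recommended line-height range for a given font size:

```python
# Leading rule of thumb: line spacing should be 120%-145% of the font size.
def leading_range(font_size_px: float) -> tuple:
    """Return the (minimum, maximum) recommended line-height in pixels."""
    return (round(font_size_px * 1.20, 1), round(font_size_px * 1.45, 1))

print(leading_range(16))  # (19.2, 23.2)
```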

The typeface is the design of the letters. Every operating system comes with a number of standard, built-in typefaces, such as Arial, Georgia, Times New Roman, and Verdana. Many of these are not particularly good looking, and even those that are tend to be overused and therefore look bland in most designs. One of the simplest things you can do to dramatically improve your designs is to stop using system typefaces. You can get high-quality alternative typefaces from free sites like Google Fonts and paid sites like Adobe Typekit. But how do you know which of the thousands of typefaces you should use?

At a high level, all typefaces can be grouped into five classifications: serif, sans serif, script, decorative, and monospace. The typefaces within a classification vary widely, and some typefaces don’t fit neatly within any classification, but here are some rules of thumb that will help you get started.

Figure 19. Serif typefaces

Serif typefaces have small lines called serifs at the end of a stroke on each letter or symbol. For example, in Figure 19, notice the little lines that jut out to each side from the bottom of the "r" in the word "Serif," almost like the letter is on a pedestal. The stroke used in serif typefaces also tends to vary in thickness at different parts of the letter. For example, in Figure 19, the letter "S" in "Serif" is thinner at the top and bottom of the S than in the middle. The serifs and variation in thickness make each letter look more distinct, which helps reading speed, especially for large amounts of text. Therefore, serif typefaces are great for large amounts of body text and print material (most books use serif typefaces for the main body of text). Serif typefaces are also the oldest style, dating back not just to the days of the printing press but all the way back to letters carved into stone by the ancient Romans, so you can use them in headers when you want a "classical" look.

Figure 20. Sans serif typefaces

"Sans" is a French word that means "without," so "sans serif" typefaces are those without serifs. Notice how the letter "r" in "serif" in Figure 20 does not have any lines jutting out at the bottom. Sans serif typefaces also tend to have a more uniform stroke thickness throughout the letter. For example, in Figure 20, the "S" in "Sans" is the same thickness everywhere. Because sans serif typefaces have a simple and uniform appearance, they aren’t as good as serif typefaces for large amounts of medium-sized body text, but they are typically better at extreme sizes such as large headers and small helper text. In fact, the tiny details of a serif typeface might look blurry if the letters are too small or you’re viewing them on a low-resolution screen, so sans serif typefaces are very popular in digital media.

Figure 21. Decorative typefaces

As the name implies, decorative typefaces are used as decoration or accents. These typefaces are distinct, fun, and highly varied, which is great when you need some text to really stand out, as you can see in Figure 21. However, they tend to be hard to read, so you will typically want to limit their use to a few words in a title or subtitle.

Figure 22. Script typefaces

Script typefaces look like handwriting, cursive, or calligraphy, as shown in Figure 22. Similar to decorative typefaces, they are a great way to add an accent to a page, but don’t use them for more than a few words or letters because they are hard to read.

Figure 23. Monospace typefaces

Each letter in a monospace typeface, as shown in Figure 23, takes up the same amount of space, which is typically only useful when displaying snippets of code (that’s why all terminals, text editors, and IDEs use monospace typefaces) and text that needs to look like it came from a typewriter.

There are many styles that you can apply to a typeface to change how it looks, including text size, text thickness (i.e., bold or thin), text obliqueness (i.e., italics), letter spacing, underline, and capitalization. A particular combination of typeface and style is a font. Each font in your design should serve a specific purpose. The résumé violates this rule, as it uses Times New Roman 12pt for all of the text. The only exception is a few underlines for emphasizing the section headings, but the underline is not a good choice. Virtually no book, magazine, or newspaper uses underlines because they make the text harder to read. The one exception is the web, where an underline indicates a hyperlink and therefore should not be used for anything else, to avoid confusion. Let’s remove the underline and use several different font styles to improve the look of the résumé, as shown in Figure 24.

The structure of the résumé is clearer now: all the job and education titles are bold, all the company and school names are bold and italic, all the dates are italic, and all the section headings are in uppercase letters. It’s an improvement but it still looks bland because the entire résumé is using just a single typeface, Times New Roman.

It can take a fair bit of experimentation and experience to find typefaces that look good together. If you’re new to the whole font business, then you might want to let the professionals handle it. If you Google "font pairings," you will find dozens of websites that give you wonderful, pre-vetted recommendations. For example, the Google Web Fonts Typographic Project shows dozens of ways to pair fonts available in Google Fonts, Just My Type does the same thing for Adobe Typekit pairings, and Fonts in Use lets you browse a gallery of beautiful typography from the real world and filter it by industry, format, and typeface. I found lots of great options in Fonts in Use for the résumé, but I chose a conservative set that’s likely to work on other people’s computers, consisting of Helvetica Neue for headings and titles and Garamond for the body text, as shown in Figure 25.

Figure 24. The résumé with several different font styles

Figure 25. The résumé with multiple typefaces

For the site, I’m using the fonts from the Agency template, which are Montserrat, Droid Serif, and Roboto Slab (all available for free in Google Fonts), as shown in Figure 26.

Figure 26. The site with the Montserrat, Droid Serif, and Roboto Slab typefaces

These new fonts make the designs look a little cleaner, but overall, they are still fairly bland. We need some contrast to spice things up.

Contrast and repetition

Whereas proximity and alignment tell you that two elements are related, contrast is used to make it clear that two parts of the design are different. For example, when you mix multiple fonts, the most important thing to understand is that you need to have significant contrast between them. In the résumé, the job titles are in bold Helvetica Neue so you can easily distinguish them from the job descriptions, which are in regular Garamond. Notice how this message is reinforced through repetition: all the job titles use one font, all the section headings use another font, and all the body text uses a third font. Once you’ve defined a purpose for some element in your design—whether that’s a font choice or a logo in the corner, or the way elements are aligned—you should repeat it everywhere. This repetition becomes your brand (see not available for more information), and if it’s distinct enough, the reader will be able to recognize your style anywhere (see Figure 27 for an example).

Figure 27. Repeating the same style to create your brand (design from GraphicBurger).

You can create contrast in your fonts through a combination of varying the style (i.e., text size, thickness, capitalization) and the typeface classification. If you use two fonts that are too similar, such as the same typeface at 12pt and 14pt, or two different fonts that use serif typefaces, then they will be in conflict and the design won’t look right. Therefore, each time you introduce a new font, it must be for a specific purpose, and to make that clear, you need to communicate it loudly by having a lot of contrast. For example, in Figure 28, I added a lot more contrast to the résumé title by using a big, thin, uppercase font, with lots of letter spacing.

Figure 28. The résumé with more contrast for the title font

Another key role of contrast is to focus the user’s attention on an important part of the design. While someone reading a book might read every word, users of most products do not:

What [users] actually do most of the time (if we’re lucky) is glance at each new page, scan some of the text, and click on the first link that catches their interest or vaguely resembles the thing they’re looking for. There are almost always large parts of the page that they don’t even look at. We’re thinking "great literature" (or at least "product brochure"), while the user’s reality is much closer to "billboard going by at 60 miles an hour."

[Krug 2014, 21], Steve Krug, Don't Make Me Think

Therefore, not only should every font in your design serve a specific purpose, but every screen should have one central thing that it’s trying to get the user to do. This is known as the call to action (CTA). For example, the main thing I want people to do on the site is to learn about the book, so I can add a big Learn More button as a CTA, as shown in Figure 29.

Figure 29. The design with a Learn More button as a CTA

It’s a start, but I can make the button jump out more by using colors to increase the contrast.

Colors

OKCupid is a great example of using colors and contrast to make a very noticeable CTA, as shown in Figure 30.

Figure 30. OKCupid call to action

As soon as you go to the website, it’s clear what the site is (thanks to the clear copywriting) and what you’re supposed to do (thanks to the clear CTA). The central placement, the large fonts, and the contrasting colors make it so that the CTA practically jumps out at you. This is the key to using contrast effectively: if two items on a page are not the same, make them very different [Williams 2014, 69]. Or as William Zinsser wrote, "Don’t be kind of bold. Be bold." [Zinsser 2006, 70].

How do you know which colors to use to achieve a good contrast? When you’re a kid, playing with colored paints and crayons is a blast. As an adult, choosing colors that work well in a design is slightly less fun. Color theory is even more complicated than typography. To do it well, you have to take into account physiology (e.g., putting red text on a blue background can create an effect known as chromostereopsis, which causes the text to be fuzzy, making reading difficult and even painful [Chromostereopsis 2014]), biology (e.g., about 8% of men have color deficiency while 2%–3% of women have extra color sensors and see more colors than average [Roth 2006]), psychology (e.g., each color comes with a number of associations and can have a significant effect on mood), technology (e.g., digital displays use the RGB color model, while most print devices use CMYK), art sensibility (e.g., some colors go together in a harmonious fashion, others do not), and the physics and mechanics of colors (e.g., the color wheel, primary, secondary, and tertiary colors, color mixing, hue, saturation, lightness, tints, and shades). It’s a lot to learn.
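
The chapter doesn’t give a numeric definition of "enough" contrast, but the WCAG 2 accessibility guidelines define a standard contrast ratio based on relative luminance: identical colors score 1:1, black on white scores 21:1, and 4.5:1 is the usual minimum for body text. A sketch of that standard formula:

```python
# WCAG 2 contrast ratio between two sRGB colors, given as (r, g, b)
# tuples of 0-255 integers.
def srgb_to_linear(c: int) -> float:
    """Undo sRGB gamma encoding for one channel."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance: a weighted sum of the linearized channels."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Ratio of the lighter to the darker luminance, from 1.0 to 21.0."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```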

If you’re just getting started, I can offer you two tips that will save you time. The first tip is to do all of your design work in grayscale first and add color last. That is, figure out the copywriting, layout, and typography, and make the design work without any color. At the very end, when everything else is in place, you can add in some color, and even then, only with purpose [Kennedy 2014]. Think of adding color like painting a house: you should do it after you’ve put up the walls, windows, and doors, and not before. It’s easier to experiment with different color schemes when the rest of the design is in place, as it allows you to deliberately choose colors as an accent, set a mood, or bring out a particular theme. For example, the résumé we’ve been working on has been black and white the entire time. It’s now easy to toss in a single color as a highlight, as shown in Figure 31.

Notice how this color scheme only makes sense once the layout (two columns) and font choices (a thin Helvetica Neue with a large font size and lots of letter spacing) are in place. Had I tried to add color to the original design, I would’ve probably done something different and had to change it anyway once the layout and typography were in place.

With the site, I did the entire design in grayscale and then let the pictures in the design influence the colors. For example, the cover image I got for the book had a gray reflection and green text, so I used those two colors throughout the design, as shown in Figure 32.

Figure 31. The résumé with a splash of color as a highlight

Figure 32. Letting the gray and green colors in the cover image drive the colors in the rest of the design

The second tip is to use palettes put together by professionals instead of trying to come up with your own. You can, of course, copy the color schemes used on your favorite websites, but there are also tools dedicated specifically to helping you work with colors. For example, Adobe Color CC and Paletton can generate a color scheme for you using color theory (e.g., monochromatic, adjacent colors, triads). Adobe Color CC, COLOURlovers, and Dribbble’s color search also allow you to browse through premade color schemes.
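
Under the hood, the color-theory schemes these tools generate (adjacent colors, triads, and so on) boil down to rotating a base color’s hue around the color wheel. Here is a sketch using Python’s built-in colorsys module; the base color is an arbitrary example:

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate an (r, g, b) color (0-255 ints) around the color wheel."""
    h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
    h = (h + degrees / 360) % 1.0
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

base = (200, 60, 60)  # a muted red, chosen arbitrarily
adjacent = [rotate_hue(base, d) for d in (-30, 30)]   # adjacent scheme
triad = [rotate_hue(base, d) for d in (120, 240)]     # triad scheme
```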


A quick review of visual design

Figures 33 and 34 show the progression of the résumé and site designs, respectively. Take a minute to look through these images and consciously name what changed between them.

Figure 33. The progression of the résumé design

Figure 34. The progression of the site design

Hopefully, you were able to spot the following aspects of visual design:

  • Top left: copywriting

  • Top right: layout (alignment and proximity)

  • Bottom left: typography (measure, leading, typefaces, fonts)

  • Bottom right: contrast and colors

Finally, all the steps are based on templates, font combinations, and color palettes I found online, so at the center of it all is design reuse.

The minimum viable product

The first design challenge you’ll have at your startup is building the initial version of your product. Even if you’ve come up with a great idea and validated it with real customers, resist the temptation to lock yourself in a room for a year to design and build the perfect product. Remember, a product is not just one idea but a constant cycle of new problems, new ideas, and execution. Execution is expensive, so you need to validate each new problem and idea as quickly and cheaply as possible with customers. The best way to do that is to build what’s known as a minimum viable product, or MVP.

MVP is a term that’s often misinterpreted. "Minimum" is often misread as "ship something—anything—as soon as humanly possible," which misleads people into recklessly focusing on time to market above all else. "Viable" is often incorrectly understood as "enough features for the product to function," which misleads people into building many features that are unnecessary and omitting the ones that actually matter. And "product" incorrectly suggests that the MVP must be a product, so people often overlook simpler, cheaper ideas for an MVP.

The term MVP was popularized by Eric Ries’s book The Lean Startup, which has a proper definition: an MVP is "a version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort" [Ries 2011a, 103]. The point of an MVP is learning. The goal is to find the cheapest way to validate a hypothesis with real customers.

The "minimum" in MVP means eliminating anything that does not directly help you validate the current hypothesis. For example, when 37signals first launched their project management tool, Basecamp, their MVP did not include the ability to bill customers. The hypothesis they were validating was that customers would sign up for a web-based project management tool with a clean user interface. A billing system does not help validate that hypothesis so it can be eliminated from the MVP and added later (if customers actually start signing up). On the other hand, they invested time in coming up with a clean and simple design for the MVP, as that was an essential part of the hypothesis they were testing.

The "viable" in MVP means that the MVP has everything it needs for customers to accept it. The MVP might have bugs, it might be missing features, it might be ugly, and it might not even resemble the actual product you ultimately want to build, but it has to solve a problem the customer cares about. See Figure 35 for a great demonstration of the difference between a non-viable and viable MVP.

Figure 35. How to build an MVP that’s viable (image by Henrik Kniberg [Kniberg 2013])

And finally, the "product" in MVP really means "experiment." It could be a working prototype of your product or something simpler, such as a landing page with a demonstration video, just as long as it can validate your hypothesis (see Types of MVPs for more information).

Building an MVP is not a one-time activity. For one thing, you will most likely have to build multiple MVPs before you find one that works. But even more importantly, building MVPs is more of a way of thinking than just an activity you do early in the life cycle of a product. Think of it like placing little bets instead of betting the house every time you play a game of cards. Whether you’re trying out ideas for a new product that no one has ever used or adding new features to an existing product that has lots of traction, you should use the MVP mindset, which can be summarized as:

  1. Identify your riskiest, most central assumption.

  2. Phrase the assumption as a testable hypothesis.

  3. Build the smallest experiment (an MVP) that tests your hypothesis.

  4. Analyze the results.

  5. Repeat step 1 with your new findings.

No matter how confident you are in an idea, always try to find the smallest and cheapest way to test it, and always try to keep projects small and incremental. Research by the Standish Group on over 50,000 IT projects found that while three out of four small projects (less than $1 million) are completed successfully, only one in 10 large projects (greater than $10 million) are completed on time and on budget, and more than one out of three large projects fail completely [The Standish Group 2013, 4].

The Standish Group has categorically stated with much conviction—backed by intense research—that the secret to project success is to strongly recommend and enforce limits on size and complexity. These two factors trump all other factors.

[The Standish Group 2013, 4], The CHAOS Manifesto 2013

Let’s now take a look at the different types of MVPs you can build.

Types of MVPs

An MVP doesn’t have to be an actual product. It just has to be something that can validate a hypothesis when a customer uses it. Here are the most common types of MVPs:

Landing page

An easy, cheap, and surprisingly effective MVP is a simple web page that describes your product and asks the user for some sort of commitment if they are interested, such as providing their email address to get more info or placing a pre-order. The general idea is to describe the most ideal vision of your product and see how much traction you can get, even if the product does not yet exist. If the most idealized description of your idea can’t convince a few people to sign up for your mailing list, you may want to rethink things. For example, Buffer, a social media management app, started as a landing page that showed a description of the product idea, some pricing details, and a way to sign up for a mailing list to get more info, as shown in Figure 36. They got enough sign-ups, and just as importantly, enough clicks on the pricing options, to be confident it was worth building the actual product.

Figure 36. The Buffer MVP [Gascoigne 2011]

Because the text and images on a landing page are quicker to update than a working product, a landing page is one of the most efficient ways to home in on the right design, message, and market. You can experiment with different wording, target different customer segments, try out different pricing strategies, and iterate on each one until you find a sweet spot (see not available for how to measure the performance of each iteration). You can host your own landing pages on AWS or GitHub Pages, or use one of the many tools custom-built for landing pages, such as LaunchRock, Optimizely, Lander, or LeadPages.
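
If you’d rather not use a hosted tool, the backend of a landing-page MVP can be only a few lines. This hypothetical sketch, using just Python’s standard library, records submitted email addresses to a text file; the form field name, file name, and port are all assumptions:

```python
# Minimal landing-page MVP backend sketch (standard library only).
# Assumes a static page POSTs a form field named "email" to this server.
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

SIGNUPS = "signups.txt"  # hypothetical file for collected addresses

def record_signup(form_body: str, path: str = SIGNUPS) -> str:
    """Parse a URL-encoded form body and append the email to a file."""
    fields = urllib.parse.parse_qs(form_body)
    email = fields.get("email", [""])[0].strip()
    if "@" in email:  # crude validity check; enough for an MVP
        with open(path, "a") as f:
            f.write(email + "\n")
    return email

class SignupHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        record_signup(self.rfile.read(length).decode())
        self.send_response(303)  # redirect to a thank-you page
        self.send_header("Location", "/thanks.html")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), SignupHandler).serve_forever()
```

Counting the lines in `signups.txt` then gives you the validation number you care about.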

Explainer video

Before Drew Houston started building Dropbox, he wanted to be sure he wasn’t spending years building a product nobody wanted. Even building a simple prototype that users could try on their own computers would have taken a long time, because it would have required building a reliable, high-performance online service to store all the data. Instead, Houston built a much simpler MVP: a landing page with a sign-up form, plus a four-minute explainer video, as shown in Figure 37.

Figure 37. The Dropbox explainer video

The video was an effective way to show the product in action rather than just describe it, and it included some Easter eggs (e.g., references to XKCD and Office Space) for the tech-savvy viewer. Houston put the video on Hacker News and Digg, and within 24 hours, the landing page got hundreds of thousands of views and 70,000 sign-ups. This gave Houston the confidence that it was worth building the actual product [Ries 2011b]. Tools such as PowToon, GoAnimate, and Camtasia allow you to make an explainer video for free or on a small budget.

Crowdfunding

Crowdfunding sites such as Kickstarter or Indiegogo are a bit like a landing page with an explainer video, except instead of email addresses, interested customers give you money to support your project. In other words, this is a way to get customers to buy your product before you build it, which is the best validation you can get. One of the most successful Kickstarter campaigns of all time was for the Pebble watch, which raised $10 million from 68,000 backers with little more than a prototype [Gorman 2013], as shown in Figure 38.

Figure 38. Pebble raised $10 million on Kickstarter before building the product
Wizard of Oz

A Wizard of Oz MVP is one that looks like a real product to a user but behind the scenes, the founders are doing everything manually. For example, when Nick Swinmurn wanted to test his idea for Zappos, a place to buy shoes online, he went around to local shoe stores, took pictures of the shoes they had in stock, and put the pictures up on a website that looked like a real online shoe store, as shown in Figure 39.

Figure 39. A screenshot of the Zappos site from 1999 via the Internet Archive

When a user placed an order, Swinmurn went back to the local shoe store, bought the shoe, and shipped it to the customer. This allowed Swinmurn to validate his hypothesis that people would be willing to buy shoes on the Internet without having to invest in a huge inventory of shoes, an automated ordering system, a factory to stock and deliver the shoes, and so on [Hsieh 2013, 58]. Pay no attention to the man behind the curtain.

Piecemeal MVP

A piecemeal MVP is similar to the Wizard of Oz MVP, except some of the manual pieces are automated as cheaply as possible with existing, off-the-shelf tools. For example, to create the Groupon MVP, Andrew Mason put a custom skin on a WordPress blog (see Figure 40), used FileMaker to generate coupon PDFs, and sent them out using Apple Mail [Mason 2010].

Figure 40. A screenshot of Groupon from 2009 via the Internet Archive

Check out for a list of tools you can use to build an MVP. Whatever type of MVP you end up building, the key is to ensure that you build something that is minimal but still viable. And the best way to do that is to focus on your differentiators.

Focus on the differentiators

In a talk called "You and Your Research" (which has often been nicknamed "You and Your Career," as it has advice that applies to almost any career and not just research), Richard Hamming, a notable mathematician at Bell Labs, describes how he started sitting with the chemistry researchers at lunch:

I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn’t welcomed after that; I had to find somebody else to eat with!

[Hamming 1995], Richard Hamming, "You and Your Research"

Richard Hamming did important work at Bell Labs because he intentionally sought out important problems, and not just comfortable ones. Although his method of questioning can make people uncomfortable, it’s something we should all apply to our lives. What’s important in your field? What are you working on? Why aren’t those one and the same?

The same reasoning applies to building an MVP. What is important in your product? What are you actually building in the MVP? Why aren’t those one and the same? The most important aspect of a product is its differentiators: those features that separate it from all the other alternatives. People often refer to differentiators as the "competitive advantage," but that phrase makes it seem like any advantage, no matter how marginal, is enough. It’s not. Your differentiators need to be much, much better than the competition. You’re looking not for a 10% improvement, but a 10x improvement. Anything less, and most customers simply won’t think it’s worth the effort to switch.

Therefore, it’s important to ask yourself, "What two or three things does my product do exceptionally well?" Once you identify this small number of core features, build your MVP around them and ignore just about everything else. For example, when Google first launched Gmail, its differentiators were 1 GB of storage space (in an era when most other email providers only gave you 4 MB) and a zippy user interface (it had conversation view and powerful search features, and it used Ajax to show new emails instantly instead of having to refresh the page). Almost all other features, such as a "rich text" composer and address book, were minimal or absent [Buchheit 2010], but it didn’t matter, as the differentiators were so compelling that they made all other email services look primitive.

Another great example was the original iPhone. Apple is known for building complete, polished, end-to-end solutions, but in many ways, the original iPhone was an MVP. It didn’t have an App Store, GPS, 3G, a front-facing camera, a flash for the rear-facing camera, games, instant messaging, copy and paste, multi-tasking, wireless sync, Exchange email, MMS, stereo Bluetooth, voice dialing, audio recording, or video recording. Despite all that, the iPhone was still years ahead of any other smartphone because Apple relentlessly focused on doing a few things exceptionally well: the multi-touch user interface, the hardware design, and the music and web surfing experience were at least ten times better than any other phone. And customers loved it.

Getting customers to love your product, not just like it, is a huge advantage. It’s much easier to take a product that a small number of users love and get a lot more users to love it than it is to take a product that a large number of users like and get them to love it [Graham 2008]. To take users from "like" to "love," you need to sweep them off their feet. You need to "wow" them. Think of the last time something made you say "wow." It probably took somebody going above and beyond to delight you. It probably took something exceptional, and the simple fact is that doing exceptional things takes a lot of time. So if you want users to love you, instead of doing a merely competent job at many things, choose a few of them and knock them out of the park.

How do you know which features to focus on? One way to figure it out is to write a blog post announcing the product launch before you build anything. What are the two or three key items you’re going to highlight in that blog post? What features will you show off in the screenshots? What will be the title of the blog post? Good blog posts are short, so this exercise will help you tease out what features really matter to make your product look enticing. Those are the must-haves for the MVP. Everything else is optional. In fact, everything else is not only optional, but oftentimes, detrimental. Every extra feature has a significant cost (see Simplicity), so unless it’s absolutely essential for delighting customers or validating your hypothesis, it doesn’t belong in the MVP.

Once you’ve figured out your differentiators and built an MVP around them, you can use it to validate your hypotheses. Perhaps the most important validation of all is getting customers to buy the MVP.

Buy the MVP

One of your goals with the MVP, even at a very early stage, is to get customers to buy your solution. Note the emphasis on the word "buy." Many people will tell you they "like" an idea, and they might even mean it, but there is a huge difference between liking something and being committed to buying it. It costs more than just money to buy a new product—it also costs time [Traynor 2014]. It takes time to convince your family (in the case of a consumer product) or your co-workers (in the case of an enterprise product) that the product is worth it, it takes time to install or deploy it, it takes time to train yourself and others to use it, and it takes time to maintain and update it in the future. The time aspect applies even if your product is free for some of your users (e.g., an ad-supported website or a freemium service), so no matter what pricing strategy you’re considering, your goal is to get a firm commitment to buy the product.

Every type of MVP you saw earlier, even the most minimal ones, provides an opportunity to buy. This is obviously the point of a crowdfunded MVP, but you can also have a pre-order form on a landing page MVP, and you can charge money for a Wizard of Oz MVP, even if you have to accept payment in cash. You can tweak the price until you find a sweet spot, but don’t give it away for free. In fact, tweaking the price is a great way to see just how serious a customer is about using your product:

I ask [my customers], "If the product were free, how many would you actually deploy or use?" The goal is to take pricing away as an issue and see whether the product itself gets customers excited. If it does, I follow up with my next question: "Ok, it’s not free. In fact, imagine I charged you $1 million. Would you buy it?" While this may sound like a facetious dialog, I use it all the time. Why? Because more than half the time customers will say something like, "Steve, you’re out of your mind. This product isn’t worth more than $250,000." I’ve just gotten customers to tell me how much they are willing to pay. Wow.

[Blank 2013, 52], Steve Blank, The Four Steps to the Epiphany

What kind of customer would commit to buying a product that doesn’t exist? Or, even if you have a working prototype, what kind of customer is willing to do business with a brand-new startup despite all the bugs, performance problems, missing features, and the fact that you might be out of business in a few months? In the book Diffusion of Innovations, Everett Rogers groups customers into five categories [Rogers 2003, chap. 7]:

  1. Innovators are willing to take risks on new technologies because technology itself is a central interest in their life, regardless of its function, and they are always on the lookout for the latest innovations.

  2. Early adopters are also willing to take risks on new technology, not because of an interest in the technology but because they find it easy to visualize the benefits the technology would have in their lives.

  3. Early majority customers are driven by a need to solve a specific problem. They are able to envision how a new technology could be a solution, but they also know many new innovations end up failing, so they are willing to wait and see if it works out for others before buying it for themselves.

  4. Late majority customers also have a specific problem to solve, but they are not comfortable using a new technology to solve it. They will wait until a technology is mature, has established itself as a standard, and has a support network around it before buying.

  5. Laggards avoid new technologies as much as they can. They are the last to adopt a new innovation and usually only do so because they have no other choice.

The number of customers in each category roughly follows a bell curve, as shown in Figure 41.

Figure 41. Diffusion of innovations [Diffusion of Ideas 2012]

To be a successful company, you usually have to sell to the early and late majority, but you won’t be able to get there until you’ve convinced the innovators and early adopters. That is, innovation diffuses from left to right in the bell curve in Figure 41, and you can’t skip to a new category until you’ve been successful with a previous one. And because each category of customers is looking for something different, it is essential to understand what kind of customer you’re targeting or else you’ll end up with the wrong product, marketing strategy, and sales process.
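The bell curve in Figure 41 isn’t arbitrary: Rogers defines the category boundaries by how many standard deviations an adopter’s adoption time falls from the mean adopter. As a rough sketch (assuming, as the model does, that adoption times follow a standard normal distribution), you can recover the familiar category shares directly from the normal CDF:

```python
from math import erf, sqrt

def normal_cdf(x):
    # CDF of the standard normal distribution
    return 0.5 * (1 + erf(x / sqrt(2)))

# Rogers defines each category by standard deviations of adoption time
# relative to the mean adopter (assumption: standard normal distribution).
boundaries = {
    "innovators":     (float("-inf"), -2),
    "early adopters": (-2, -1),
    "early majority": (-1, 0),
    "late majority":  (0, 1),
    "laggards":       (1, float("inf")),
}

for name, (lo, hi) in boundaries.items():
    # Share of the population whose adoption time falls in this band
    share = normal_cdf(hi) - normal_cdf(lo)
    print(f"{name:15s} {share:.1%}")
```

Running this prints roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which matches the proportions usually quoted for Rogers’ five categories.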

What the early adopter is buying […] is some kind of change agent. By being the first to implement this change in their industry, the early adopters expect to get a jump on the competition, whether from lower product costs, faster time to market, more complete customer service, or some other comparable business advantage. They expect a radical discontinuity between the old ways and the new, and they are prepared to champion this cause against entrenched resistance. Being the first, they also are prepared to bear with the inevitable bugs and glitches that accompany any innovation just coming to market.

[Moore and McKenna 2006, 25], Geoffrey Moore and Regis McKenna,
Crossing the Chasm

Your goal in the early days of a startup, when you’re still validating the problem and solution, is to find the right early adopters. These are the kinds of customers who will commit to buying your solution long before it’s ready because they believe in your vision rather than the specific product. Steve Blank calls this group of customers earlyvangelists and provides the following handy list to help identify them:

  • They have a problem or need.

  • They understand they have a problem.

  • They’re actively searching for a solution and have a timetable for finding it.

  • The problem is so painful that they’ve cobbled together an interim solution.

  • They’ve committed, or can quickly acquire, budget dollars to purchase.

[Blank 2013, 35], Steve Blank, The Four Steps to the Epiphany

If you can find a customer who has already hacked together their own interim solution, that may be the best indicator of all because then you know you’ve found a real problem and a strong lead. Every industry has earlyvangelists, though there usually aren’t that many. That said, at this early stage, when you have no product and no customers, getting even a single customer is a huge win. Before you can get to thousands or millions of customers, you need to get 1, then 10, and then 100. To do that, you might have to do things that don’t scale.

Do things that don’t scale

One of the most common pieces of advice Y Combinator gives to startups is to "do things that don’t scale." That is, in the early days of a startup, you might have to do many things manually, such as hiring, recruiting users, and customer service. This will feel inefficient, especially to programmers, who will be tempted to yell "But that won’t scale!" But manual labor is often the only way to get the flywheel started, and only once it’s really going do you need to worry about scaling. For example, the founders of Airbnb went door to door in New York to recruit early users and even helped them photograph their apartments [Graham 2013]. The founders of Homejoy, a startup that helps customers find home cleaning services, initially went around to customers’ homes and did all the cleaning themselves [Cheung 2014]. The founders of Pinterest recruited their first users by going to coffee shops and personally asking strangers to try the product. They would also go to Apple stores and set all the browsers to the Pinterest home page [Altman 2014a]. The employees of Wufoo, a startup that helps users create online forms, used to send handwritten thank-you notes to every single customer:

Perhaps the biggest thing preventing founders from realizing how attentive they could be to their users is that they’ve never experienced such attention themselves. Their standards for customer service have been set by the companies they’ve been customers of, which are mostly big ones. Tim Cook doesn’t send you a hand-written note after you buy a laptop. He can’t. But you can. That’s one advantage of being small: you can provide a level of service no big company can.

Once you realize that existing conventions are not the upper bound on user experience, it’s interesting in a very pleasant way to think about how far you could go to delight your users.

[Graham 2013], Paul Graham, Co-founder of Y Combinator

When you’re a tiny startup and still validating your ideas, you can afford to do things that don’t scale to get your first customers. If the idea works, you can make the process more scalable through automation later. But if it doesn’t work (and most ideas don’t), then you’ve saved an enormous amount of time by not building a bunch of automation for the wrong thing. Also, by getting directly involved in the nitty-gritty details of your business, you become a domain expert, which as you saw earlier, is essential for coming up with great ideas.


A user interface is like a joke. If you have to explain it, it’s not that good.

[LeBlanc 2014], Martin LeBlanc, Founder of Iconfinder

Design is an essential skill because the user interface is the product. The good news is that the design process is iterative: any design can be incrementally improved and any person can incrementally improve their design skills. The best thing you can do is to start reusing existing designs, writing user stories, and designing for personas. With practice, you can learn copywriting, layout, typography, contrast, repetition, and colors. By giving your product a personality, especially a polite one that is responsive, considerate, and forgiving, you can build a design that resonates emotionally. And by frequently running usability tests, you can get direct feedback on how you’re progressing.

However, no matter how good your design is, there is no way to be sure if it will be successful. Therefore, the best strategy is to constantly be running small experiments and adjusting them based on feedback from real users. Do a little market research, talk to potential customers, release a quick MVP, learn from your users’ reactions, and then repeat again and again.

I’m the author of Amazon’s best-selling interview book, but it all started with a little 20-page PDF. Honestly, it wasn’t very good. I’m embarrassed to look at it now. It was good enough to act as an MVP, even though I didn’t think about it that way at the time. It ended up testing the market, establishing that there was a real demand, and from there I could expand on it. It also got me very early feedback on what matters.

There was one other company I started that was much the same thing. It started small, and then by accident, I realized I had a company.

It’s so easy to hear an idea from yourself or someone else and cross it off as being a bad idea for various reasons. I could tell you a million reasons for why my company would fail. And yet, it succeeds despite those reasons (and perhaps because of them, in some cases).

The truth is that it’s so hard to predict what will and won’t work. An MVP allows you to try something out relatively quickly and cheaply. Those results often mean more than your other predictions.

[McDowell 2015], Gayle Laakmann McDowell, Founder and CEO of CareerCup

A lot of people find this hectic, trial-and-error approach unsettling. It’s tempting to look at a successful product and assume that it appeared in the creator’s head exactly as you see it now, fully formed, beautiful, and complete. But that’s like seeing Michael Jordan looking smooth and dominant on the basketball court and assuming he came out of the womb 6′6″ and 216 lbs, with the ability to dunk and an unstoppable fade-away shot.

I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.

[Goldman and Papson 1999, 49], Michael Jordan

Any time you hold a polished product in your hand, remember that what you’re looking at is the final iteration of thousands of trials and errors that included many missteps, pivots, redesigns, and compromises. And the entire way, the company that built it was probably struggling to survive and hoping it could find a working combination before going out of business.

Starting a company is like throwing yourself off the cliff and assembling an airplane on the way down.

[Chang 2013], Reid Hoffman, Co-founder and Chairman of LinkedIn

This is what it means for a startup to be in "search mode." It is a frantic race against time to find a problem worth solving and a solution that’s worth building, and the best way to make that happen is not to hope for a eureka moment but to use an iterative, experimental process.

Continue reading How to design a product at a startup.

Categories: Technology

Four short links: 9 July 2018

O'Reilly Radar - Mon, 2018/07/09 - 04:35

DNA Neural Nets, Ethics of AI, Oblivious Search, Physical Products

  1. Scaling up Molecular Pattern Recognition with DNA-based Winner-Take-All Neural Networks (Nature) -- they use two molecular bio techniques (DNA-strand-displacement and a "seesaw DNA gate motif") to implement a winner-takes-all type of neural network...with a soup of DNA. It recognizes digits from a 10x10 grid of pixels. The network successfully classified test patterns with up to 30 of the 100 bits flipped relative to the digit patterns "remembered" during training, suggesting that molecular circuits can robustly accomplish the sophisticated task of classifying highly complex and noisy information on the basis of similarity to a memory. (via The Next Web)
  2. The Ethics and Governance of Artificial Intelligence (MIT) -- video from three classes is online.
  3. Oblix: An Efficient Oblivious Search Index -- the new word I learned was "oblivious," meaning that the actions of the algorithm over encrypted data do not reveal which (encrypted) documents match the keyword being searched for. Paper a Day makes sense of their work.
  4. Sonos One and Amazon Alexa Teardowns -- using the physical product engineering to relate the smart speaker market positions of Sonos and Amazon. I work with mechanical and electrical engineers, and the invisible degrees of complexity in physical products always amazes me.

Continue reading Four short links: 9 July 2018.

Categories: Technology

Four short links: 6 July 2018

O'Reilly Radar - Fri, 2018/07/06 - 04:50

REST vs. GraphQL, Chinese Sources, Popcorn Robots, and (Human) Learning Research

  1. Should You Migrate from REST to GraphQL? -- a nice precis of the good and bad parts of REST and GraphQL so you can make an informed decision about when to use.
  2. Abacus News: Unboxing China -- interesting website that's a bit like TechMeme but for China, and more consumer focused. See also The ChinAI Newsletter where you can read Jeff Ding's weekly translations of writings on AI policy and strategy from Chinese thinkers—will also include general links to all things at the intersection of China and AI. (via CognitionX)
  3. Popcorn-Driven Robotic Actuators -- Popcorn is a cheap, biodegradable way to actuate a robot (once). Fun silliness.
  4. Self-Regulated Learning: Beliefs, Techniques, and Illusions -- In this review, we summarize recent research on what people do and do not understand about the learning activities and processes that promote comprehension, retention, and transfer. Share with the student or life-long learner in your life.

Continue reading Four short links: 6 July 2018.

Categories: Technology

Data regulations and privacy discussions are still in the early stages

O'Reilly Radar - Thu, 2018/07/05 - 06:05

The O’Reilly Data Show Podcast: Aurélie Pols on GDPR, ethics, and ePrivacy.

In this episode of the Data Show, I spoke with Aurélie Pols of Mind Your Privacy, one of my go-to resources when it comes to data privacy and data ethics. This interview took place at Strata Data London, a couple of days before the EU General Data Protection Regulation (GDPR) took effect. I wanted her perspective on this landmark regulation, as well as her take on trends in data privacy and growing interest in ethics among data professionals.

Continue reading Data regulations and privacy discussions are still in the early stages.

Categories: Technology

Four short links: 5 July 2018

O'Reilly Radar - Thu, 2018/07/05 - 04:50

Programming Language Ideas, Probability in Language, React Tutorial, and Open Plan Pain

  1. Papers on Programming Languages: Ideas from 1970s for Today -- I suspect a vanishingly small number of these are unimplementable in Perl 6.
  2. If You Say Something is Likely, How Likely Do People Think It Is? (HBR) -- more fascinating research into how people translate probabilities into language and back again. There is a serious possibility that you will enjoy this.
  3. React from Zero -- tutorial in the classic "just get something working, then hack on it" style. (via Simon Willison)
  4. The Impact of The "Open" Workspace on Human Collaboration? (Royal Society) -- Contrary to common belief, the volume of face-to-face interaction decreased significantly (approx. 70%) in both cases, with an associated increase in electronic interaction. In short, rather than prompting increasingly vibrant face-to-face collaboration, open architecture appeared to trigger a natural human response to socially withdraw from officemates and interact instead over email and IM.

Continue reading Four short links: 5 July 2018.

Categories: Technology

Four short links: 4 July 2018

O'Reilly Radar - Wed, 2018/07/04 - 04:30

Engagement, Leadership, Code Viz, and Automation

  1. The Three Games of Customer Engagement Strategy -- know what the growth hacker behind your favorite apps is trying to get you to do. Either you are playing to win attention, transactions, or productivity.
  2. Founder to CEO: Matt's Book for Startups -- really good systems and mental models for being effective as a leader.
  3. A Human-Readable Interactive Representation of a Code Library -- The interactive document below is an alternate representation of Fuzzyset.js. I created it as an experiment to help me and other programmers understand the internal workings of the library. And I made it look like a page on GitHub to simulate what it might be like if these kinds of documents were commonly provided with programs.
  4. Manual Work Is a Bug -- Four phases: Document the steps, create automation equivalents, create automation, self-service and autonomous services.

Continue reading Four short links: 4 July 2018.

Categories: Technology

120+ new live online training courses for July and August

O'Reilly Radar - Wed, 2018/07/04 - 03:00

Get hands-on training in machine learning, software architecture, Java, Kotlin, leadership skills, and many other topics.

Develop and refine your skills with 120+ new live online training courses we opened up for July and August on our learning platform.

Space is limited and these courses often fill up.

Artificial intelligence and machine learning

Building Intelligent Systems with AI and Deep Learning, July 16

Machine Learning in Practice, August 3

Hands-on Machine Learning with Python: Classification and Regression, August 3

Getting Started with Computer Vision Using Go, August 6

Deep Learning for Natural Language Processing (NLP), August 8

Building Deep Learning Model Using Tensorflow, August 9-10

Hands-on Machine Learning with Python: Clustering, Dimension Reduction, and Time Series Analysis, August 13

Essential Machine Learning and Exploratory Data Analysis with Python and Jupyter Notebook, August 13-14

Machine Learning with R, August 13-14

Deep Reinforcement Learning, August 16

Artificial Intelligence: An Overview for Executives, August 17

Reinforcement Learning with Tensorflow and Keras, August 23-24

Democratizing Machine Learning: A Dive into Google Cloud Machine Learning APIs, August 24

Blockchain

Understanding Hyperledger Fabric Blockchain, August 13-14

Blockchain Applications and Smart Contracts, August 16

Introducing Blockchain, August 27

Business

Introduction to Critical Thinking, August 3

Managing Team Conflict, August 7

Introduction to Strategic Thinking Skills, August 8

Creating a Great Employee Experience through Onboarding, August 9

Introduction to Leadership Skills, August 16

Leadership Communication Skills for Managers, August 16

Introduction to Customer Experience, August 16

Introduction to Delegation Skills, August 16

Having Difficult Conversations, August 23

Data science and data tools

Shiny R, July 27

Applied Network Analysis for Data Scientists: A Tutorial for Pythonistas, July 30-31

Getting Started with Pandas, August 1

Beginning Data Analysis with Python and Jupyter, August 1-2

Mastering Pandas, August 2

Understanding Data Science Algorithms in R: Regression, August 6

Understanding Data Science Algorithms in R: Scaling, Normalization, and Clustering, August 10

Apache Hadoop, Spark, and Big Data Foundations, August 16

Rich Documents with R Markdown, August 16

Mastering Relational SQL Querying, August 21-22

Hands-on Introduction to Apache Hadoop and Spark Programming, August 21-22

Design

VUI Design Fundamentals, August 1

Product management

Information Architecture: Research and Design, August 28

Introduction to Project Management, August 28

Programming

Advanced SQL Series: Relational Division, July 9

Reactive Spring Boot, July 13

Pythonic Design Patterns, July 23

Designing Bots and Conversational Apps for Work, July 24

Test-Driven Development In Python, July 24

Beyond Python Scripts: Logging, Modules, and Dependency Management, July 25

Beyond Python Scripts: Exceptions, Error Handling, and Command-Line Interfaces, July 26

Spring Boot and Kotlin, July 30

Players Making Decisions, August 1

Bash Shell Scripting in 3 Hours, August 1

Reactive Spring Boot, August 2

Introduction to Modularity with the Java 9 Platform Module System (JPMS), August 6

Creating a Custom Skill for Amazon Alexa, August 6

Get Started With Kotlin, August 6-7

JavaScript the Hard Parts: Closures, August 10

Python Data Handling: A Deeper Dive, August 13

Getting Started with Python’s Pytest, August 13

Building Chatbots for the Google Assistant using Dialogflow, August 14

Design Patterns Boot Camp, August 14-15

Advanced SQL Series: Window Functions, August 15

Scala Fundamentals: From Core Concepts to Real Code in 5 Hours, August 15

Building Chatbots with AWS, August 17

Fundamentals of Virtual Reality Technology and User Experience, August 17

Mastering Python’s Pytest, August 17

Scalable Programming with Java 8 Parallel Streams, August 20

Scaling Python with Generators, August 20

Test-Driven Development In Python, August 21

Pythonic Object-Oriented Programming, August 22

Linux Foundation System Administrator (LFCS) Crash Course, August 22-24

Beyond Python Scripts: Logging, Modules, and Dependency Management, August 22

Pythonic Design Patterns, August 23

Interactive Java with JShell, August 27

Python: The Next Level, August 27-28

IoT Fundamentals, August 29-30

OCA Java SE 8 Programmer Certification Crash Course, August 29-31

Reactive Programming with Java 8 Completable Futures, August 30

Learn Linux in 3 Hours, August 30

Beyond Python Scripts: Exceptions, Error Handling, and Command-Line Interfaces, August 31

Security

Introduction to Encryption, August 2

Cyber Security Fundamentals, August 2-3

AWS Security Fundamentals, August 7

Certified Ethical Hacker (CEH) Certification Crash Course, August 14-15

Amazon Web Services (AWS) Security Crash Course, August 17

Introduction to Ethical Hacking and Penetration Testing, August 21-22

CompTIA Security+ SY0-501 Crash Course, August 21-22

CompTIA Network+ Crash Course, August 21-23

Cybersecurity Offensive and Defensive Techniques in 3 Hours, August 24

CISSP Crash Course, August 28-29

Introduction to Digital Forensics and Incident Response (DFIR), August 30

Software architecture

AWS Certified Solutions Architect Associate Crash Course, July 25-26

Implementing Evolutionary Architectures, August 1-2

Shaping and Communicating Architectural Decisions, August 15

From Developer to Software Architect, August 22-23

Software Architecture for Developers, August 24

Visualizing Software Architecture with the C4 Model, August 27

Amazon Web Services: Architect Associate Certification Core Architecture Concepts, August 29-30

Systems engineering and operations

Introduction to Google Cloud Platform, August 1-2

Getting Started with Azure App Service Web Apps, August 2

Deploying Container-Based Microservices on AWS, August 7-8

Continuous Delivery with Jenkins and Docker, August 9

Google Cloud Certified Associate Cloud Engineer Crash Course, August 9-10

AWS Certified Cloud Practitioner Crash Course, August 9-10

Docker: Up and Running, August 9-10

Docker: Beyond the Basics (CI & CD), August 13-14

Amazon Web Services: AWS Managed Services, August 14-15

Practical Kubernetes, August 16-17

Building a Cloud Roadmap, August 20

Amazon Web Services: AWS Design Fundamentals, August 20-21

Getting Started with Amazon Web Services (AWS), August 21-22

Ansible in 3 Hours, August 28

Web programming

REST: A Hands-On Guide to GraphQL and Queryable APIs, August 3

Full Stack Development with MEAN, August 7-8

Better Angular Applications with Observables QuickStart, August 9

Component Driven Architecture in Angular, August 13

Building APIs with Django REST Framework, August 13

First Steps with Angular, August 14

Angular Testing QuickStart, August 15

Bootstrap Responsive Design and Development, August 15-17

Getting Started with HTML and CSS, August 16

Advanced Angular Applications with NgRx, August 23-24

Beginning Responsive Web Development with HTML and CSS, August 27-28

Web Services with Node, August 27-28

Modern JavaScript, August 28

Continue reading 120+ new live online training courses for July and August.

Categories: Technology

Four short links: 3 July 2018

O'Reilly Radar - Tue, 2018/07/03 - 03:25

Automation and Employment, Matrices for Deep Learning, Tim Berners-Lee, and How to Read

  1. The Rise of the Robot Reserve Army: Automation and the Future of Economic Development, Work, and Wages in Developing Countries -- In an adaption of the Lewis model of economic development, the paper uses a simple framework in which the potential for automation creates “unlimited supplies of artificial labor,” particularly in the agricultural and industrial sectors due to technological feasibility. This is likely to create a push force for labor to move into the service sector, leading to a bloating of service-sector employment and wage stagnation but not to mass unemployment, at least in the short-to-medium term. (via Sam Kinsley)
  2. The Matrix Calculus You Need for Deep Learning -- We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep learning in practice; rather, this material is for those who are already familiar with the basics of neural networks and wish to deepen their understanding of the underlying math.
  3. Solid: Recentralizing the Web -- Tim Berners-Lee's latest project. Solid (derived from "social linked data") is a proposed set of conventions and tools for building decentralized web applications based on linked data principles. (via Vanity Fair)
  4. How to Read (Robert Heaton) -- purposefully reading, with note-taking so you can write a review and build a memory deck.

Continue reading Four short links: 3 July 2018.

Categories: Technology

Four short links: 2 July 2018

O'Reilly Radar - Mon, 2018/07/02 - 03:20

Soft Robots, Debugging Serverless, Map Privacy, and Building Footprints

  1. Adaptive and Resilient Soft Tensegrity Robots -- neat idea for soft robots that invent gaits with a minimum of physical trails, and the video is cute.
  2. Debugging Serverless -- what observability means in this context, and the things you have to pay attention to if you want observability.
  3. Apple Maps and Privacy -- buried in this piece on Apple rebuilding its Maps data using cars driving the streets: “We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data—when we do it—in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person who went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”
  4. U.S. Building Footprints -- This data set contains 124,885,597 computer-generated building footprints in all 50 U.S. states. This data is freely available for download and use. Contributed by Microsoft. (via Bing blog)

Continue reading Four short links: 2 July 2018.

Categories: Technology

TensorFlow Day is coming to OSCON

O'Reilly Radar - Mon, 2018/07/02 - 03:00

Explore TensorFlow’s applications and its community on July 17 at TensorFlow Day at OSCON.

Machine learning (ML) is everywhere in computing and the popular press right now, and the rapid rate of innovation is being driven by open source software. TensorFlow is one of the most popular open source ML frameworks, and the subject of TensorFlow Day at OSCON this year (July 17 in Portland, Oregon).

As an open source endeavor, TensorFlow is quite unusual: what’s available on GitHub is really the same code that is used daily in production at Google. And thanks to being open source, it’s now used by a universe of users, from academia to industry, and in places as unexpected as high schools and the arts.

Starting from its core as a scalable Python and C++ API, the project has exploded in 2018, becoming relevant in many new areas of software.

Machine learning isn’t just for AI researchers; it has important applications in almost any modern software system, from fraud detection and object recognition to robotics and generative art. As TensorFlow spreads out from its original release two years ago at Google, our user and contributor community is vital to the project. The imagination of the community leads to ever more amazing applications and tools.

In holding the TensorFlow Day at OSCON, we wanted to achieve a couple of things. First, create a place for the community to meet face to face around TensorFlow, learn how to use it, and how to contribute. And second, start the conversation with other open source projects about how machine learning can be used.

We’ll have a day of excellent talks, both from the TensorFlow team and from community members, on topics ranging from training and deploying machine learning to its application in areas such as health and music. We’ll also be talking frankly about how we resource an open source project of TensorFlow’s scale: building it, handling pull requests, and triaging issues. And we care about the responsible use of artificial intelligence, so we’re including an introduction to machine learning fairness.

Thanks to a collaboration with IBM, we will be running a Hacking Room alongside the day’s talks. This will give groups space to work together on aspects of TensorFlow, learn, and collaborate. You can see the full list on the website, but it’s a place for newcomers as well as existing contributors. There’ll be demos, collaborative sessions, and people willing to help you figure out how machine learning could work in your project.

Please join all of us at TensorFlow Day on July 17 at OSCON in Portland. Attendance is open to any OSCON attendee. As an open community, we want as many people to come as possible, so this includes Expo Plus passes! You can register here.

Continue reading TensorFlow Day is coming to OSCON.

Categories: Technology
