Feed aggregator

It’s time for data scientists to collaborate with researchers in other disciplines

O'Reilly Radar - Thu, 2019/03/28 - 05:45

The O’Reilly Data Show Podcast: Forough Poursabzi-Sangdeh on the interdisciplinary nature of interpretable and interactive machine learning.

In this episode of the Data Show, I spoke with Forough Poursabzi-Sangdeh, a postdoctoral researcher at Microsoft Research New York City. Poursabzi works in the interdisciplinary area of interpretable and interactive machine learning. As models and algorithms become more widespread, many important considerations are becoming active research areas: fairness and bias, safety and reliability, security and privacy, and Poursabzi’s area of focus—explainability and interpretability.

Categories: Technology

Four short links: 28 March 2019

O'Reilly Radar - Thu, 2019/03/28 - 05:10

Data-Oriented Design, Time Zone Hell, Music Algorithms, and Fairness in ML

  1. Data Oriented Design -- A curated list of data-oriented design resources.
  2. Storing UTC is Not a Silver Bullet -- time zones will drive you to drink.
  3. Warner Music Signed an Algorithm to a Record Deal (Verge) -- Although Endel signed a deal with Warner, the deal is crucially not for “an algorithm,” and Warner is not in control of Endel’s product. The label approached Endel with a distribution deal and Endel used its algorithm to create 600 short tracks on 20 albums that were then put on streaming services, returning a 50/50 royalty split to Endel. Unlike a typical major label record deal, Endel didn’t get any advance money paid upfront, and it retained ownership of the master recordings.
  4. 50 Years of Unfairness: Lessons for Machine Learning -- We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). This work points the way toward future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.

Categories: Technology

The journey to the data-driven enterprise from the edge to AI

O'Reilly Radar - Wed, 2019/03/27 - 13:00

Amy O'Connor explains how Cloudera applies an "edge to AI" approach to collect, process, and analyze data.

Categories: Technology

AI and cryptography: Challenges and opportunities

O'Reilly Radar - Wed, 2019/03/27 - 13:00

Shafi Goldwasser explains why the next frontier of cryptography will help establish safe machine learning.

Categories: Technology

Streamlining your data assets: A strategy for the journey to AI

O'Reilly Radar - Wed, 2019/03/27 - 13:00

Dinesh Nirmal shares a data asset framework that incorporates current business structures and the elements you need for an AI-fluent data platform.

Categories: Technology

Scoring your business in the AI matrix

O'Reilly Radar - Wed, 2019/03/27 - 13:00

Jed Dougherty plots AI examples on a matrix to clarify the various interpretations of AI.

Categories: Technology

0x64: Our Producer Dan Lynch Interviewed at Copyleft Conf 2019

FAIF - Wed, 2019/03/27 - 11:19

Bradley and Karen interview their own producer, Dan Lynch, on site at Copyleft Conf 2019.

Show Notes:

Segment 0 (00:46)

Segment 1 (5:19)

Segment 2 (28:23)

Bradley and Karen briefly dissect the interview with Dan.

Segment 3 (32:22)

Karen and Bradley mention that they'll discuss the Linux Foundation initiative, “Community Bridge,” in the next episode. If you want a preview of Bradley and Karen's thoughts, you can read their blog post about Linux Foundation's “Community Bridge” initiative.

Send feedback and comments on the cast to <oggcast@faif.us>. You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter.

Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums.

The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).

Categories: Free Software

Four short links: 27 March 2019

O'Reilly Radar - Wed, 2019/03/27 - 04:00

Linkers and Loaders, Low-Low-Low Power Bluetooth, Voice, and NVC

  1. Linkers and Loaders -- the uncorrected manuscript chapters for my Linkers and Loaders, published by Morgan Kaufmann.
  2. <1mW Bluetooth LE Transmitter -- Consuming just 0.6 milliwatts during transmission, it would broadcast for 11 years using a typical 5.8-mm coin battery. Such a millimeter-scale BLE radio would allow these ant-sized sensors to communicate with ordinary equipment, even a smartphone. Ingenious engineering hacks to make this work.
  3. Mumble -- an open source, low-latency, high-quality voice chat software primarily intended for use while gaming.
  4. A Guide to Difficult Conversations (Dave Bailey) -- your quarterly reminder that non-violent communication exists and is a good thing.

Categories: Technology

Four short links: 26 March 2019

O'Reilly Radar - Tue, 2019/03/26 - 04:15

Software Stack, Gig Economy, Simple Over Flexible, and Packet Radio

  1. Thoughts on Conway's Law and the Software Stack (Jessie Frazelle) -- All these problems are not small by any means. They are miscommunications at various layers of the stack. They are people thinking an interface or feature is secure when it is merely a window dressing that can be bypassed with just a bit more knowledge about the stack. I really like the advice Lea Kissner gave: “take the long view, not just the broad view.” We should do this more often when building systems.
  2. Troubles with the Open Source Gig Economy and Sustainability Tip Jar (Chris Aniszczyk) -- thoughtful long essay with a lot of links for background reading, on the challenges of sustainability via Patreon, etc., through to some signs of possibly-working models.
  3. Choose Simple Solutions Over Flexible Ones -- flexibility does not come for free.
  4. New Packet Radio (Hackaday) -- a custom radio protocol, designed to transport bidirectional IP traffic over 430MHz radio links (ham radio). This protocol is optimized for "point to multipoint" topology, with the help of managed-TDMA. Note that Hacker News commenters indicate some possible FCC violations, though as the project comes from France, that's probably not a problem for the creators of the software.

Categories: Technology

Four short links: 25 March 2019

O'Reilly Radar - Mon, 2019/03/25 - 04:00

Hiring for Neurodiversity, Reprogrammable Molecular Computing, Retro UUCP, and Industrial Go

  1. Dell's Neurodiversity Program -- excellent work from Dell making themselves an attractive destination for folks on the autistic spectrum.
  2. Reprogrammable Molecular Computing System (Caltech) -- The researchers were able to experimentally demonstrate 6-bit molecular algorithms for a diverse set of tasks. In mathematics, their circuits tested inputs to assess if they were multiples of three, performed equality checks, and counted to 63. Other circuits drew "pictures" on the DNA "scarves," such as a zigzag, a double helix, and irregularly spaced diamonds. Probabilistic behaviors were also demonstrated, including random walks as well as a clever algorithm (originally developed by computer pioneer John von Neumann) for obtaining a fair 50/50 random choice from a biased coin. Paper. (A short Python sketch of the fair-coin trick follows this list.)
  3. Dataforge UUCP -- it's like Cory Doctorow guestwrote our timeline: UUCP over SSH to give decentralized comms for freedom fighters.
  4. Go for Industrial Programming (Peter Bourgon) -- I’m speaking today about programming in an industrial context. By that I mean: in a startup or corporate environment; within a team where engineers come and go; on code that outlives any single engineer; and serving highly mutable business requirements. [...] I’ve tried to select for areas that have routinely tripped up new and intermediate Gophers in organizations I’ve been a part of, and particularly those things that may have nonobvious or subtle implications. (via ceej)

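The fair 50/50-from-a-biased-coin trick mentioned in item 2 is simple enough to sketch in a few lines of Python. This is just the software version of von Neumann's algorithm, with an assumed bias of 0.7; it has nothing to do with the DNA implementation described in the paper.

    import random

    def biased_coin(p_heads: float = 0.7) -> bool:
        """A biased coin: True (heads) with probability p_heads (assumed value)."""
        return random.random() < p_heads

    def fair_flip() -> bool:
        """Von Neumann's trick: flip twice; heads-tails means heads, tails-heads
        means tails, and matching flips are discarded. The two accepted outcomes
        each occur with probability p*(1-p), so the result is unbiased."""
        while True:
            first, second = biased_coin(), biased_coin()
            if first != second:
                return first

    # The empirical frequency of heads should be close to 0.5.
    flips = [fair_flip() for _ in range(100_000)]
    print(sum(flips) / len(flips))
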
Categories: Technology

Four short links: 22 March 2019

O'Reilly Radar - Fri, 2019/03/22 - 04:40

Explainable AI, Product Management, REPL for Games, and Open Source Inventory

  1. XAI -- An explainability toolbox for machine learning. Follows the Institute for Ethical AI & Machine Learning's 8 principles.
  2. The Producer Playbook -- Guidelines and best practices for producers and project managers.
  3. Repl.it Adds Graphics -- PyGame in the browser, in fast turnaround time.
  4. ScanCode Toolkit -- detects licenses, copyrights, package manifests and dependencies, and more by scanning code ... to discover and inventory open source and third-party packages used in your code.

Categories: Technology

Automating ethics

O'Reilly Radar - Fri, 2019/03/22 - 04:15

Machines will need to make ethical decisions, and we will be responsible for those decisions.

We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I've suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated: merely claiming to make data-based recommendations without taking anything else into account is an ethical stance. We need to do better, and the only way to do better is to build ethics into those systems. This is a problematic and troubling position, but I don't see any alternative.

The problem with data ethics is scale. Scale brings a fundamental change to ethics, and not one that we're used to taking into account. That’s important, but it’s not the point I’m making here. The sheer number of decisions that need to be made means that we can’t expect humans to make those decisions. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision.

Gmail’s handling of spam is a good example of a program that makes ethical decisions responsibly. We’re all used to spam blocking, and we don’t object to it, at least partly because email would be unusable without it. And blocking spam requires making ethical decisions automatically: deciding that a message is spam means deciding what other people can and can’t say, and who they can say it to.

There’s a lot we can learn from spam filtering. It only works at scale; Google and other large email providers can do a good job of spam filtering because they see a huge volume of email. (Whether this centralization of email is a good thing is another question.) When their servers see an incoming message that matches certain patterns across their inbound email, that message is marked as spam and sorted into recipients’ spam folders. Spam detection happens in the background; we don’t see it. And the automated decisions aren’t final: you can check the spam folder and retrieve messages that were spammed by mistake, and you can mark messages that are misclassified as not-spam.

Credit card fraud detection is another system that makes ethical decisions for us. Most of us have had a credit card transaction rejected and, upon calling the company, found that the card had been cancelled because of a fraudulent transaction. (In my case, a motel room in Oklahoma.) Unfortunately, fraud detection doesn’t work as well as spam detection; years later, when my credit card was repeatedly rejected at a restaurant that I patronized often, the credit card company proved unable to fix the transactions or prevent future rejections. (Other credit cards worked.) I’m glad I didn’t have to pay for someone else’s stay in Oklahoma, but an implementation of ethical principles that can’t be corrected when it makes mistakes is seriously flawed.

So, machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule. And those decisions have an increasingly powerful effect on our lives. Machines determine what posts we see on Facebook, what videos are recommended to us on YouTube, what products are recommended on Amazon. Why did Google News suddenly start showing me alt-right articles about a conspiracy to deny Cornell University students’ inalienable right to hamburgers? I think I know; I’m a Cornell alum, and Google News “thought” I’d be interested. But I’m just guessing, and I have precious little control over what Google News decides to show me. Does real news exist if Google or Facebook decides to show me burger conspiracies instead? What does “news” even mean if fake conspiracy theories are on the same footing? Likewise, does a product exist if Amazon doesn’t recommend it? Does a song exist if YouTube doesn’t select it for your playlist?

These data flows go both ways. Machines determine who sees our posts, who receives data about our purchases, who finds out what websites we visit. We’re largely unaware of those decisions, except in the most grotesque sense: we read about (some of) them in the news, but we’re still unaware of how they impact our lives.

Don’t misconstrue this as an argument against the flow of data. Data flows, and data becomes more valuable to all of us as a result of those flows. But as Helen Nissenbaum argues in her book Privacy in Context, those flows result in changes in context, and when data changes context, the issues quickly become troublesome. I am fine with medical imagery being sent to a research study where it can be used to train radiologists and the AI systems that assist them. I’m not OK with those same images going to an insurance consortium, where they can become evidence of a “pre-existing condition,” or to a marketing organization that can send me fake diagnoses. I believe fairly deeply in free speech, so I’m not too troubled by the existence of conspiracy theories about Cornell’s dining service; but let those stay in the context of conspiracy theorists. Don’t waste my time or my attention.

I’m also not suggesting that machines make ethical choices in the way humans do: ultimately, humans bear responsibility for the decisions their machines make. Machines only follow instructions, whether those instructions are concrete rules or the arcane computations of a neural network. Humans can’t absolve themselves of responsibility by saying, “The machine did it.” We are the only ethical actors, even when we put tools in place to scale our abilities.

If we’re going to automate ethical decisions, we need to start from some design principles. Spam detection gives us a surprisingly good start. Gmail’s spam detection assists users. It has been designed to happen in the background and not get in the user’s way. That’s a simple but important statement: ethical decisions need to stay out of the user’s way. It’s easy to think that users should be involved with these decisions, but that defeats the point: there are too many decisions, and giving permission each time an email is filed as spam would be much worse than clicking on a cookie notice for every website you visit. But staying out of the user's way has to be balanced against human responsibility: ambiguous or unclear situations need to be called to the user's attention. When Gmail can't decide whether or not a message is spam, it passes it on to the user, possibly with a warning.

A second principle we can draw from spam filtering is that decisions can’t be irrevocable. Emails tagged as spam aren’t deleted for 30 days; at any time during that period, the user can visit the spam folder and say “that’s not spam.” In a conversation, Anna Lauren Hoffmann said it’s less important to make every decision correctly than to have a means of redress by which bad decisions can be corrected. That means of redress must be accessible by everyone, and it needs to be human, even though we know humans are frequently biased and unfair. It must be possible to override machine-made decisions, and moving a message out of the spam folder overrides that decision.
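
To make these two principles concrete, here is a minimal sketch of a filter that acts silently only when it is confident, flags ambiguous messages for the user's attention, and keeps every decision revocable. The thresholds, names, and scoring here are invented for illustration; this is not how Gmail actually works.

    from dataclasses import dataclass, field

    # Assumed thresholds, purely for illustration.
    SPAM_THRESHOLD = 0.95     # confident enough to act silently
    REVIEW_THRESHOLD = 0.60   # ambiguous: act, but warn the user

    @dataclass
    class Mailbox:
        inbox: list = field(default_factory=list)
        spam: list = field(default_factory=list)

        def deliver(self, message: dict, spam_score: float) -> None:
            """Decide in the background, but escalate ambiguous cases."""
            if spam_score >= SPAM_THRESHOLD:
                self.spam.append(message)          # silent, out of the user's way
            elif spam_score >= REVIEW_THRESHOLD:
                message["warning"] = "possible spam -- please review"
                self.spam.append(message)          # flagged for the user's attention
            else:
                self.inbox.append(message)

        def not_spam(self, message: dict) -> None:
            """The means of redress: the user overrides the machine's decision."""
            self.spam.remove(message)
            self.inbox.append(message)
            # A production system would also feed this correction back to the model.

    # Example: a borderline message gets flagged, and the user can still override.
    box = Mailbox()
    box.deliver({"subject": "You may have won"}, spam_score=0.7)
    box.not_spam(box.spam[0])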

When the model for spam detection is systematically wrong, users can correct it. It’s easy to mark a message as “spam” or “not spam.” This kind of correction might not be appropriate for more complex applications. For example, we wouldn’t want real estate agents “correcting” a model to recommend houses based on race or religion; and we could even discuss whether similar behavior would be appropriate for spam detection. Designing effective means of redress and correction may be difficult, and we’ve only dealt with the simplest cases.

Ethical problems arise when a company’s interest in profit comes before the interests of the users. We see this all the time: in recommendations designed to maximize ad revenue via “engagement”; in recommendations that steer customers to Amazon’s own products, rather than other products on their platform. The customer’s interest must always come before the company’s. That applies to recommendations in a news feed or on a shopping site, but also to how the customer’s data is used and where it’s shipped. Facebook believes deeply that “bringing the world closer together” is a social good but, as Mary Gray said on Twitter, when we say that something is a “social good,” we need to ask: “good for whom?” Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren’t all the same, and depend deeply on who’s connected and how.

Many discussions of ethical problems revolve around privacy. But privacy is only the starting point. Again, Nissenbaum clarifies that the real issue isn’t whether data should be private; it’s what happens when data changes context. Privacy tools alone could not have protected the pregnant Target customer who was outed to her parents. The problem wasn’t with privacy technology, but with the intention: to use purchase data to target advertising circulars. How can we control data flows so those flows benefit, rather than harm, the user? "Datasheets for datasets" is a proposal for a standard way to describe data sets; model cards proposes a standard way to describe models. While neither of these is a complete solution, I can imagine a future version of these proposals that standardizes metadata so data routing protocols can determine which flows are appropriate and which aren't. It’s conceivable that the metadata for data could describe what kinds of uses are allowable (extending the concept of informed consent), and that the metadata for models could describe how data might be used. That's work that hasn't been started, but it's work that's needed.
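
To picture the kind of routing check such metadata could enable, here is a hypothetical sketch. The field names and the idea of machine-readable "allowed purposes" are invented for the example; they are not part of the published datasheets-for-datasets or model-cards proposals.

    # Hypothetical metadata for a dataset; every field name here is invented
    # for illustration and is not part of any published standard.
    dataset_metadata = {
        "name": "chest-xray-images",
        "allowed_purposes": {"medical-research", "clinician-training"},
        "prohibited_purposes": {"insurance-underwriting", "marketing"},
        "consent": "informed consent for research use only",
    }

    def flow_allowed(metadata: dict, requesting_purpose: str) -> bool:
        """Approve a data flow only if it keeps the data in an allowed context."""
        if requesting_purpose in metadata["prohibited_purposes"]:
            return False
        return requesting_purpose in metadata["allowed_purposes"]

    print(flow_allowed(dataset_metadata, "medical-research"))        # True
    print(flow_allowed(dataset_metadata, "insurance-underwriting"))  # False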

Whatever solutions we end up with, we must not fall in love with the tools. It’s entirely too easy for technologists to build some tools and think they’ve solved a problem, only to realize the tools have created their own problems. Differential privacy can safeguard personal data by adding random records to a database without changing its statistical properties, but it can also probably protect criminals by hiding evidence. Homomorphic encryption, which allows systems to do computations on encrypted data without first decrypting it, can probably be used to hide the real significance of computations. Thirty years of experience on the internet has taught us that routing protocols can be abused in many ways; protocols that use metadata to route data safely can no doubt be attacked. It's possible to abuse or to game any solution. That doesn’t mean we shouldn’t build solutions, but we need to build them knowing they aren’t bulletproof, that they’re subject to attack, and that we are ultimately responsible for their behavior.

Our lives are integrated with data in ways our parents could never have predicted. Data transfers have gone way beyond faxing a medical record or two to an insurance company, or authorizing a credit card purchase over an analog phone line. But as Thomas Wolfe wrote, we can’t go home again. There's no way back to some simpler world where your medical records were stored on paper in your doctor’s office, your purchases were made with cash, and your smartphone didn’t exist. And we wouldn’t want to go back. The benefits of the new data-rich world are immense. Yet, we live in a "data smog" that contains everyone's purchases, everyone's medical records, everyone’s location, and even everyone’s heart rate and blood pressure.

It's time to start building the systems that will truly assist us to manage our data. These machines will need to make ethical decisions, and we will be responsible for those decisions. We can’t avoid that responsibility; we must take it up, difficult and problematic as it is.

Categories: Technology

Four short links: 21 March 2019

O'Reilly Radar - Thu, 2019/03/21 - 05:15

Newsletters, Confidence Intervals, Reverse Engineering, and Human Scale

  1. Email Newsletters: The New Social Media (NYT) -- “With newsletters, we can rebuild all of the direct connections to people we lost when the social web came along.”
  2. Scientists Rise Up Against Statistical Significance (Nature) -- want to replace p-values with confidence intervals, which are easier to interpret without special training. Sample intro to p-values and confidence intervals.
  3. Cutter -- A Qt and C++ GUI for radare2 reverse engineering framework. Its goal is making an advanced, customizable, and FOSS reverse-engineering platform while keeping the user experience in mind. Cutter is created by reverse engineers for reverse engineers.
  4. Computer Latency at a Human Scale -- if a CPU cycle is 1 second, then SSD I/O takes 1.5-4 days, and rotational disk I/O takes 1-9 months. Also in the Hacker News thread, human-scale storage: if a byte is a letter, then a 4kb page of memory is 1 sheet of paper, a 256kb L2 cache is a 64-page binder on the desk, and a 1TB SSD is a warehouse of books.

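The scaling in item 4 is easy to reproduce as back-of-the-envelope arithmetic. The latencies below are ballpark assumptions chosen for illustration, not figures taken from the linked post.

    # Ballpark latencies (assumptions, not the linked post's exact numbers).
    CYCLE = 0.4e-9    # ~0.4 ns per cycle on a ~2.5 GHz core
    SSD_IO = 100e-6   # ~100 microseconds for an SSD read
    DISK_IO = 5e-3    # ~5 ms for a rotational-disk seek and read

    scale = 1.0 / CYCLE  # stretch one cycle to one human-scale second

    def human(seconds: float) -> str:
        days = seconds / 86_400
        return f"{days:.1f} days" if days < 60 else f"{days / 30:.1f} months"

    print("SSD I/O  ->", human(SSD_IO * scale))    # roughly 3 days
    print("Disk I/O ->", human(DISK_IO * scale))   # roughly 5 months
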
Categories: Technology

The fundamental problem with Silicon Valley’s favorite growth strategy

O'Reilly Radar - Thu, 2019/03/21 - 03:00

Our entire economy seems to have forgotten that workers are also consumers, and suppliers are also customers.

The pursuit of monopoly has led Silicon Valley astray.

Look no further than the race between Lyft and Uber to dominate the online ride-hailing market. Both companies are gearing up for their IPOs in the next few months. Street talk has Lyft shooting for a valuation between $15 billion and $30 billion, and Uber valued at an astonishing $120 billion. Neither company is profitable; their enormous valuations are based on the premise that if a company grows big enough and fast enough, profits will eventually follow.

Monopolies and duopolies have historically developed over time and been considered dangerous to competitive markets; now they are sought after from the start and are the holy grail for investors. If LinkedIn co-founder Reid Hoffman and entrepreneur Chris Yeh’s new book Blitzscaling is to be believed, the Uber-style race to the top (or the bottom, depending on your point of view) is the secret of success for today’s technology businesses.

Blitzscaling promises to teach techniques that are “the lightning-fast path to building massively valuable companies.” Hoffman and Yeh argue that in today’s world, it’s essential to “achieve massive scale at incredible speed” in order to seize the ground before competitors do. By their definition, blitzscaling (derived from the blitzkrieg or “lightning war” strategy of Nazi general Heinz Guderian) “prioritizes speed over efficiency,” and risks “potentially disastrous defeat in order to maximize speed and surprise.”

Many of these businesses depend on network effects, which means the company that gets to scale first is likely to stay on top. So, for startups, this strategy typically involves raising a lot of capital and moving quickly to dominate a new market, even when the company’s leaders may not know how they are going to make money in the long term.

This premise has become doctrine in Silicon Valley. But is it correct? And is it good for society? I have my doubts.

Imagine, for a moment, a world in which Uber and Lyft hadn’t been able to raise billions of dollars in a winner-takes-all race to dominate the online ride-hailing market. How might that market have developed differently?

Uber and Lyft have developed powerful services that delight their users and are transforming urban transportation. But if they hadn’t been given virtually unlimited capital to offer rides at subsidized prices taxicabs couldn’t match in order to grow their user base at blitzscaling speed, would they be offering their service for less than it actually costs to deliver? Would each company be spending 55% of net revenue on driver incentives, passenger discounts, sales, and marketing to acquire passengers and drivers faster than the other? Would these companies now be profitable instead of hemorrhaging billions of dollars a year? Would incumbent transportation companies have had more time to catch up, leading to a more competitive market? Might drivers have gotten a bigger share of the pie? Would a market that grew more organically—like the web, e-commerce, smartphones, or mobile mapping services—have created more value over the long term?

We’ll never know, because investors, awash in cheap capital, anointed the winners rather than letting the market decide who should succeed and who should fail. This created a de facto duopoly long before either company had proven that it had a sustainable business model. And because these two giants are now locked in a capital-fueled deathmatch, the market is largely closed off to new ideas except from within the existing, well-funded companies.

The case for blitzscaling

There are plenty of reasons to believe that blitzscaling makes sense. The internet is awash in billionaires who made their fortunes by following a strategy summed up in Mark Zuckerberg’s advice to “move fast and break things.” Hoffman and Yeh invoke the storied successes of Apple, Microsoft, Amazon, Google, and Facebook, all of whom have dominated their respective markets and made their founders rich in the process, and suggest that it is blitzscaling that got them there. And the book tells compelling tales of current entrepreneurs who have outmaneuvered competitors by pouring on the gas and moving more quickly. Hoffman recalls his own success with the blitzscaling philosophy during the early days of PayPal. Back in 2000, the company was growing 5% per day, letting people settle their charges using credit cards while using the service for free. This left the company to absorb, ruinously, the 3% credit card charge on each transaction. He writes:

I remember telling my old college friend and PayPal co-founder/CEO Peter Thiel, 'Peter, if you and I were standing on the roof of our office and throwing stacks of hundred-dollar bills off the edge as fast as our arms could go, we still wouldn’t be losing money as quickly as we are right now.'

But it worked out. PayPal built an enormous user base quickly, giving the company enough market power to charge businesses to accept PayPal payments. They also persuaded most customers to make those payments via direct bank transfers, which have much lower fees than credit cards. If they’d waited to figure out the business model, someone else might have beaten them to the customer that made them a success: eBay, which went on to buy PayPal for $1.5 billion (which everyone thought was a lot of money in those days), launching Thiel and Hoffman on their storied careers as serial entrepreneurs and investors.

Of course, for every company like Paypal that pulled off that feat of hypergrowth without knowing where the money would come from, there is a dotcom graveyard of hundreds or thousands of companies that never figured it out. That’s the “risks potentially disastrous defeat” part of the strategy that Hoffman and Yeh talk about. A strong case can be made that blitzscaling isn’t really a recipe for success but rather survivorship bias masquerading as a strategy.

However, Hoffman and Yeh do a good job of explaining the conditions in which blitzscaling makes sense: The market has to be really big; there has to be a sustainable competitive advantage (e.g., network effects) from getting bigger faster than the competition; you have to have efficient means to bootstrap distribution; and you have to have high gross margins so the business will generate positive cash flow and profits when it does get to scale. This is good management advice for established companies as well as startups, and the book is chock full of it.

Hoffman and Yeh also make the point that what most often drives the need for blitzscaling is competition; an entrepreneur with a good idea can be too close to the center of the bullseye, inevitably drawing imitators. The book opens with an excellent tale of how Airbnb used blitzscaling to respond to the threat of a European copycat company by raising money to open and aggressively expand its own European operations years before the company would otherwise have chosen to do so.

But sometimes it isn’t just the threat of competition that drives the need to turbocharge growth: it’s the size and importance of the opportunity, and the need to get big fast enough to effect change. For example, you can make the case that if Uber and Lyft and Airbnb hadn’t blitzscaled, they would have been tied up in bureaucratic red tape, and the future they are trying to build wouldn’t just have happened more slowly; it would never have happened.

The strategic use of blitzscaling isn’t limited to startups. It can also apply to large companies, governments, and even nonprofits. For example, we’re facing a blitzscaling opportunity right now at Code for America, the non-profit founded and run by my wife Jennifer Pahlka, and on whose board I serve.

Our mission is to use the principles and practices of the digital age to improve how government serves the American public, and how the public improves government. Since Code for America is a non-profit, we aren’t trying to “take the market.” There’s no financial imperative to seize an opportunity before someone else does. Our goal is to show what’s possible, to build a constituency and a consensus for a change in the way government does things, and to encourage the development of an ecosystem of new vendors who can work with government the same way we do. By demonstrating that the work of government can be done quickly and cheaply at massive scale using open source software, machine learning, and other 21st-century technology, we look to shape the expectations of the market.

So why is blitzscaling relevant to us? It’s not about making millions and snuffing out the competition—as in many of the most compelling cases for blitzscaling—it’s about building enough momentum to break through the stone walls of an old established order. In our case, we are attempting to save taxpayers money and radically alter the lives of millions of Americans.

Here’s a concrete example: one of the areas we’ve gotten deeply involved in is criminal justice reform. Specifically, we’re helping governments implement new laws and initiatives to redress 30 years of over-aggressive policy that has left almost 70 million Americans with some kind of criminal record and 2.2 million behind bars. (That’s the highest percentage in the world.) A broad consensus is emerging on both left and right that it’s time to rethink our criminal justice system.

Too often, though, those passing new laws have given insufficient thought to their implementation, leaving existing bureaucratic processes in place. For example, to clear a criminal record under 2014’s California Proposition 47, which reduced the penalty for many crimes by reclassifying them as misdemeanors rather than felonies, a person must go to the District Attorney’s office in each jurisdiction where they have a record, ask the DA to download their rap sheet, determine eligibility by assessing the obscure codes on the rap sheet, and, if eligible, petition the court for clearance. Facing such a cumbersome, expensive process, only a few thousand of those eligible were able to clear their records.

After the passage in 2016 of California Proposition 64, which decriminalized marijuana and added millions to the rolls of those who had criminal records eligible to be expunged, San Francisco District Attorney George Gascon announced a program for automatic expungement. The DA’s office would not wait for petitioners to appear, but would preemptively download and evaluate all eligible records. Unfortunately, lacking technology expertise, Gascon’s office set out to do it with manual labor, hiring paralegals to download and evaluate the rap sheets and file the necessary paperwork with the courts.

When we demonstrated that we could download the records in bulk and automate the evaluation of rap sheets, working through thousands of records in minutes and automatically generating the paperwork for clearance, they were all in. True automatic expungement looks like a real possibility. Now we aim to scale up our team to support the entire state in this ambitious program.
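
The bulk evaluation step is conceptually simple. The sketch below is purely illustrative: the offense codes, record format, and eligibility test are invented for the example and are not Code for America's actual Clear My Record logic, which handles far messier real-world records.

    # Purely illustrative: the codes, record format, and rules below are
    # invented for this sketch, not Clear My Record's actual implementation.
    ELIGIBLE_CODES = {"HS11357", "HS11358", "HS11359", "HS11360"}  # example marijuana offenses

    def eligible_convictions(rap_sheet: dict) -> list:
        """Return the convictions on one parsed rap sheet that look eligible."""
        return [c for c in rap_sheet["convictions"] if c["code"] in ELIGIBLE_CODES]

    def generate_petitions(rap_sheets):
        """Bulk pass over many records, yielding one petition per eligible conviction."""
        for sheet in rap_sheets:
            for conviction in eligible_convictions(sheet):
                yield {
                    "person": sheet["person_id"],
                    "code": conviction["code"],
                    "county": conviction["county"],
                    "relief": "dismissal or reduction under Proposition 64",
                }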

So what’s the rush? The first reason for urgency is the human toll we can alleviate by getting the work done more quickly. When people can clear their records, it gives them better access to jobs, subsidized housing, and many other benefits.

The second reason is that many other states are also reducing sentences and pushing for record clearance. While we’ve already got our Clear My Record project well underway in California, other states are turning to legacy vendors working through legacy procurement processes. If existing vendors exploit this opportunity and persuade states to sign traditional contracts before we show how cheaply and effectively the job can be done, millions of dollars in public money may be wasted doing it the old way, and years of delay in implementation are at stake. (These contracts typically cost hundreds of millions of dollars and take years to deliver on.)

So we’re asking ourselves, is it enough to show what’s possible and hope that others do the right thing? Or might we get to our desired outcome more effectively if we scale up our own capability to address the problem? The key question we are wrestling with is “how can we move faster?”—which is exactly the question that Hoffman and Yeh’s book seeks to answer.

In short, there are compelling reasons to blitzscale, and the book provides a great deal of wisdom for those facing a strategic inflection point where success depends on moving much faster. But I worry that the book oversells the idea, and that too many entrepreneurs will believe this is the only way to succeed.

Why I’m skeptical of blitzscaling

To understand why I’m skeptical about blitzscaling, you have to understand a bit about my own entrepreneurial history. I started my company, O’Reilly Media, 40 years ago with a $500 initial investment. We took in no venture capital, but despite that have built a business that generates hundreds of millions of dollars of profitable annual revenue. We got there through steady, organic growth, funded by customers who love our products and pay us for them.

Emulating the tortoise, not the hare, has been our goal. We’ve always preferred opportunities where time is an ally, not an enemy. That’s not to say that we haven’t had our share of blitzscaling opportunities, but in each of them, we kickstarted a new market and then let others take the baton.

In 1993, my company launched GNN, the Global Network Navigator, which was the first advertising-supported site on the World Wide Web, and the first web portal. We were so early that we had to persuade the world that advertising was the natural business model for this new medium. We plowed every penny we were making from our business of writing and publishing computer-programming manuals into GNN—our own version of throwing hundred-dollar bills off the rooftop. And for two years, from 1993 until we sold GNN to AOL in mid-1995, we were the place people went to find new websites.

As the commercial web took off, however, it became clear that we couldn’t play by the same rules as we had in the publishing market, where a deep understanding of what customers were looking for, superior products, innovative marketing, and fair prices gave us all the competitive advantages we needed. Here, the market was exploding, and unless we could grow as fast or faster than the market, we’d soon be left behind. And we could see the only way to do that would be to take in massive amounts of capital, with the price of chasing the new opportunity being the loss of control over the company.

Wanting to build a company that I would own and control for the long term, I decided instead to spin out GNN and sell it to AOL. Jerry Yang and David Filo made a different decision at Yahoo!, founded a year after GNN. They took venture capital, blitzscaled their company, and beat AOL to the top of the internet heap—before being dethroned in their turn by Google.

It happened again in 1996 when O’Reilly launched a product called Website, the first commercial Windows-based web server. We’d been drawn to the promise of a web in which everyone was a publisher; instead, websites were being built on big centralized servers, and all most people could do was consume web content via ubiquitous free browsers. So we set out to democratize the web, with the premise that everyone who had a browser should also be able to have a server. Website took off, and became a multimillion-dollar product line for us. But our product was soon eclipsed by Netscape, which had raised tens of millions of dollars of venture capital, and eventually had a multibillion-dollar IPO—before being crushed in turn by Microsoft.

In the case of both GNN and Website, you can see the blitzscaling imperative: a new market is accelerating, and there is a clear choice between keeping up and being left behind. In those two examples, I made a conscious choice against blitzscaling because I had a different vision for my company. But far too many entrepreneurs don’t understand the choice, and are simply overtaken by better-funded competitors who seize the opportunity more boldly. And in too many markets, in the absence of antitrust enforcement, there is always the risk that no matter how much money you raise and how fast you go, the entrenched giants will be able to leverage their existing business to take over the new market you’ve created. That’s what Microsoft did to Netscape, and what Facebook did to Snapchat.

Had we followed Hoffman and Yeh’s advice, we would have taken on a contest we were very unlikely to win, no matter how much money we raised or how fast we moved. And even though we abandoned these two opportunities when the blitzscalers arrived, O’Reilly Media has had enormous financial success, becoming a leader in each of our chosen markets.

Winning at all costs

There’s another point that Hoffman and Yeh fail to address. It matters what stories we tell ourselves about what success looks like. Blitzscaling can be used by any company, but it can encourage a particular kind of entrepreneur: hard-charging, willing to crash through barriers, and often ruthless.

We see the consequences of this selection bias in the history of the on-demand ride-hailing market. Why did Uber emerge the winner in the ride-hailing wars? Sunil Paul, the founder of Sidecar, was the visionary who came up with the idea of a peer-to-peer taxi service provided by people using their own cars. Logan Green, the co-founder of Lyft, was the visionary who had set out to reinvent urban transportation by filling all the empty cars on the road. But Travis Kalanick, the co-founder and CEO of Uber, was the hyper-aggressive entrepreneur who raised money faster, broke the rules more aggressively, and cut the most corners in the race to the top.

In 2000, a full eight years before Uber was founded, Sunil Paul filed a patent that described many of the possibilities that the newly commercialized GPS capabilities would provide for on-demand car sharing. He explored founding a company at that time, but realized that GPS-enabled phones weren’t common enough. It was just too early for his ideas to take hold.

The market Paul had envisioned began, in fits and starts, around 2007. That year, Logan Green and John Zimmer, the founders of Lyft, started a company called Zimride that was inspired by the bottom-up urban jitneys Green had fallen in love with during a trip to Zimbabwe. They began with a web app to match up college students making long-distance trips with others going in the same direction. In 2008, Garrett Camp and Travis Kalanick founded Uber as a high-end service using SMS to summon black-car drivers.

Neither Zimride nor Uber had yet realized the full idea we now think of as smartphone-enabled ride hailing, and each was working toward it from a different end—peer-to-peer, and mobile on-demand respectively. The two ideas were about to meet in an explosive combination, fueled by the wide adoption of GPS-enabled smartphones and app marketplaces. Following the 2007 introduction of the iPhone and, at least as importantly, the 2008 introduction of the iPhone App Store, the iPhone became a platform for mobile applications.

Once again, it was Paul who first saw the future. Inspired by Airbnb’s success in getting ordinary people to rent out their homes, he realized people might also be willing to rent out their cars. He worked on several versions of this idea. In 2009, while working with the founders of what became Getaround in a class he was teaching at Singularity University, Paul explored peer-to-peer fractional car rental. Then, in 2012, he launched a new company, Sidecar, to provide the equivalent of peer-to-peer taxi service, with ordinary people providing the service using their own cars. He set out to get permission from regulatory agencies for this new approach.

Green and Zimmer heard about Paul’s work on Sidecar and realized immediately that this model could help them realize their original vision for Zimride. They pivoted quickly from their original vision, launching Lyft as a project within Zimride about three months after Sidecar. When Lyft took off, they sold the original Zimride product and went all-in on the new offering. (That’s blitzscaling for you. Seize the ground first.)

Uber was an even more aggressive blitzscaler. Hearing about Lyft’s plans, Uber announced UberX, its down-market version of Uber using ordinary people driving their own cars instead of chauffeurs with limousines, the day before Lyft launched, even though all that it had developed in the way of a peer-to-peer driver platform was a press release. In fact, Kalanick, the co-founder and CEO, had been skeptical about the legality of the peer-to-peer driver model, telling Jason Calacanis, the host of the podcast This Week in Startups, “It would be illegal.”

And the race was on. Despite his earlier reservations about the legality of the model, Uber out-executed its smaller rivals, in part by ignoring regulation while they attempted to change the rules, and became the market leader. Uber also followed the blitzscaling playbook more closely, raising far more money than its rivals, and growing far faster. Lyft managed to become a strong number two. But by 2015, Sidecar was a footnote in history, going out of business after having raised only $35.5 million to Uber’s $7.3 billion and Lyft’s $2 billion. To date, Uber has raised a total of $24.3 billion, and Lyft $4.9 billion.

Hoffman and Yeh embrace this dark pattern as a call to action. Early in their book, Blake, the cynical sales manager played by Alec Baldwin in the movie Glengarry Glen Ross, appears as an oracle dispensing wise advice:

As you all know, first prize is a Cadillac Eldorado. Anyone wanna see second prize? Second prize is a set of steak knives. Third prize is you’re fired. Get the picture?

In the real world, though, while Sunil Paul’s company went out of business, it was Travis Kalanick of Uber who got fired. Stung by scandal after scandal as Uber deceived regulators, spied on passengers, and tolerated a culture of sexual harassment, the board eventually asked for Kalanick’s resignation. Not only that, Uber’s worldwide blitzscaling attempts—competing in ride-hailing not only with Lyft in the U.S. but with Didi in China and with Grab and Ola in Southeast Asia, and with Google on self-driving cars—eventually spread the company too thin, just as Guderian’s blitzkrieg techniques, which had worked so well against France and Poland, failed during the invasion of Russia.

Meanwhile, the forced bloom of Uber’s market share lead became a liability even in the U.S. Even though Uber had far more money, the price war between the two companies cost Uber far more in markets where its share was large and Lyft’s was small. Lyft focused on the U.S. market and began to chip away at Uber’s early lead. It also made significant gains on Uber as passengers and drivers, stung by the sense that Uber was an amoral company, began to abandon the service. Uber is still the larger and more valuable company, and Dara Khosrowshahi, the new CEO, has made enormous progress in stabilizing its business and restoring its reputation. But Lyft’s gains appear to be sustainable.

Blitzscaling—or sustainable scaling?

While Hoffman and Yeh’s book claims that companies like Google, Facebook, Microsoft, Apple, and Amazon are icons of the blitzscaling approach, this idea is plausible only with quite a bit of revisionist history. Each of these companies achieved profitability (or in Amazon’s case, positive cash flow) long before its IPO, and growth wasn’t driven by a blitzkrieg of spending to acquire customers below cost but by breakthrough products and services, and by strategic business model innovations that were rooted in a future the competition didn’t yet understand. These companies didn’t blitzscale; they scaled sustainably.

Google raised only $36 million before its IPO—an amount that earned Sidecar’s Sunil Paul the dismal third prize of going out of business. For that same level of investment, Google was already hugely profitable.

Facebook’s rise to dominance was far more capital-intensive than Google’s. The company raised $2.3 billion before its IPO, but it too was already profitable long before it went public; according to insiders, it ran close to breakeven from fairly early in its life. The money raised was strategic, a way of hedging against risk, and of stepping up the valuation of the company while delaying the scrutiny of a public offering. As Hoffman and Yeh note in their book, in today’s market, “Even if the money doesn’t prove to be necessary, a major financing round can have positive signaling effects—it helps convince the rest of the world that your company is likely to emerge as the market leader.”

Even Amazon, which lost billions before achieving profitability, raised only $108 million in venture capital before its IPO. How was this possible? Bezos realized his business generated enormous positive cash flow that he could borrow against. It was his boldness in taking the risk of borrowing billions (preserving a larger ownership stake for himself and his team than if he had raised billions in equity), not just Amazon’s commitment to growth over profits, that helped make him the world’s richest man today.

In short, none of these companies (except arguably Amazon) followed the path that Hoffman and Yeh lay out as a recipe for today’s venture-backed companies. Venture-backed blitzscaling was far less important to their success than product and business-model innovation, brilliant execution, and relentless strategic focus. Hypergrowth was the result rather than the cause of these companies’ success.

Ironically, Hoffman and Yeh’s book is full of superb practical advice about innovation, execution, and strategic focus, but it’s wrapped in the flawed promise that startups can achieve similar market dominance as these storied companies by force-feeding inefficient growth.

For a company like Airbnb, a company with both strong network effects and a solid path to profitability, blitzscaling is a good strategy. But blitzscaling also enables too many companies like Snap, which managed to go public while still losing enormous amounts of money, making its founders and investors rich while passing on to public market investors the risk that the company will never actually become a profitable business. Like Amazon and Airbnb, some of these companies may become sustainable, profitable businesses and grow into their valuation over time, but as of now, they are still bleeding red ink.

Sustainability may not actually matter, though, according to the gospel of blitzscaling. After all, the book’s marketing copy does not promise the secret of building massively profitable or enduring companies, but merely “massively valuable” ones.

What is meant by value? To too many investors and entrepreneurs, it means building companies that achieve massive financial exits, either by being sold or going public. And as long as the company can keep getting financing, either from private or public investors, the growth can go on.

But is a business really a business if it can’t pay its own way?

Is it a business or a financial instrument?

Benjamin Graham, the father of value investing, is widely reported to have said: “In the short run, the market is a voting machine. In the long run, it’s a weighing machine.” That is, in the short term, investors vote (or more accurately, place bets) on the present value of the future earnings of a company. Over the long term, the market discovers whether they were right in their bets. (That’s the weighing machine.)

But what is happening today is that the market has almost entirely turned into a betting machine. Not only that, it’s a machine for betting on a horse race in which it’s possible to cash your winning ticket long before the race has actually finished. In the past, entrepreneurs got rich when their companies succeeded and were able to sell shares to the public markets. Increasingly, though, investors are allowing insiders to sell their stock much earlier than that. And even when companies do reach the point of a public offering, these days, many of them still have no profits.

According to University of Florida finance professor Jay Ritter, 76% of all IPOs in 2017 were for companies with no profits. By October 2018, the percentage was 83%, exceeding even the 81% seen right before the dotcom bust in 2000.

Would profitless companies with huge scale be valued so highly in the absence of today’s overheated financial markets?

Too many of the companies likely to follow Hoffman and Yeh’s advice are actually financial instruments instead of businesses, designed by and for speculators. The monetization of the company is sought not via the traditional means of accumulated earnings and the value of a continuing business projecting those earnings into the future, but via the exit, that holy grail of today’s Silicon Valley. The hope is that either the company will be snapped up by another company that does have a viable business model but lacks the spark and sizzle of internet-fueled growth, or will pull off a profitless IPO, like Snap or Box.

The horse-race investment mentality has a terrible side effect: companies that are not contenders to win, place, or show are starved of investment. Funding dries up, and companies that could have built a sustainable niche if they’d grown organically go out of business instead. “Go big or go home” results in many companies that once would have been members of a thriving business ecosystem indeed going home. As Hoffman and Yeh put it:

Here is one of the ruthless practices that has helped make Silicon Valley so successful: investors will look at a company that is on an upward trajectory but doesn’t display the proverbial hockey stick of exponential growth and conclude that they need to either sell the business or take on additional risk that might increase the chances of achieving exponential growth... Silicon Valley venture capitalists want entrepreneurs to pursue exponential growth even if doing so costs more money and increases the chances that the business will fail.

Because this blitzscaling model requires raising ever more money in pursuit of the hockey stick venture capitalists are looking for, the entrepreneur’s ownership is relentlessly diluted. Even if the company is eventually sold, unless the company is a breakout hit, most of the proceeds go to investors whose preferred shares must be repaid before the common shares owned by the founders and employees get anything. There are few small wins for the entrepreneur; only the big bets pay off. And, as in Las Vegas, the house always wins.

Bryce Roberts, my partner at O’Reilly AlphaTech Ventures (OATV), recently wrote about the probability of winning big in business:

Timely reminder that the VCs aren’t even in the home run business.

They’re in the grand slam business.

Interestingly, odds of hitting a grand slam (.07%) are uncannily similar to odds of backing a unicorn (.07% of VC backed startups) https://t.co/O0VgCeuAe3

—indievc (@indievc) December 20, 2018

This philosophy has turned venture capitalists into movie studios, financing hundreds of companies in pursuit of the mega-hit that will make their fund, and at its worst turns entrepreneurs into the equivalent of Hollywood actors, moving from one disposable movie to another. (“The Uber of Parking” is sure to be a hit! And how about “the Uber of Dry Cleaning”?)

The losses from the blitzscaling mentality are felt not just by entrepreneurs but by society more broadly. When the traditional venture-capital wisdom is to shutter companies that aren’t achieving hypergrowth, businesses that would once have made meaningful contributions to our economy are not funded, or are starved of further investment once it is clear that they no longer have a hope of becoming a home run.

Winners-take-all is an investment philosophy perfectly suited for our age of inequality and economic fragility, where a few get enormously rich, and the rest get nothing. In a balanced economy, there are opportunities for success at all scales, from the very small, through successful mid-size companies, to the great platforms.

Is Glengarry Glen Ross’s sales competition really the economy we aspire to?

There is another way

There are business models, even in the technology sector, where cash flow from operations can fund the company, not venture capitalists gripping horse-race tickets.

Consider these companies: Mailchimp, funded by $490 million in annual revenue from its 12 million customers, profitable from day one without a penny of venture capital; Atlassian, bootstrapped for eight years before raising capital in 2010 after it had reached nearly $75 million in self-funded annual revenue; and Shutterstock, which took in venture capital only after it had already bootstrapped its way to becoming the largest subscription-based stock photo agency in the world. (In the case of both Atlassian and Shutterstock, outside financing was a step toward liquidity through a public offering, rather than strictly necessary to fund company growth.) All of these companies made their millions through customer-focused products and patience, not blitzscaling.

Jason Fried and David Heinemeier Hansson, the founders of Basecamp, a 20-year-old, privately held, profitable Chicago company whose core product is used by millions of software developers, have gone even further: they entirely abandoned the growth imperative, shedding even successful products to keep their company under 50 people. Their book about their approach, It Doesn’t Have to Be Crazy At Work, should be read as an essential counterpoint to Blitzscaling.

Another story of self-funded growth I particularly like is far from tech. RxBar, a Chicago-based nutrition bar company with $130 million of self-funded annual revenue, was acquired last year by Kellogg for $600 million. Peter Rahal, one of the two founders, recalls that he and co-founder Jared Smith were in his parents’ kitchen, discussing how they would go about raising capital to start their business. His immigrant father said something like, “You guys need to shut the fuck up and just sell a thousand bars.”

And that’s exactly what they did, putting in $5,000 each, and hustling to sell their bars via Crossfit gyms. It was that hustle and bias toward customers, rather than outside funding, that got them their win. Their next breakthrough was in their distinctive “No BS” packaging, which made the ingredients, rather than extravagant claims about them, the centerpiece of the message.

The exit for RxBar, when it came, was not the objective, but a strategy for growing a business that was already working. “Jared and I never designed the business to sell it; we designed it to solve a problem for customers,” Rahal recalled. “In January 2017, Jared and I were sitting around and asked what do we want to do with this business? Do we want to continue and make it a family business? Or do we want to roll it up into a greater company, really scale this thing and take it to the next level? We wanted to go put fire on this thing.”

They could have raised growth capital at that point, like Mike Cannon-Brookes of Atlassian or Jon Oringer of Shutterstock did, but acquisition provided a better path to sustainable impact. Kellogg brought them not just an exit, but additional capabilities to grow their business. Rahal continues to lead the brand at Kellogg, still focusing on customers.

Raise less, own more

The fact that the Silicon Valley blitzscaling model is not suited for many otherwise promising companies has led a number of venture capitalists, including my partner Bryce Roberts at OATV, to develop an approach for finding, inspiring, and financing cash-flow positive companies.

Indie.vc, a project at OATV, has developed a new kind of venture financing instrument. It’s a convertible loan designed to be repaid out of operating cash flow rather than via an exit, but that can convert to equity if the company, having established there is a traditional venture business opportunity, decides to go that route. This optionality effectively takes away the pressure for companies to raise ever more money in pursuit of the hypergrowth that, as Hoffman and Yeh note, traditional venture capitalists are looking for. The program also includes a year of mentorship and networking, providing access to experienced entrepreneurs and experts in various aspects of growing a business.

In the Indie.vc FAQ, Bryce wrote:

We believe deeply that there are hundreds, even thousands, of businesses that could be thriving, at scale, if they focused on revenue growth over raising another round of funding. On average, the companies we’ve backed have increased revenues over 100% in the first 12 months of the program and around 300% after 24 months post-investment. We aim to be the last investment our founders NEED to take. We call this Permissionless Entrepreneurship.

This is a bit like the baseball scouting revolution that Michael Lewis chronicled in Moneyball. While all the other teams were looking for home-run hitters, Oakland A's general manager Billy Beane realized that on-base percentage was a far more important statistic for actually winning. He took that insight all the way from the league basement to the playoffs, winning against far richer teams despite the A's low payroll.

One result of an investment model looking for the equivalent of on-base percentage—that is, the ability to deliver a sustainable business for as little money as possible—is that many entrepreneurs can do far better than they can in the VC blitzscale model. They can build a business that they love, like I did, and continue to operate it for many years. If they do decide to exit, they will own far more of the proceeds.

Even successful swing-for-the-fences VCs like Bill Gurley of Benchmark Capital agree. As Gurley, an early Uber investor and board member, tweeted recently:

100% agree with this article, & have voiced this opinion my whole career. The vast majority of entrepreneurs should NOT take venture capital. Why? Article nails it: it is a binary "swing for the fences" exercise. Bootstrapping more likely to lead to individual financial success. https://t.co/s1mAOKwz6m

— Bill Gurley (@bgurley) January 11, 2019

Indie.vc’s search for profit-seeking rather than exit-seeking companies has also led to a far more diverse venture portfolio, with more than half of the companies led by women and 20% by people of color. (This is in stark contrast to traditional venture capital, where 98% of venture dollars go to men.) Many are from outside the Bay Area or other traditional venture hotbeds. The 2019 Indie.vc tour, in which Roberts looks for startups to join the program, hosts stops in Kansas City, Boise, Detroit, Denver, and Salt Lake City, as well as the obligatory San Francisco, Seattle, New York, and Boston.

Where conventional startup wisdom would suggest that aiming for profits, not rounds of funding, will lead to plodding growth, many of our Indie.vc companies are growing just as fast as those from the early-stage portfolios in our previous OATV funds.

Nice Healthcare is a good example. Its founder, Thompson Aderinkomi, had been down the traditional blitzscaling path with his prior venture and wanted to take a decidedly different approach to funding and scaling his new business. Seven months after its Indie.vc investment, Nice had achieved 400% revenue growth, passed $1 million in annual recurring revenue, and become profitable. All while being run by a black founder in Minneapolis. Now that's a real unicorn! Some of the other fast-growing companies in the Indie.vc portfolio include The Shade Room, Fohr, Storq, re:3d, and Chopshop.

OATV has invested in its share of companies that have gone on to raise massive amounts of capital—Foursquare, Planet, Fastly, Acquia, Signal Sciences, Figma, and Devoted Health for example—but we’ve also funded companies that were geared toward steady growth, profitability, and positive cash flow from operations, like Instructables, SeeClickFix, PeerJ, and OpenSignal. In our earlier funds, though, we were trying to shoehorn these companies into a traditional venture model when what we really needed was a new approach to financing. So many VCs throw companies like these away when they discover they aren’t going to hit the hockey stick. But Roberts kept working on the problem, and now his approach to venture capital is turning into a movement.

A recent New York Times article, “More Startups Have an Unfamiliar Message for Venture Capitalists: Get Lost,” describes a new crop of venture funds with a philosophy similar to Indie.vc. Some entrepreneurs who were funded using the old model are even buying out their investors using debt, like video-hosting company Wistia, or their own profits, like social media management company Buffer.

Sweet Labs, one of OATV’s early portfolio companies, has done the same. With revenues in the high tens of millions, the founders asked themselves why they should pursue risky hypergrowth when they already had a great business they loved and that already had enough profit to make them rich. They offered to buy out their investors at a reasonable multiple of their investment, and the investors agreed, giving back control over the company to its founders and employees. What Indie.vc has done is to build in this optionality from the beginning, reminding founders that an all-or-nothing venture blitzscale is not their only option.

The responsibility of the winners

I’ve talked so far mainly about the investment distortions that blitzscaling introduces. But there is another point I wish Hoffman and Yeh had made in their book.

Assume for a moment that blitzscaling is indeed a recipe for companies to achieve the kind of market dominance that has been achieved by Apple, Amazon, Facebook, Microsoft, and Google. Assume that technology is often a winner-takes-all market, and that blitzscaling is indeed a powerful tool in the arsenal of those in pursuit of the win.

What is the responsibility of the winners? And what happens to those who don’t win?

We live in a global, hyperconnected world. There is incredible value to companies that operate at massive scale. But those companies have responsibilities that go with that scale, and one of those responsibilities is to provide an environment in which other, smaller companies and individuals can thrive. Whether they got there by blitzscaling or other means, many of the internet giants are platforms, something for others to build on top of. Bill Gates put it well in a conversation with Chamath Palihapitiya when Palihapitiya was the head of platform at Facebook: “A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it.”

For every company that pulled off that feat of hypergrowth, there is a dotcom graveyard of hundreds of companies that never figured it out.

The problem with the blitzscaling mentality is that a corporate DNA of perpetual, rivalrous, winner-takes-all growth is fundamentally incompatible with the responsibilities of a platform. Too often, once its hyper-growth period slows, the platform begins to compete with its suppliers and its customers. Gates himself faced (and failed) this moral crisis when Microsoft became the dominant platform of the personal computer era. Google is now facing this same moral crisis, and also failing.

Windows, the web, and smartphones such as the iPhone succeeded as platforms because a critical mass of third-party application developers added value far beyond what a single company, however large, could provide by itself. Nokia and Microsoft were also-rans in the smartphone platform race not just because they couldn’t get customers to buy their phones, but because they couldn’t get enough developers to build applications for them. Likewise, Uber and Lyft need enough drivers to pick people up within a few minutes, wherever they are and whenever they want a ride, and enough passengers to keep all their drivers busy. Google search and Amazon commerce succeed because of all that they help us find or buy from others. Platforms are two-sided marketplaces that have to achieve critical mass on both the buyer and the seller sides.

Yet despite the wisdom Gates expressed in his comments to Palihapitiya about the limitations of Facebook as a platform, he clearly didn’t go far enough in understanding the obligations of a platform owner back when he was Microsoft’s CEO.

Microsoft was founded in 1975, and its operating systems—first MS-DOS, and then Windows—became the platform for a burgeoning personal computer industry, supporting hundreds of PC hardware companies and thousands of software companies. Yet one by one, the most lucrative application categories—word processing, spreadsheets, databases, presentation software—came to be dominated by Microsoft itself.

One by one, the once-promising companies of the PC era—Micropro, Ashton-Tate, Lotus, Borland—went bankrupt or were acquired at bargain-basement prices. Developers, no longer able to see opportunity in the personal computer, shifted their attention to the internet and to open source projects like Linux, Apache, and Mozilla. Having destroyed all its commercial competition, Microsoft sowed the dragon’s teeth, raising up a new generation of developers who gave away their work for free, and who enabled the creation of new kinds of business models outside Microsoft’s closed domain.

The government also took notice. When Microsoft moved to crush Netscape, the darling of the new internet industry, by shipping a free browser as part of its operating system, it had gone too far. In 1994, Microsoft was sued by the U.S. Department of Justice, signed a consent decree that didn’t hold, and was sued again in 1998 for engaging in anti-competitive practices. A final settlement in 2001 gave enough breathing room to the giants of the next era, most notably Google and Amazon, to find their footing outside Microsoft’s shadow.

That story is now repeating itself. I recently did an analysis of Google's public filings since its 2004 IPO. One of the things those filings report is the share of the ad business that comes from ads on Google's own properties (Google Ads) versus from ads it places on its partner sites (AdSense). While Google has continued to grow the business for its partners, the company has grown its own share of the market far, far faster. When Google went public in 2004, 51% of ad revenue came from Google's own search engine, while 49% came from ads on third-party websites served up by Google. But by 2017, revenue from Google properties was up to 82%, with only 18% coming from ads on third-party websites.

Where once advertising was relegated to a second-class position on Google search pages, it now occupies the best real estate. Ads are bigger, they now appear above organic results rather than off to the side, and there are more of them included with each search. Even worse, organic clicks are actually disappearing. In category after category—local search, weather, flights, sports, hotels, notable people, brands and companies, dictionary and thesaurus, movies and TV, concerts, jobs, the best products, stock prices, and more—Google no longer sends people to other sites: it provides the information they are looking for directly in Google. This is very convenient for Google’s users, and very lucrative for Google, but very bad for the long-term economic health of the web.

In a recent talk, SEO expert Rand Fishkin gave vivid examples of the replacement of organic search traffic with “no click” searches (especially on mobile) as Google has shifted from providing links to websites to providing complete answers on the search page itself. Fishkin’s statistical view is even more alarming than his anecdotal evidence. He claims that in February 2016, 58% of Google searches on mobile resulted in organic clicks, and 41% had no clicks. (Some of these may have been abandoned searches, but most are likely satisfied directly in the Google search results.) By February 2018, the number of organic clicks had dropped to 39%, and the number of no click searches had risen to 61%. It isn’t clear what proportion of Google searches his data represents, but it suggests the cannibalization is accelerating.

Google might defend itself by saying that providing information directly in its search results is better for users, especially on mobile devices with much more limited screen real estate. But search is a two-sided marketplace, and Google, now effectively the marketplace owner, needs to look after both sides of the market, not just its users and itself. If Google is not sending traffic to its information suppliers, should it be paying them for their content?

The health of its supplier ecosystem should be of paramount concern for Google. Not only has the company now drawn the same kind of antitrust scrutiny that once dogged Microsoft, it has weakened its own business with a self-inflicted wound that will fester over the long term. As content providers on the web get less traffic and less revenue, they will have fewer resources to produce the content that Google now abstracts into its rich snippets. This will lead to a death spiral in the content ecosystem on which Google depends, much as Microsoft’s extractive dominion over PC software left few companies to develop innovative new applications for the platform.

In his book Hit Refresh, Satya Nadella, Microsoft’s current CEO, reflected on the wrong turn his company had taken:

When I became CEO, I sensed we had forgotten how our talent for partnerships was a key to what made us great. Success caused people to unlearn the habits that made them successful in the first place.

I asked Nadella to expand on this thought in an interview I did with him in April 2017:

The creation myth of Microsoft is what should inspire us. One of the first things the company did, when Bill and Paul got together, is that they built the BASIC interpreter for the ALTAIR. What does that tell us today, in 2017? It tells us that we should build technology so that others can build technology. And in a world which is going to be shaped by technology, in every part of the world, in every sector of the economy, that’s a great mission to have. And, so, I like that, that sense of purpose, that we create technology so that others could create more technology.

Now that they’ve gone back to enabling others, Microsoft is on a tear.

We might ask a similar question: what was the creation myth of Google? In 1998, Larry Page and Sergey Brin set out to “organize the world’s information and make it universally accessible and useful.” Paraphrasing Nadella, what does that tell us today, in 2019? It tells us that Google should build services that help others to create the information that Google can then organize, make accessible, and make more useful. That’s a mission worth blitzscaling for.

Google is now 20 years old. One reason for its extractive behavior is that it is being told (now by Wall Street rather than venture investors) that it is imperative to keep growing. But the greenfield opportunity has gone, and the easiest source of continued growth is cannibalization of the ecosystem of content suppliers that Google was originally created to give users better access to. Growth for growth’s sake seems to have replaced the mission that made Google great.

The true path to prosperity

Let’s circle back to Uber and Lyft as they approach their IPOs. Travis Kalanick and Garrett Camp, the founders of Uber, are serial entrepreneurs who set out to get rich. Logan Green and John Zimmer, the founders of Lyft, are idealists whose vision was to reinvent public transportation. But having raised billions using the blitzscaling model, both companies are subject to the same inexorable logic: they must maximize the return to investors.

This they can do only by convincing the market that their money-losing businesses will be far better in the future than they are today. Their race to monopoly has ended up instead with a money-losing duopoly, where low prices to entice ever more consumers are subsidized by ever more capital. This creates enormous pressure to eliminate costs, including the cost of drivers, by investing even more money in technologies like autonomous vehicles, once again “prioritizing speed over efficiency,” and “risking potentially disastrous defeat” while blitzscaling their way into an unknown future.

Unfortunately, the defeat being risked is not just theirs, but ours. Microsoft and Google began to cannibalize their suppliers only after 20 years of creating value for them. Uber and Lyft are being encouraged to eliminate their driver partners from the get-go. If it were just these two companies, it would be bad enough. But it isn’t. Our entire economy seems to have forgotten that workers are also consumers, and suppliers are also customers. When companies use automation to put people out of work, they can no longer afford to be consumers; when platforms extract all the value and leave none for their suppliers, they are undermining their own long-term prospects. It’s two-sided markets all the way down.

The goal for Lyft and Uber (and for all the entrepreneurs being urged to blitzscale) should be to make their companies more sustainable, not just more explosive—more equitable, not more extractive.

As an industry and as a society, we still have many lessons to learn, and, apologies to Hoffman and Yeh, I fear that how to get better at runaway growth is far from the most important one.

Continue reading The fundamental problem with Silicon Valley’s favorite growth strategy.

Categories: Technology

Velocity 2019 will focus on the rise of cloud native infrastructure

O'Reilly Radar - Wed, 2019/03/20 - 12:55

Organizations that want all of the speed, agility, and savings the cloud provides are embracing a cloud native approach.

Nearly all organizations today are doing some of their business in the cloud, but the push for increased feature performance and reliability has sparked a growing number to embrace a cloud native infrastructure. In Capgemini’s survey of more than 900 executives, adoption of cloud native apps is set to jump from 15% to 32% by 2020. The strong combination of growth in cloud native adoption and the considerable opportunities it creates for organizations is why we’re making cloud native a core theme at the O’Reilly Velocity Conference this year.

What’s the appeal of cloud native? These days consumers demand instant access to services, products, and data across any device, at any time. This 24/7 expectation has changed how companies do business, forcing many to move their infrastructure to the cloud to provide the fast, reliable, always-available access on which we’ve come to rely.

Yet, merely packaging your apps and moving them to the cloud isn’t enough. To harness the cloud’s cost and performance benefits, organizations have found that a cloud native approach is a necessity. Cloud native applications are specifically designed to scale and provision resources on the fly in response to business needs. This lets your apps run efficiently, saving you money. These apps are also more resilient, resulting in less downtime and happier customers. And as you develop and improve your applications, a cloud native infrastructure makes it possible for your company to deploy new features faster, more affordably, and with less risk.

Cloud native considerations

The Cloud Native Computing Foundation (CNCF) defines cloud native as a set of technologies designed to:

...empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

The alternative to being cloud native is to either retain your on-premises infrastructure or merely "lift and shift" your current infrastructure to the cloud. Both options result in your existing applications being stuck with their legacy modes of operation and unable to take advantage of the cloud's built-in benefits.

While “lift and shift” is an option, it’s become clear as enterprises struggle to manage cloud costs and squeeze increased performance from their pipelines that it’s not enough to simply move old architectures to new locations. To remain competitive, companies are being forced to adopt new patterns, such as DevOps and site reliability engineering, and new tools like Kubernetes, for building and maintaining distributed systems that often span multiple cloud providers. Accordingly, use of cloud native applications in production has grown more than 200% since December 2017.

And the number of companies contributing to this space keeps growing. The CNCF, home to popular open source tools like Kubernetes, Prometheus, and Envoy, has grown to 350 members compared to fewer than 50 in early 2016. The community is extremely active—the CNCF had more than 47,000 contributors work on its projects in 2018. "This is clearly a sign that the cloud native space is a place companies are investing in, which means increased demand for resources," said Boris Scholl, product architect for Microsoft Azure, in a recent conversation.

But going cloud native is not all sunshine and roses; it’s hard work. The systems are inherently complex, difficult to monitor and troubleshoot, and require new tools that are constantly evolving and not always easy to learn. Vendor lock-in is a concern as well, causing many companies to adopt either a multi-cloud approach (where they work with more than one public cloud vendor) or a hybrid cloud approach (a combination of on-premises private cloud and third-party public cloud infrastructure, managed as one), which adds complexity in exchange for flexibility. Applications that are developed specifically to take advantage of one cloud provider’s infrastructure are not very portable.

The challenges are not all technical, either. Going cloud native requires new patterns of working and new methods of collaborating, such as DevOps and site reliability engineering. To be successful, these shifts need buy-in from every part of the business.

In Solstice’s Cloud Native Forecast for 2019, the authors highlight the challenges of change as a top trend facing the cloud community this year. “One of the most challenging aspects of cloud-native modernization is transforming an organization’s human capital and culture,” according to the report. “This can involve ruthless automation, new shared responsibilities between developers and operations, pair programming, test-driven development, and CI/CD. For many developers, these changes are simply hard to implement.”

Cloud native and the evolution of the O’Reilly Velocity Conference

We know businesses are turning to cloud native infrastructure because it helps them meet and exceed the expectations of their customers. We know cloud native methods and tools are expanding and maturing. And we know adoption of cloud native infrastructure is not an easy task. These factors mean systems engineers and operations professionals—the audience Velocity serves—are being asked to learn new techniques and best practices for building and managing the cloud native systems their companies need.

Evolving toward cloud native is a natural step for Velocity because it has a history of shifting as technology shifts. The event's original focus on WebOps grew to encompass a broader audience: systems engineers. Our community today has emerged from their silos to take part in cross-functional teams, building and maintaining far more interconnected, distributed systems, most of which are hosted, at least in part, on the cloud. Our attendees have experienced first-hand the raft of new challenges and opportunities around performance, security, and reliability in building cloud native systems.

At Velocity, our mission is to provide our audience with the educational resources and industry connections they need to successfully build and maintain modern systems, which means turning the spotlight to cloud native infrastructure. We hope you’ll join us as we explore cloud native in depth at our 2019 events in San Jose (June 10-13, 2019) and Berlin (November 4-7, 2019).

Continue reading Velocity 2019 will focus on the rise of cloud native infrastructure.

Categories: Technology

Proposals for model vulnerability and security

O'Reilly Radar - Wed, 2019/03/20 - 11:50

Apply fair and private models, white-hat and forensic model debugging, and common sense to protect machine learning models from malicious actors.

Like many others, I’ve known for some time that machine learning models themselves could pose security risks. A recent flourish of posts and papers has outlined the broader topic, listed attack vectors and vulnerabilities, started to propose defensive solutions, and provided the necessary framework for this post. The objective here is to brainstorm on potential security vulnerabilities and defenses in the context of popular, traditional predictive modeling systems, such as linear and tree-based models trained on static data sets. While I’m no security expert, I have been following the areas of machine learning debugging, explanations, fairness, interpretability, and privacy very closely, and I think many of these techniques can be applied to attack and defend predictive modeling systems.

In hopes of furthering discussions between actual security experts and practitioners in the applied machine learning community (like me), this post will put forward several plausible attack vectors for a typical machine learning system at a typical organization, propose tentative defensive solutions, and discuss a few general concerns and potential best practices.

1. Data poisoning attacks

Data poisoning refers to someone systematically changing your training data to manipulate your model's predictions. (Data poisoning attacks have also been called "causative" attacks.) To poison data, an attacker must have access to some or all of your training data. And at many companies, many different employees, consultants, and contractors have just that—and with little oversight. It's also possible a malicious external actor could acquire unauthorized access to some or all of your training data and poison it. A very direct kind of data poisoning attack might involve altering the labels of a training data set. So, whatever the commercial application of your model is, the attacker could dependably benefit from your model's predictions—for example, by altering labels so your model learns to award large loans, large discounts, or small insurance premiums to people like themselves. (Forcing your model to make a false prediction for the attacker's benefit is sometimes called a violation of your model's "integrity".) It's also possible that a malicious actor could use data poisoning to train your model to intentionally discriminate against a group of people, depriving them of the big loan, big discount, or low premiums they rightfully deserve. This is like a denial-of-service (DOS) attack on your model itself. (Forcing your model to make a false prediction to hurt others is sometimes called a violation of your model's "availability".) While it might be simpler to think of data poisoning as changing the values in the existing rows of a data set, data poisoning can also be conducted by adding seemingly harmless or superfluous columns onto a data set. Altered values in these columns could then trigger altered model predictions.

Now, let’s discuss some potential defensive and forensic solutions for data poisoning:

  • Disparate impact analysis: Many banks already undertake disparate impact analysis for fair lending purposes to determine if their model is treating different types of people in a discriminatory manner. Many other organizations, however, aren't yet so evolved. Disparate impact analysis could potentially discover intentional discrimination in model predictions. There are several great open source tools for detecting discrimination and disparate impact analysis, such as Aequitas, Themis, and AIF360.
  • Fair or private models: Models such as learning fair representations (LFR) and private aggregation of teacher ensembles (PATE) try to focus less on individual demographic traits to make predictions. These models may also be less susceptible to discriminatory data poisoning attacks.
  • Reject on Negative Impact (RONI): RONI is a technique that removes rows of data from the training data set that decrease prediction accuracy. See “The Security of Machine Learning” in section 8 for more information on RONI.
  • Residual analysis: Look for strange, prominent patterns in the residuals of your model predictions, especially for employees, consultants, or contractors.
  • Self-reflection: Score your models on your employees, consultants, and contractors and look for anomalously beneficial predictions.

Disparate impact analysis, residual analysis, and self-reflection can be conducted at training time and as part of real-time model monitoring activities.
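
To make one of these defenses concrete, here is a minimal, illustrative sketch of a RONI-style screen in Python. It assumes scikit-learn-style estimators and NumPy arrays, uses a simple leave-one-out variant rather than the exact published RONI procedure, and the function name and threshold are placeholders of my own; refitting the model once per candidate row is also far too slow for large training sets.

```python
# RONI-style screen (illustrative leave-one-out variant, not the exact published procedure):
# flag training rows whose presence lowers accuracy on a trusted validation set.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def roni_flag(estimator, X_train, y_train, X_valid, y_valid, threshold=0.005):
    """Return indices of training rows suspected of poisoning the model."""
    base = clone(estimator).fit(X_train, y_train)
    base_acc = accuracy_score(y_valid, base.predict(X_valid))
    flagged = []
    for i in range(len(X_train)):
        mask = np.ones(len(X_train), dtype=bool)
        mask[i] = False  # drop one candidate row and refit
        reduced = clone(estimator).fit(X_train[mask], y_train[mask])
        reduced_acc = accuracy_score(y_valid, reduced.predict(X_valid))
        if reduced_acc - base_acc > threshold:
            flagged.append(i)  # accuracy improves without this row: suspicious
    return flagged

# Hypothetical usage with any scikit-learn classifier and NumPy arrays:
# suspects = roni_flag(LogisticRegression(max_iter=1000),
#                      X_train, y_train, X_valid, y_valid)
```

Rows flagged this way are candidates for manual review, not automatic deletion, since a legitimate but unusual customer can look a lot like a poisoned row.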

2. Watermark attacks

Watermarking is a term borrowed from the deep learning security literature that often refers to putting special pixels into an image to trigger a desired outcome from your model. It seems entirely possible to do the same with customer or transactional data. Consider a scenario where an employee, consultant, contractor, or malicious external actor has access to your model's production scoring code—the code that makes real-time predictions. Such an individual could change that code to recognize a strange or unlikely combination of input variable values to trigger a desired prediction outcome. Like data poisoning, watermark attacks can be used to attack your model's integrity or availability. For instance, to attack your model's integrity, a malicious insider could insert a payload into your model's production scoring code that recognizes the combination of age of 0 and years at an address of 99 to trigger some kind of positive prediction outcome for themselves or their associates. To deny model availability, an attacker could insert an artificial, discriminatory rule into your model's scoring code that prevents your model from producing positive outcomes for a certain group of people.

Defensive and forensic approaches for watermark attacks might include:

  • Anomaly detection: Autoencoders are a type of model, commonly used in fraud detection, that can identify input data that is strange or unlike other input data, even when the differences are subtle and spread across many variables. Autoencoders could potentially catch any watermarks used to trigger malicious mechanisms.
  • Data integrity constraints: Many databases don’t allow for strange or unrealistic combinations of input variables and this could potentially thwart watermarking attacks. Applying data integrity constraints on live, incoming data streams could have the same benefits.
  • Disparate impact analysis: see section 1.
  • Version control: Production model scoring code should be managed and version-controlled—just like any other mission-critical software asset.

Anomaly detection, data integrity constraints, and disparate impact analysis can be used at training time and as part of real-time model monitoring activities.
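
As a rough illustration of the data integrity idea, the sketch below applies a few hand-written constraints to incoming scoring rows. The column names and rules are hypothetical examples, not taken from any particular system, and a production implementation would more likely enforce these checks in the database or feature pipeline itself.

```python
# Simple integrity checks on incoming scoring data (hypothetical columns and rules).
import pandas as pd

INTEGRITY_RULES = [
    ("age under 16 with nonzero years_at_address",
     lambda df: (df["age"] < 16) & (df["years_at_address"] > 0)),
    ("years_at_address exceeds age",
     lambda df: df["years_at_address"] > df["age"]),
]

def flag_integrity_violations(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows violating any rule, with the reasons attached."""
    reasons = pd.Series("", index=df.index)
    for description, rule in INTEGRITY_RULES:
        hits = rule(df)
        reasons[hits] = reasons[hits] + description + "; "
    violations = df[reasons != ""].copy()
    violations["integrity_violation"] = reasons[reasons != ""]
    return violations

# Flagged rows can be held for manual review instead of being scored automatically.
```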

3. Inversion by surrogate models

Inversion basically refers to getting unauthorized information out of your model—as opposed to putting information into your model. Inversion can also be an example of an “exploratory reverse-engineering” attack. If an attacker can receive many predictions from your model API or other endpoint (website, app, etc.), they can train their own surrogate model. In short, that’s a simulation of your very own predictive model! An attacker could conceivably train a surrogate model between the inputs they used to generate the received predictions and the received predictions themselves. Depending on the number of predictions they can receive, the surrogate model could become quite an accurate simulation of your model. Once the surrogate model is trained, then the attacker has a sandbox from which to plan impersonation (i.e., “mimicry”) or adversarial example attacks against your model’s integrity, or the potential ability to start reconstructing aspects of your sensitive training data. Surrogate models can also be trained using external data sources that can be somehow matched to your predictions, as ProPublica famously did with the proprietary COMPAS recidivism model.

To protect your model against inversion by surrogate model, consider the following approaches:

  • Authorized access: Require additional authentication (e.g., 2FA) to receive a prediction.
  • Throttle predictions: Restrict high numbers of rapid predictions from single users; consider artificially increasing prediction latency.
  • White-hat surrogate models: As a white-hat hacking exercise, train your own surrogate models between your inputs and the predictions of your production model (a minimal code sketch follows this list) and carefully observe:
    • the accuracy bounds of different types of white-hat surrogate models; try to understand the extent to which a surrogate model can really be used to learn unfavorable knowledge about your model.
    • the types of data trends that can be learned from your white-hat surrogate model, like linear trends represented by linear model coefficients.
    • the types of segments or demographic distributions that can be learned by analyzing the number of individuals assigned to certain white-hat surrogate decision tree nodes.
    • the rules that can be learned from a white-hat surrogate decision tree—for example, how to reliably impersonate an individual who would receive a beneficial prediction.
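
Here is a minimal sketch of that white-hat exercise. It assumes you can call your own production model's prediction function (production_predict is a stand-in name) and that a shallow decision tree is a reasonable surrogate; the goal is to measure how faithfully a simple model can mimic yours, not to build the strongest possible attack.

```python
# White-hat surrogate: train a shallow tree to mimic your own production model.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_whitehat_surrogate(X, production_predict, max_depth=4):
    """Fit a surrogate tree on (inputs, production predictions) and report its fidelity."""
    y_hat = production_predict(X)  # the labels are your model's own outputs
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_hat, test_size=0.3, random_state=0)
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X_tr, y_tr)
    fidelity = accuracy_score(y_te, surrogate.predict(X_te))  # how well it mimics you
    return surrogate, fidelity

# Hypothetical usage:
# surrogate, fidelity = fit_whitehat_surrogate(X_sample, my_model.predict)
# print(f"surrogate fidelity: {fidelity:.1%}")
# print(export_text(surrogate))  # the rules an attacker could plausibly recover
```

If a four-level tree can reproduce most of your model's decisions, it is prudent to assume an attacker with API access can do at least as well.
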
4. Adversarial example attacks

A motivated attacker could theoretically learn, say by trial and error (i.e., “exploration” or “sensitivity analysis”), surrogate model inversion, or by social engineering, how to game your model to receive their desired prediction outcome or to avoid an undesirable prediction. Carrying out an attack by specifically engineering a row of data for such purposes is referred to as an adversarial example attack. (Sometimes also known as an “exploratory integrity” attack.) An attacker could use an adversarial example attack to grant themselves a large loan or a low insurance premium or to avoid denial of parole based on a high criminal risk score. Some people might call using adversarial examples to avoid an undesirable outcome from your model prediction “evasion.”

Try out the techniques outlined below to defend against or to confirm an adversarial example attack:

  • Activation analysis: Activation analysis requires benchmarking internal mechanisms of your predictive models, such as the average activation of neurons in your neural network or the proportion of observations assigned to each leaf node in your random forest. You then compare that information against your model’s behavior on incoming, real-world data streams. As one of my colleagues put it, “this is like seeing one leaf node in a random forest correspond to 0.1% of the training data but hit for 75% of the production scoring rows in an hour.” Patterns like this could be evidence of an adversarial example attack.
  • Anomaly detection: see section 2.
  • Authorized access: see section 3.
  • Benchmark models: Use a highly transparent benchmark model when scoring new data in addition to your more complex model. Interpretable models could be seen as harder to hack because their mechanisms are directly transparent. When scoring new data, compare your new fancy machine learning model against a trusted, transparent model or a model trained on a trusted data source and pipeline. If the difference between your more complex and opaque machine learning model and your interpretable or trusted model is too great, fall back to the predictions of the conservative model or send the row of data for manual processing. Also record the incident. It could be an adversarial example attack.
  • Throttle predictions: see section 3.
  • White-hat sensitivity analysis: Use sensitivity analysis to conduct your own exploratory attacks to understand what variable values (or combinations thereof) can cause large swings in predictions. Screen for these values, or combinations of values, when scoring new data. You may find the open source package cleverhans helpful for any white-hat exploratory analyses you conduct.
  • White-hat surrogate models: see section 3.

Activation analysis and benchmark models can be used at training time and as part of real-time model monitoring activities.
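
Below is a minimal sketch of the benchmark-model comparison for a binary classifier. It assumes both models expose a scikit-learn-style predict_proba, and the 0.25 disagreement threshold is an arbitrary placeholder you would tune for your own application.

```python
# Compare a complex model against a trusted, transparent benchmark at scoring time.
import numpy as np

def flag_disagreements(complex_model, benchmark_model, X_new, threshold=0.25):
    """Return indices of rows where the two models disagree by more than threshold."""
    p_complex = complex_model.predict_proba(X_new)[:, 1]
    p_benchmark = benchmark_model.predict_proba(X_new)[:, 1]
    gap = np.abs(p_complex - p_benchmark)
    return np.where(gap > threshold)[0], gap

# Hypothetical usage:
# flagged, gap = flag_disagreements(gbm, logistic_benchmark, X_incoming)
# Flagged rows could fall back to the benchmark's prediction or be routed
# for manual review, and the incident should be logged either way.
```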

5. Impersonation

A motivated attacker can learn—say, again, by trial and error, surrogate model inversion, or social engineering—what type of input or individual receives a desired prediction outcome. The attacker can then impersonate this input or individual to receive their desired prediction outcome from your model. (Impersonation attacks are sometimes also known as “mimicry” attacks and resemble identity theft from the model’s perspective.) Like an adversarial example attack, an impersonation attack involves artificially changing the input data values to your model. Unlike an adversarial example attack, where a potentially random-looking combination of input data values could be used to trick your model, impersonation implies using the information associated with another modeled entity (i.e., convict, customer, employee, financial transaction, patient, product, etc.) to receive the prediction your model associates with that type of entity. For example, an attacker could learn what characteristics your model associates with awarding large discounts, like comping a room at a casino for a big spender, and then falsify their information to receive the same discount. They could also share their strategy with others, potentially leading to large losses for your company.

If you are using a two-stage model, be aware of an “allergy” attack. This is where a malicious actor may impersonate a normal row of input data for the first stage of your model in order to attack the second stage of your model.

Defensive and forensic approaches for impersonation attacks may include:

  • Activation analysis: see section 4.
  • Authorized access: see section 3.
  • Screening for duplicates: At scoring time track the number of similar records your model is exposed to, potentially in a reduced-dimensional space using autoencoders, multidimensional scaling (MDS), or similar dimension reduction techniques. If too many similar rows are encountered during some time span, take corrective action.
  • Security-aware features: Keep a feature in your pipeline, say num_similar_queries, that may be useless when your model is first trained or deployed but could be populated at scoring time (or during future model retrainings) to make your model or your pipeline security-aware. For instance, if at scoring time the value of num_similar_queries is greater than zero, the scoring request could be sent for human oversight. In the future, when you retrain your model, you could teach it to give input data rows with high num_similar_queries values negative prediction outcomes.

Activation analysis, screening for duplicates, and security-aware features can be used at training time and as part of real-time model monitoring activities.
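
The duplicate-screening and num_similar_queries ideas can be prototyped in a few lines. The sketch below is only a static illustration: it assumes numeric features, substitutes PCA for an autoencoder as the dimension reduction step, and uses arbitrary radius and component settings; a real deployment would also update the reference set as new scoring rows arrive.

```python
# Count near-duplicate scoring rows in a reduced-dimensional space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

class DuplicateScreen:
    """Screen incoming rows against recently seen rows for suspicious similarity."""
    def __init__(self, X_reference, n_components=5, radius=0.1):
        self.pca = PCA(n_components=n_components).fit(X_reference)
        reduced = self.pca.transform(X_reference)
        self.nn = NearestNeighbors(radius=radius).fit(reduced)

    def num_similar_queries(self, X_new):
        """Return, for each new row, how many reference rows fall within the radius."""
        Z = self.pca.transform(X_new)
        neighbors = self.nn.radius_neighbors(Z, return_distance=False)
        return np.array([len(idx) for idx in neighbors])

# Hypothetical usage:
# screen = DuplicateScreen(X_recent_scoring_rows)
# counts = screen.num_similar_queries(X_incoming)  # feed into a num_similar_queries feature
```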

6. General concerns

Several common machine learning usage patterns also present more general security concerns.

Blackboxes and unnecessary complexity: Although recent developments in interpretable models and model explanations have provided the opportunity to use accurate and also transparent nonlinear classifiers and regressors, many machine learning workflows are still centered around blackbox models. Such blackbox models are only one type of often unnecessary complexity in a typical commercial machine learning workflow. Other examples of potentially harmful complexity could be overly exotic feature engineering or large numbers of package dependencies. Such complexity can be problematic for at least two reasons:

  1. A dedicated, motivated attacker can, over time, learn more about your overly complex blackbox modeling system than you or your team knows about your own model. (Especially in today’s overheated and turnover-prone data “science” market.) To do so, they can use many newly available model-agnostic explanation techniques and old-school sensitivity analysis, among many other more common hacking tools. This knowledge imbalance can potentially be exploited to conduct the attacks described in sections 1 – 5 or for other yet unknown types of attacks.
  2. Machine learning in the research and development environment is highly dependent on a diverse ecosystem of open source software packages. Some of these packages have many, many contributors and users. Some are highly specific and only meaningful to a small number of researchers or practitioners. It’s well understood that many packages are maintained by brilliant statisticians and machine learning researchers whose primary focus is mathematics or algorithms, not software engineering, and certainly not security. It’s not uncommon for a machine learning pipeline to be dependent on dozens or even hundreds of external packages, any one of which could be hacked to conceal an attack payload.

Distributed systems and models: For better or worse, we live in the age of big data. Many organizations are now using distributed data processing and machine learning systems. Distributed computing can provide a broad attack surface for a malicious internal or external actor in the context of machine learning. Data could be poisoned on only one or a few worker nodes of a large distributed data storage or processing system. A back door for watermarking could be coded into just one model of a large ensemble. Instead of debugging one simple data set or model, now practitioners must examine data or models distributed across large computing clusters.

Distributed denial of service (DDOS) attacks: If a predictive modeling service is central to your organization’s mission, ensure you have at least considered more conventional distributed denial of service attacks, where attackers hit the public-facing prediction service with an incredibly high volume of requests to delay or stop predictions for legitimate users.

7. General solutions

Several older and newer general best practices can be employed to decrease your security vulnerabilities and to increase fairness, accountability, transparency, and trust in machine learning systems.

Authorized access and prediction throttling: Standard safeguards such as additional authentication and throttling may be highly effective at stymieing a number of the attack vectors described in sections 1–5.

Benchmark models: An older or trusted interpretable modeling pipeline, or other highly transparent predictor, can be used as a benchmark model from which to measure whether a prediction was manipulated by any number of means. This could include data poisoning, watermark attacks, or adversarial example attacks. If the difference between your trusted model's prediction and your more complex and opaque model's predictions is too large, record these instances. Refer them to human analysts or take other appropriate forensic or remediation steps. (Of course, serious precautions must be taken to ensure your benchmark model and pipeline remain secure and unchanged from their original, trusted state.)

Interpretable, fair, or private models: Techniques now exist (e.g., monotonic GBMs (M-GBM), scalable Bayesian rule lists (SBRL), eXplainable Neural Networks (XNN)) that allow for both accuracy and interpretability. These accurate and interpretable models are easier to document and debug than classic machine learning blackboxes. Newer types of fair and private models (e.g., LFR, PATE) can also be trained to essentially care less about outwardly visible demographic characteristics that can be observed, socially engineered into an adversarial example attack, or impersonated. Are you considering creating a new machine learning workflow in the future? Think about basing it on lower-risk, interpretable, private, or fair models. Models like this are more easily debugged and potentially robust to changes in an individual entity's characteristics.

Model debugging for security: The newer field of model debugging is focused on discovering errors in machine learning model mechanisms and predictions, and remediating those errors. Debugging tools such as surrogate models, residual analysis, and sensitivity analysis can be used in white-hat exercises to understand your own vulnerabilities or for forensic exercises to find any potential attacks that may have occurred or be occurring.

Model documentation and explanation techniques: Model documentation is a risk-mitigation strategy that has been used for decades in banking. It allows knowledge about complex modeling systems to be preserved and transferred as teams of model owners change over time. Model documentation has been traditionally applied to highly transparent linear models. But with the advent of powerful, accurate explanatory tools (such as tree SHAP and derivative-based local feature attributions for neural networks), pre-existing blackbox model workflows can be at least somewhat explained, debugged, and documented. Documentation should obviously now include all security goals, including any known, remediated, or anticipated security vulnerabilities.

Model monitoring and management explicitly for security: Serious practitioners understand most models are trained on static snapshots of reality represented by training data and that their prediction accuracy degrades in real time as present realities drift away from the past information captured in the training data. Today, most model monitoring is aimed at discovering this drift in input variable distributions that will eventually lead to accuracy decay. Model monitoring should now likely be designed to monitor for the attacks described in sections 1 – 5 and any other potential threats your white-hat model debugging exercises uncover. (While not always directly related to security, my opinion is that models should also be evaluated for disparate impact in real time as well.) Along with model documentation, all modeling artifacts, source code, and associated metadata need to be managed, versioned, and audited for security like the valuable commercial assets they are.

Security-aware features: Features, rules, and pre- or post-processing steps can be included in your models or pipelines that are security-aware, such as the number of similar rows seen by the model, whether the current row represents an employee, contractor, or consultant, or whether the values in the current row are similar to those found in white-hat adversarial example attacks. These features may or may not be useful when a model is first trained. But keeping a placeholder for them when scoring new data, or when retraining future iterations of your model, may come in very handy one day.

Systemic anomaly detection: Train an autoencoder–based anomaly detection metamodel on your entire predictive modeling system’s operating statistics—the number of predictions in some time period, latency, CPU, memory, and disk loads, the number of concurrent users, and everything else you can get your hands on—and then closely monitor this metamodel for anomalies. An anomaly could tip you off that something is generally not right in your predictive modeling system. Subsequent investigation or specific mechanisms would be needed to trace down the exact problem.
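
A rough sketch of such a metamodel appears below. It uses an MLPRegressor trained to reproduce its own inputs as a stand-in for a true autoencoder, assumes a table of per-time-window operating statistics, and flags the worst 1% of reconstruction errors; all of these are illustrative choices rather than recommendations.

```python
# Autoencoder-style metamodel over system operating statistics (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def fit_ops_metamodel(stats, percentile=99.0):
    """stats: rows = time windows; columns = request counts, latency, CPU, memory, users, ..."""
    scaler = StandardScaler().fit(stats)
    Z = scaler.transform(stats)
    # A narrow hidden layer forces the model to learn a compressed picture of "normal."
    ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(Z, Z)
    errors = np.mean((ae.predict(Z) - Z) ** 2, axis=1)
    threshold = np.percentile(errors, percentile)
    return scaler, ae, threshold

def is_anomalous(scaler, ae, threshold, new_stats):
    """Flag time windows whose reconstruction error exceeds the training threshold."""
    Z = scaler.transform(new_stats)
    errors = np.mean((ae.predict(Z) - Z) ** 2, axis=1)
    return errors > threshold
```

An alert from a metamodel like this says only that the system's behavior looks unusual; the attack-specific defenses above suggest where to look next.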

8. References and further reading

A lot of the contemporary academic machine learning security literature focuses on adaptive learning, deep learning, and encryption. However, I don’t know many practitioners who are actually doing these things yet. So, in addition to recently published articles and blogs, I found papers from the 1990s and early 2000s about network intrusion, virus detection, spam filtering, and related topics to be helpful resources as well. If you’d like to learn more about the fascinating subject of securing machine learning models, here are the main references—past and present—that I used for this post. I’d recommend them for further reading, too.

Conclusion

I care very much about the science and practice of machine learning, and I am now concerned that the threat of a terrible machine learning hack, combined with growing concerns about privacy violations and algorithmic discrimination, could increase burgeoning public and political skepticism about machine learning and AI. We should all be mindful of AI winters in the not-so-distant past. Security vulnerabilities, privacy violations, and algorithmic discrimination could all potentially combine to lead to decreased funding for machine learning research or draconian over-regulation of the field. Let’s continue discussing and addressing these important problems to preemptively prevent a crisis, as opposed to having to reactively respond to one.

Acknowledgements

Thanks to Doug Deloy, Dmitry Larko, Tom Kraljevic, and Prashant Shuklabaidya for their insightful comments and suggestions.

Continue reading Proposals for model vulnerability and security.

Categories: Technology

Four short links: 20 March 2019

O'Reilly Radar - Wed, 2019/03/20 - 03:55

Embedded Computer Vision, Unix History, Unionizing Workforce, and Text Adventure AI

  1. SOD -- an embedded, modern cross-platform computer vision and machine learning software library that exposes a set of APIs for deep learning, advanced media analysis and processing, including real-time, multi-class object detection and model training on embedded systems with limited computational resource and IoT devices. Open source.
  2. Unix History Repo -- Continuous Unix commit history from 1970 until today.
  3. Kickstarter's Staff is Unionizing -- early days for the union, but I'm keen to see how this plays out. (I'm one of the founding signatories to the Aotearoa Tech Union, though our countries have different workplace laws.)
  4. Textworld -- Microsoft Research project, it's an open source, extensible engine that both generates and simulates text games. You can use it to train reinforcement learning (RL) agents to learn skills such as language understanding and grounding, combined with sequential decision-making. Cue "Microsoft teaches AI to play Zork" headlines. And they have a competition.

Continue reading Four short links: 20 March 2019.

Categories: Technology

Four short links: 19 March 2019

O'Reilly Radar - Tue, 2019/03/19 - 04:05

Digital Life, Information Abundance, Quantum Computing, Language Design

  1. Timeliner -- All your digital life on a single timeline, stored locally. Great idea; I hope its development continues.
  2. What's Wrong with Blaming "Information" for Political Chaos (Cory Doctorow) -- a response to yesterday's "What The Hell is Going On?" link. I think Perell is wrong. His theory omits the most salient, obvious explanation for what's going on (the creation of an oligarchy that has diminished the efficacy of public institutions and introduced widespread corruption in every domain), in favor of rationalizations that let the wealthy and their enablers off the hook, converting a corrupt system with nameable human actors who have benefited from it and who spend lavishly to perpetuate it into a systemic problem that emerges from a historical moment in which everyone is blameless, prisoners of fate and history. I think it's both: we have far more of every medium than we can consume because the information industrial engines are geared to production and distraction, not curation for quality. This has crippled the internet's ability to be a fightback mechanism. My country's recent experiences with snuff videos and white supremacist evangelicals don't predispose me to think as Perell does that the deluge of undifferentiated information is a marvelous thing, so I think Cory and I have a great topic of conversation the next time we're at the same conference together.
  3. Quantum Computing for the Very Curious (Michael Nielsen) -- an explanation of quantum computing with built-in spaced repetition testing of key concepts. Clever!
  4. 3 Things I Wish I Knew When I Began Designing Languages (Peter Alvaro) -- when I presented my job talk at Harvard, a systems researcher who I admire very much said something along the lines of, "Yes, this kind of reminds me of a Racket, and in Racket everything is a parenthesis. So, in your language, what is the thing that is everything that I don't buy?" That was nice.

Continue reading Four short links: 19 March 2019.

Categories: Technology
