The future of work goes beyond the predictions of the World Economic Forum

O'Reilly Radar - Thu, 2018/09/27 - 03:00

The World Economic Forum’s 2018 jobs report limits research to a narrow range of the workforce.

The World Economic Forum (WEF) has released a 147-page report on the future of jobs. With this document, one of the leading think tanks has weighed in on the great debate of our time: Will machines take over our work? And more broadly, how can society make sure that automation benefits more than a handful of managers and tech experts?

In this article, while summarizing the WEF's key recommendation, I'll point out its limitations, springing from the constraints that the authors place on themselves. This will lead to a list of topics worth further investigation: the prospects for the emergence of new industries, the importance of the non-profit sector, and impacts on the informal economy. I'll also touch on geographic impacts and the willingness of companies to tolerate the innovative employees who supposedly will drive change.

Message to employers: Train your staff

The WEF firmly endorses the optimistic side of the current debate: they predict that the massive job loss caused by automation and big data will be offset by even more jobs created in new, creative, intellectually demanding fields. This is encouraging (although I was disappointed at first, hoping to live off a monthly bank deposit and devote myself to the piano and poetry), but WEF also highlights the resulting "reskilling imperative": the importance of retraining workers and training young people to be agile. Table 3 on page 9 of the report divides roles into those that are stable, new, and redundant ("pick up your check on the way out"). Most people today can find their occupations in the redundant column.

Are corporations up to the task? The WEF points out that they currently offer training opportunities to their best-trained and most privileged staff (page 14), and asks corporations to offer training to the people who need it most. I worry that managers won't do this. We can already see the mad scramble for scarce staff in fields as different as statistics and nursing. Everyone loses in such a situation; even the highly sought-after employees experience the stress of overwork.

The report doesn't address society's role in education. Many countries have excellent educational systems, and some (especially in Scandinavia) are noted for retraining adults. The kinds of jobs being created in the automated economy call for lots of training—how long does it take to become a data scientist or robotics engineer?—but companies expect only 10% of their employees to require more than one year of reskilling (page 13). Apparently, society will have to pick up the task. I will return to the question of education later in this article.

Where the WEF stopped short

It's important to see the constraints that WEF placed on themselves when producing this report. They just asked the executives of current companies in 12 categories—mostly big companies—to talk about the jobs they expect to hire or train for. This is why the report failed to address public education. The direction chosen also made it impossible to consider new companies that will arise. Finally, the WEF didn't try to inquire about new job opportunities that might arise totally outside the current corporate structure: nonprofits and the notorious "informal sector." I think this is where people without corporate prospects will gravitate, and society needs to support that.

Thus, the number of new jobs required in an automation economy could be far greater than even the optimistic numbers predicted by the WEF. The following sections of the article look at each area.

We must remember that the economy can be disrupted by unexpected events such as war, along with the completely expected blows of climate change. As I write this, Moody's Analytics predicts at least $17 billion of damage from our current disaster, Hurricane Florence, and costs may rise as high as $50 billion. With current catastrophes ranging from drought in Europe to mudslides in the Philippines, we have to worry that our economy is in for serious trouble, along with all humans and other forms of life. But in this article I will do what most economists do, and just project current trends forward.

New companies

Areas that barely existed a decade ago, such as analytics for health care and 3D printing, are now teeming with promising new ventures. Some are quickly absorbed by large, established companies, while others survive by offering services to those companies. But job growth will happen in this area. Start-ups exacerbate the skills gap, though, because they lack the resources that large companies have to retrain people. Furthermore, established companies will buy or contract with the new companies to avoid spending money on reskilling their own employees. This way of sidestepping their responsibilities could reinforce the dreaded layoffs and exits from the workforce we all want to avoid.

Nonprofits and quality-of-life jobs

If automation truly relieves humans of repetitive and unpleasant work, we can expect a growth in "quality of life" occupations: wellness centers, adult education, travel, the arts, and so on. Some of these occupations are revenue-generating and can support themselves, but a lot of them—particularly education, health care, and the arts—require subsidization. Governments and companies may invest directly, as they do with primary and secondary education and with health care, or the investment may come through complex channels like donations to universities and arts centers. But however investment happens, the non-profit sector requires it. This is not addressed by the WEF.

Two factors make these issues particularly relevant to the future of work. First, the automated economy requires education and mental health services: education for the technical skills workers will need, and mental health to prepare them for delicate interactions with other people. Workers will also need to learn how to interact with robots (this is not a joke).

And these necessities raise serious problems because elites today fail to appreciate the importance of funding the non-profit sector. This skepticism is partly rational, because one can point to plenty of poorly conceived projects and job padding. But I believe that the private sector contains lots of waste as well. We must not punish the non-profit sector in general for its lapses.

Travel and the arts, which may be for-profit or non-profit, are important for a different reason. They represent an income transfer from the affluent to the less affluent, and many areas of the world depend on that income. A healthy economy has to take them into account, even though they are non-productive.

The WEF looked at prospects for jobs in different countries and regions of the world, but did not break down geography further. One of the pressing trends throughout the world is the concentration of jobs in a few entrepreneurial cities, with other areas emptying out or suffering "epidemics of despair." The WEF didn't offer remedies for the well-known drain of talent from rural and inner regions to coastal cities with attractive living conditions. Automation will exacerbate these effects unless society's leaders explicitly counteract them. We'll see more of this issue in the following section.

The informal sector

The term "informal" covers a huge range of employment, such as many of the house cleaners and handymen employed by readers of this article. The informal economy is already estimated to cover some 30% to 80% of all employment. There are many ways to view the informal sector, from both an ethical and a regulatory standpoint, as the WEF itself recognizes.

Aside from blatantly illegal forms of employment, such as drug dealing and the manufacture of knock-off fashion products, much of the informal economy consists of enterprises that create value and help under-served populations through innovative activities that the formal sector doesn't think of doing. But informal companies tend to be in geographic and economic areas that can't take advantage of the advanced analytics and robotics diffusing through the formal sector. If the formal sector becomes radically more efficient, it may put the informal sector out of business, and the urgency of reskilling will be even greater.

Can companies make the shift?

I've talked a lot about the barriers to reskilling, whose importance lies at the center of the WEF report. In their industry profiles section, they note a great deal of worry about finding skilled personnel. But there's another looming problem: an enormous number of industries have trouble making the digital shift because they admit they don't understand the opportunities.

I also wonder whether corporations will recognize the need for the new, creative roles—and even more important, whether their need for control and old-fashioned hierarchy will allow them to nurture and make positive use of people in such roles. Seventy percent of change programs fail to achieve their goals, largely due to employee resistance and lack of management support. Naturally, the topic is highly popular in the business literature. As an article in the Harvard Business Review points out, people don't tend to resist technical changes, but do resist changes in social relationships. I take this to indicate that creating empowered and independent-minded staff is extremely threatening to management, who are likely to resist the changes needed for innovation.

So I see the future of work as uncertain. We all know where we'd like to go: a world of self-realized individuals contributing to a better life for all. The WEF report has seriously underestimated the hurdles that lie in our path. The WEF recognizes that corporations see reskilling as a burden, but the difficulty goes much farther—many managers will see it as a threat. Other forces in society may also be suspicious of empowered individuals.

Can society reap the benefits of automation? It will require more than an appeal to management to do a modest amount of reskilling. Advocates must reach out and present automation as a vision that’s more appealing to the general public than hanging on to old jobs and ways of relating. Touting the technical benefits of automation will not be enough to overcome fears, nor will it produce the kind of automation that goes beyond replacing workers. A change of such historic social impact must be presented as a social movement—and a non-partisan one. When ordinary people demand a new relation to technology, education, and the workplace, they may be able to redeem from the cocktail of analytics, devices, and big data a life of meaning and productive contributions.

Categories: Technology

Four short links: 26 September 2018

O'Reilly Radar - Wed, 2018/09/26 - 03:50

Walmart's Blockchain, Machine Learning and Text Adventures, Algorithmic Decision-Making, and Networked Brains

  1. Walmart Requires Lettuce, Spinach Suppliers to Join Blockchain (WSJ Blog) -- built on Hyperledger, by way of IBM. I read IBM's brief but still can't figure out the benefits over, say, Walmart running their own APIed database app, but I suspect it has to do with "this way, EVERY blockchain participant has to buy a big app from IBM, instead of just Walmart buying something to run for others to contribute to." (via Dan Hon)
  2. Inform 7 and Machine Learning (Emily Short) -- TextWorld’s authors feel we’re not yet ready to train a machine agent to solve a hand-authored IF game like Zork—and they’ve documented the challenges here much more extensively than my rewording above. What they have done instead is to build a sandbox environment that does a more predictable subset of text adventure behavior. TextWorld is able to automatically generate games containing a lot of the standard puzzles.
  3. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems -- session notes from a day-long workshop the EFF ran with the Center on Race, Inequality, and the Law.
  4. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains -- Five groups of three subjects successfully used BrainNet to perform the Tetris task, with an average accuracy of 0.813. Furthermore, by varying the information reliability of the senders by artificially injecting noise into one sender's signal, we found that receivers are able to learn which sender is more reliable based solely on the information transmitted to their brains. Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a "social network" of connected brains.

Four short links: 25 September 2018

O'Reilly Radar - Tue, 2018/09/25 - 04:00

Software Engineering, ML Hardware Trends, Time Series, and Eng Team Playbooks

  1. Notes to Myself on Software Engineering -- Code isn’t just meant to be executed. Code is also a means of communication across a team, a way to describe to others the solution to a problem. Readable code is not a nice-to-have; it is a fundamental part of what writing code is about. A solid list of advice/lessons learned.
  2. Machine Learning Shifts More Work To FPGAs, SoCs -- compute power used for AI/ML is doubling every 3.5 months. FPGAs and ASICs are already predicted to be 25% of the market for machine learning accelerators in 2018. Why? FPGAs and ASICs use far less power than GPUs, CPUs, or even the 75 watts per hour Google’s TPU burns under heavy load. [...] They can also deliver a performance boost in specific functions chosen by customers that can be changed along with a change in programming.
  3. Time Series Forecasting -- one of those "three surprising things" articles. The three surprising things: You need to retrain your model every time you want to generate a new prediction; sometimes you have to do away with train/test splits; and the uncertainty of the forecast is just as important as, if not more important than, the forecast itself.
  4. Health Monitor -- Atlassian's measures of whether your team is doing well. Their whole set of playbooks is great reading for engineering managers.

Handling real-time data operations in the enterprise

O'Reilly Radar - Mon, 2018/09/24 - 04:17

Getting DataOps right is crucial to your late-stage big data projects.

At Strata 2017, I premiered a new diagram to help teams understand why and when projects fail:

Early on in projects, management and developers are responsible for the success of a project. As the project matures, the operations team is jointly responsible for the success.

I've taught in situations where the operations team members complain that no one wants to do the operational side of things. They're right. Data science is the sexy thing companies want. The data engineering and operations teams don't get much love. The organizations don’t realize that data science stands on the shoulders of DataOps and data engineering giants.

What we need to do is give these roles a sexy title. Let's call these operational teams that focus on big data: DataOps teams.

What does the Ops say?

Companies need to understand there is a different level of operational requirements when you're exposing a data pipeline. A data pipeline needs love and attention. For big data, this isn't just making sure cluster processes are running. A DataOps team needs to do that and keep an eye on the data.

With big data, we're often dealing with unstructured data or data coming from unreliable sources. This means someone needs to be in charge of validating the data in some fashion. This is where organizations get into the garbage-in-garbage-out downward cycle that leads to failures. If this dirty data proliferates and propagates to other systems, we open Pandora’s box of unintended consequences. The DataOps team needs to watch out for data issues and fix them before they get copied around.
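As a minimal sketch of what such a validation gate might look like (the field names and rules below are invented for illustration, not taken from any particular pipeline), dirty records can be quarantined before they propagate:

```python
# Hypothetical DataOps validation gate: check incoming records before they
# propagate downstream. Field names and rules are invented for illustration.

def validate_record(record):
    """Return a list of problems found in one record; an empty list means clean."""
    problems = []
    if not isinstance(record.get("user_id"), int):
        problems.append("user_id missing or not an integer")
    amount = record.get("amount")
    if amount is None or amount < 0:
        problems.append("amount missing or negative")
    return problems

def partition(records):
    """Split a batch into clean records and quarantined (record, problems) pairs."""
    clean, quarantined = [], []
    for record in records:
        problems = validate_record(record)
        if problems:
            quarantined.append((record, problems))
        else:
            clean.append(record)
    return clean, quarantined

batch = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": "oops", "amount": -5},  # dirty: wrong type and negative amount
]
clean, quarantined = partition(batch)
```

In a real pipeline, the quarantined records would go to a dead-letter queue or review table for the DataOps team to triage, rather than being silently dropped or copied onward.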

These data quality issues bring a new level of potential problems for real-time systems. Worst case, the data engineering team didn’t handle a particular issue correctly and you have a cascading failure on your hands. The DataOps team will be at the forefront of figuring out if a problem is data or code related.

Shouldn't the data engineering team be responsible for this? Data engineers are software developers at heart. I've taught many and interacted with even more. I wouldn't let 99% of data engineers I’ve met near a production system. There are several reasons why—such as a lack of operational knowledge, a lack of operational mindset, and being a bull in your production china shop. Sometimes, there are compliance issues where there has to be a separation of concerns between the development and production data. The data engineering team isn’t the right team to handle that.

That leaves us with the absolute need for a team that understands big data operations and data quality. They know how to operate the big data frameworks. They’re able to figure out the difference between a code issue and a data quality issue.

Real-time: The turbo button of big data

Now let's press the turbo button and expand this to include batch and real-time systems.

Outages and data quality issues are painful for batch systems. With batch systems, you generally aren't losing data. You're falling behind in processing or acquiring data. You'll eventually catch up and get back to your steady state of data coming in and being processed on time.

Then there's real time. An outage for real-time systems brings a new level of pain. You're dealing with the specter of permanently losing data. In fact, this pain during downtime is how I figure out if a company really, really needs real-time systems. If I tell them they’ll need a whole new level of service level agreement (SLA) for real time and they disagree, that probably means they don’t need real time. Having operational downtime for your real-time cluster should be so absolutely painful that you will have done everything in your power to prevent an outage. An outage of your real-time systems for six hours should be a five-alarm fire.

All of this SLA onus falls squarely on the DataOps team. They won’t just be responsible for fixing things when they go wrong; they’ll be an active part of the design of the system. DataOps and data engineering will be choosing technologies that design with the expectation of failure. The DataOps team will be making sure that data moves, preferably automatically, to disaster recovery or active-active clusters. This is how you avoid six-hour downtimes.

Busting out real-time technologies and SLA levels comes at the expense of conceptual and operational complexity. When I mentor a team on their real-time big data journey, I make sure management understands that the architects and developers aren’t the only ones who need new skills. The operations teams will need new skills and to learn the operations of new technologies.

There isn’t an “I” in DataOps, either

In my experience, the leap in complexity from small data to real-time big data is 15x. Once again, this underscores the need for DataOps. It will be difficult for a single person to keep up with all of the changes in both small data and big data technologies. The DataOps team will need to specialize in big data technologies and keep up with the latest issues associated with them.

As I mentored more teams on their transition to real-time systems, I saw common problems across organizations. It was because the transition to real-time data pipelines brought cross-functional changes.

With a REST API, for example, the operations team can keep their finger on the button. They have fine-grained control over who accesses the REST endpoint, how, and why. This becomes more difficult with a real-time data pipeline. The DataOps team will need to be monitoring the real-time data pipeline usage. First and foremost, they’ll need to make sure all data is encrypted and that access requires a login.

A final important facet of DataOps is dealing with data format changes. With real-time systems, there will be changes to the data format. This will be a time when the data engineering and DataOps teams need to work together. The data engineering team will deal with the development and schema sides of the problem. The DataOps team will need to deal with production issues arising from these changes and triage processing that fails due to a format change.

If you still aren’t convinced, let me give it one last shot

Getting DataOps right is crucial to your late-stage big data projects. This is the team that keeps your frameworks running and your data quality high. DataOps adds to the virtuous upward cycle of good data. As you begin a real-time or batch journey, make sure your operations team is ready for the challenges that lie ahead.

This post is part of a collaboration between O'Reilly and Mesosphere. See our statement of editorial independence.

Four short links: 24 September 2018

O'Reilly Radar - Mon, 2018/09/24 - 04:15

Continuous Delivery, Turing Complete Powerpoint, ARPA-E, and Observability

  1. Drone -- a continuous delivery platform built on Docker, written in Go. A continuous delivery system built on container technology. Drone uses a simple YAML configuration file, a superset of docker-compose, to define and execute pipelines inside Docker containers.
  2. On the Turing Completeness of Powerpoint (YouTube) -- Video highlighting my research on PowerPoint Turing Machines for CMU's SIGBOVIK 2017. (via Andy Baio)
  3. ARPA-E: Successful, and Struggling -- In Cory Doctorow's words, ARPA-E is a skunkworks project that gives out grants for advanced sustainable energy research that's beyond the initial phases but still too nascent to be commercialized. They've focused on long-term energy storage (a key piece of the picture with renewables) and the portfolio of inventions that have emerged from their funding is mind-bogglingly cool. Reminds me of Doing Innovation in the Capitalist Economy, by Bill Janeway, who argues that the state funds early research until VCs have commercialization opportunities (this explains why VCs are heavy in biotech and internet...they've been foci of state-funded research for decades). Such a good book, by the way.
  4. Structured Logs vs. Events (Twitter) -- Charity Majors drops some great clue bombs about observability. The most effective way to structure your instrumentation, so you get the maximum bang for your buck, is to emit a single arbitrarily wide event per request per service hop. We're talking wide. We usually see 200-500 dimensions in a mature app. But just one write. [...] All of it. In one fat structured blob. Not sprinkled around your code in functions like satanic fairy dust. You will crush your logging system that way, and you'd need to do exhaustive post-processing to recreate the shared context by joining on request-id (if you're lucky).

The virtues of privacy by design

O'Reilly Radar - Fri, 2018/09/21 - 06:55

How we can put privacy at the heart of our design processes.

3 Docker Compose features for improving team development workflow

O'Reilly Radar - Fri, 2018/09/21 - 04:00

Using advanced Docker Compose features to solve problems in larger projects and teams.

A developer today is bombarded with a plethora of tools that cover every possible problem you might have—but, selecting which tools to use is The New Problem. Even in container-land, we're swimming in an ocean of tool choices, most of which didn't exist a few years ago.

I'm here to help. I make a living out of helping companies adopt a faster and more efficient workflow for developing, testing, packaging, and shipping code to servers. Today that means containers, but it's often not just the tool that's important; it's the way you use it and the way you scale it in a team.

For now, let's focus on Docker Compose. It has become the de facto standard for managing container-based developer environments across any major OS. For years, I've consistently heard about teams tossing out a list of tools and scripts this single tool replaces. That's the reason people adopt Compose. It works everywhere, saves time, and is easy to understand.

But getting it to work across dev, test, and prod for a team can be tricky. Here are three main areas to focus on to ensure your Compose workflow works for everyone.

Environment variables

Eventually, you'll need a Compose file to be flexible, and you'll learn that you can use environment variables inside the Compose file. Note that this is not related to the YAML "environment" key, which sets variables inside the container on startup. With the ${VARNAME} notation, Compose resolves these values dynamically while processing the YAML file. The most common uses are setting the container image tag or the published port. As an example, if your docker-compose.yml file looks like this:

version: '2'
services:
  ghost:
    image: ghost:${GHOST_VERSION}

...then you can control the image version used from the CLI like so:

GHOST_VERSION=2 docker-compose up

You can also set those variables in other ways: by storing them in a .env file, by setting them at the CLI with export, or even setting a default in the YAML itself with ${GHOST_VERSION:-2}. You can read more about variable substitution and various ways to set them in the Docker docs.
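For instance, a Compose file can bake in its own fallback with the default syntax (a sketch reusing the hypothetical ghost service from above):

```yaml
# docker-compose.yml sketch: resolves to ghost:2 whenever GHOST_VERSION is
# unset, whether you forgot to export it or have no .env file present.
version: '2'
services:
  ghost:
    image: ghost:${GHOST_VERSION:-2}
```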

Extension fields
A relatively new and lesser-known feature is Extension Fields, which lets you define a block of text in Compose files that is reused throughout the file itself. This is mostly used when you need to set the same environment objects for a bunch of microservices, and you want to keep the file DRY (Don't Repeat Yourself). I recently used it to set all the same logging options for each service in a Compose file like so:

version: '3.4'
x-logging:
  &my-logging
  options:
    max-size: '1m'
    max-file: '5'
services:
  ghost:
    image: ghost
    logging: *my-logging
  nginx:
    image: nginx
    logging: *my-logging

You'll notice a new section starting with x-, which defines the template. You name the block with & (a YAML anchor) and reference it anywhere in your Compose file with * and the name (a YAML alias). Once you start to use microservices and have hundreds or more lines in your Compose file, this will likely save you considerable time and ensure consistency of options throughout. See more details in the Docker docs.

Control your Compose Command Scope

The docker-compose CLI controls one or more containers, volumes, networks, etc., within its scope. It uses two things to create that scope: the Compose YAML config file (it defaults to docker-compose.yml) and the project name (it defaults to the directory name holding the YAML config file). Normally you would start a project with a single docker-compose.yml file and execute commands like docker-compose up in the directory with that file, but there's a lot of flexibility here as complexity grows.

As things get more complex, you may have multiple YAML config files for different setups and want to control which one the CLI uses, like docker-compose -f custom-compose.yml up. This command ignores the default YAML file and only uses the one you specify with the -f option.

You can combine many Compose files in a layered override approach. Each one listed in the CLI will override the settings of the previous (processed left to right)—e.g., docker-compose -f docker-compose.yml -f docker-override.yml.
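As a sketch of that layering (the service and port values below are illustrative), an override file only needs to state what changes relative to the base file:

```yaml
# docker-compose.override.yml sketch: keys here override or extend the
# matching keys in docker-compose.yml (the service name must match).
version: '2'
services:
  ghost:
    ports:
      - "8080:2368"   # publish a dev-only port without editing the base file
```

Note that a plain docker-compose up will also merge a file named docker-compose.override.yml automatically if one exists alongside docker-compose.yml.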

If you manually change the project name, you can use the same Compose file in multiple scopes so they don't "clash." Clashing happens when Compose tries to control a container that already has another one running with the same name. You likely have noticed that containers, networks, and other objects that Compose creates have a naming standard. The standard comprises three parts: projectname_servicename_index. We can change the projectname, which, again, defaults to the directory name, by passing -p at the command line. So if we had a docker-compose.yml file like this:

version: '2'
services:
  ghost:
    image: ghost:${GHOST_VERSION}
    ports:
      - ${GHOST_PORT}:2368

...and we had that file in a directory named "app1" and started the ghost app with inline environment variables like this:

app1> GHOST_VERSION=2 GHOST_PORT=8080 docker-compose up

We'd see a running container named:

app1_ghost_1
Now, if we want to run an older version of ghost side-by-side at the same time, we could do that with this same Compose file, as long as we change two things. First, we need to change the project name to ensure the container name will be different and not conflict with our first one. Second, we need to change the published port so they don't clash with any other running containers.

app1> GHOST_VERSION=1 GHOST_PORT=9090 docker-compose -p app2 up

If I check running containers with a docker container ls, I see:

app1_ghost_1 running ghost:2 on port 8080
app2_ghost_1 running ghost:1 on port 9090

Now you could pull up two browser windows and browse both 8080 and 9090 with two separate ghost versions (and databases) running side by side.

Most of what I've learned on advanced Compose workflows has come from trying things I've learned in the Docker docs, as well as the teams I work with to make development, testing, and deployments easier. I share these learnings everywhere I can, and I encourage you to do the same. What other features or team standards have you found useful with Docker Compose? Please share with me and the community on Twitter @BretFisher.

Four short links: 21 September 2018

O'Reilly Radar - Fri, 2018/09/21 - 04:00

Linux Foundation, Data Unit Tests, Software Pooping the World, and Predictions

  1. Something is Rotten in the Linux Foundation (Val Aurora) -- Linux Foundation sponsors should demand that the Linux Foundation release all former employees from their non-disparagement agreements, then interview them one-on-one, without anyone currently working at the foundation present. At a minimum, the sponsors should insist on seeing a complete list of ex-employee NDAs and all funds paid to them during and after their tenure. If current Linux Foundation management balks at doing even that, well, won’t that be interesting?
  2. Deequ -- unit tests for data.
  3. If Software is Eating the World, What Will Come Out the Other End? (John Battelle) -- So far, it’s mostly shit. More rhetoric than depth, but 10/10 for rhetoric. You start a frame, you'd better be prepared to end it.
  4. 25 Years of Wired's Predictions (Wired) -- always ask yourself "what reward does this predictor get for making a good prediction?" In the case of people who write in magazines: a cent a word or so. In the case of Michael Crichton, who proclaimed in the fourth issue that “it is likely that what we now understand as the mass media will be gone within 10 years—vanished, without a trace,” he didn't even need the paycheck. (For good reading on predictions, try Superforecasting by Phil Tetlock.)

Shaping the stories that rule our economy

O'Reilly Radar - Thu, 2018/09/20 - 06:15

The economy we want to build must recognize increasing the value to and for humans as the goal.

Mariana Mazzucato opens her new book, The Value of Everything, with an ironic reminder from Plato’s Republic: “Our first business is to supervise the production of stories and choose those we think suitable and reject the rest.” She is acknowledging the way that much of our society is, as George Soros reminds us, reflexive, shaped by ideas that become true or false as we are persuaded to believe them, and also pointing out that as a result, controlling the stories we are told is a prime instrument of power.

One of those mind-shaping stories is about the source of value in the economy—where it comes from, who produces it, and who should get the benefits. It’s easy, Mazzucato argues, to believe the stories we have been told are simply true, and to no longer question them. And question them we must, because the stories that rule our economy today are often wrong and, at best, incomplete.

So, we must ask why we have been led to believe by our modern economic accounting that the owners of capital are the primary value creators in our society, deserving of the greatest part of the fruits of increased productivity; that government by definition is outside “the production boundary” (that is, does not create any value but simply redistributes it or consumes it); and that unpaid household labor, too, creates no economic value. Are these stories correct, or merely self-serving?

Mazzucato spends much of the first part of her book giving a master class on the historical struggle to define value and its curious absence from the discussion in modern economics.

“Until the mid-19th century,” Mazzucato writes, “almost all economists assumed that in order to understand the prices of goods and services, it was first necessary to have an objective theory of value, a theory tied to the conditions in which those goods and services were produced, including the time needed to produce them and the quality of the labor employed; and the determinants of 'value' actually shaped the price of goods and services. Then, this thinking began to go in reverse. Many economists came to believe that the value of things was determined by the 'market'—or in other words, what the buyer was prepared to pay. All of a sudden, value was in the eye of the beholder. Any goods and services being sold at an agreed market price were by definition value-creating.”

That this economic framework leaves out much of what makes human life worth living is obvious to most of us. But that’s not the primary focus of Mazzucato’s critique. She takes aim instead at the legitimization of financial rent extraction by this model. Part and parcel of this modern theory of value is the idea that value creators are entitled to whatever they can extract from the rest of society. The wizards of Wall Street who brought us the 2008 financial crisis were, by this new definition, creating value even when knowingly selling defective mortgage securities, as long as they were able to find willing buyers for their goods. In 2009, with the world economy in tatters, Goldman Sachs CEO Lloyd Blankfein was able to say with a straight face that his employees were the most productive in the world.

The inclusion of finance as a productive activity is a remarkably recent addition to the economic playbook. It was only in 1993 that the System of National Accounts, which controls how countries calculate their GDP, began to count financial activity as value added, rather than simply as a cost to businesses. Mazzucato notes, “This turned what had previously been viewed as a deadweight cost into a source of value added overnight.”

And with that change to the idea of value came changes to the regulations that managed the activities of finance, eliminating the clear dividing lines that had previously separated finance as an enabler of industrial investment from its speculative activities, which were previously tolerated but frowned on. (One of the things that jumped out at me in this history is the way that government measurement of value and consequent regulation resembles, albeit in slow motion, the algorithmic regulation carried out by internet platforms such as Google and Facebook, Amazon and Apple. Government is the platform for our economy as surely as Apple is the platform for the App Store; its regulations shape what is allowed, who gets what and why.)

Pharma pricing, too, comes in for a drubbing in Mazzucato’s book. If price determines value, it is perfectly legitimate for a drug company to charge whatever the market will bear. After all, proponents argue, the cost of a drug should not be based on what it took to produce it, but on what it is worth to someone suffering from the condition it alleviates.

Mazzucato does not find this argument persuasive. Picking up on themes from her earlier book, The Entrepreneurial State, she points out that far more of the costs of the research on which new drugs are based have been borne by the public in the form of government-funded R&D. The so-called investors have come in only when government has taken most of the risks. Yet, because of our distorted theory of value, government gets no credit, and companies that have brought relatively little to the table reap outsized returns.

Mazzucato deeply questions the idea, widespread in modern Western societies and embedded in the very definition of GDP, that government does not in itself produce value. She makes the case that mission-driven government investment has created many of the marvels of the modern world, and it is only the oddity of our beliefs about value that keeps government from reaping the benefit on behalf of the public that footed the bill.

In a recent email conversation with me on the need to rethink the deal between private and public investors, Mazzucato wrote, “If the public sector is a co-creator of wealth and not just a redistributor or enabler, then why does it not get a return for its risk-taking? That return could come in different forms, from conditions on reinvestment (to prevent hoarding and financialization) to conditions on the structure of intellectual property rights (less strong and wide) to equity in downstream investments.” She pointed out that Tesla and Solyndra got virtually the same amount in government-guaranteed loans, but while the taxpayer picked up the loss for Solyndra, they did not get any upside from Tesla. The price per share went from $9 when Tesla took out the loan in 2009 to $90 when it was paid back in 2013—yet the deal the government struck only allowed them to get three million shares if the loan was not paid back (that is, because the company was failing). No private investor would take such a bad deal! While there are those who argue the government gets its return via future tax receipts and downstream economic spillovers, I agree with Mazzucato that any story in which losses belong to the government, but any gains belong to the private sector “value creators” is one of those self-serving stories Plato advised those in power to tell.
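A back-of-envelope sketch of the scale involved, using only the share figures quoted above (my arithmetic, not Mazzucato's):

```python
shares = 3_000_000           # shares the deal granted only on default
price_at_loan = 9            # dollars per share, 2009
price_at_repayment = 90      # dollars per share, 2013

# Had the government held the same warrant exercisable on success rather
# than on failure, its paper gain at repayment would have been roughly:
forgone_upside = shares * (price_at_repayment - price_at_loan)
print(f"${forgone_upside:,}")  # $243,000,000
```

A quarter of a billion dollars of upside forgone, against a full downside exposure: that is the asymmetry no private investor would accept.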

Mazzucato's point, though, is not to “try to argue for one correct theory of value,” but “to bring back value theory as a hotly debated area.” She notes that “value is not a given thing, unmistakably inside or outside the production boundary.” The definition of productivity, going back to Ricardo, is that of an excess of production over that which must be consumed, leaving the balance available for reinvestment in further production. But if instead the surplus is hoarded or frittered away by those who did not actually produce it (the classical definition of “economic rents”), the economy becomes less productive. In the new economics, though, she points out, “incomes must by definition reflect productivity. There is no space for rents, in the sense of people getting something for nothing.”

From Mazzucato’s perspective, equating price and value has a real-world consequence: lower economic productivity. The modern economy is rife with economic rents, particularly those imposed by the financial industry, but also various forms of government-granted monopolies, the hereditary accumulation of control over scarce resources, or outright use of force. The modern idea that rents are simply an “imperfection,” a barrier that can be competed away, has replaced the classical economic idea of rent as unearned income; as a result, income received from unproductive activities is treated as equivalent to income derived from the production of real value.

In the spirit of adding to the debate, I would make the case that rethinking the relative role of industries such as finance or pharma versus government as value creators is only the beginning. While Mazzucato gives a nod to the absence of household labor and what we now call the caring economy from our economic accounting, the book would benefit from a clarion call for putting these squarely inside the production boundary.

In each of the past debates about value, more and more human activity came to be understood as value creating not just because economists persuaded us of the new idea, but because of changes in society itself. The ideas of the 18th-century physiocrats made sense in a world where people rarely moved far from their homes, and land and agriculture were the principal sources of value. As the economy became less local, it was much easier to see that without the productive benefit of trade and all the services that make it possible, agricultural produce might rot in the field for lack of distribution, or never be grown in the first place. As the industrial revolution provided more and better goods, how could we not recognize industry as a kind of productivity? And as finance and debt allowed for the acceleration not just of production but of consumption, thereby enabling faster circulation within the economy, how could we fail to recognize some productive role for finance?

So, too, it seems to me, that as machines make the production of the necessities of life less expensive, humanity has the opportunity for greater leisure. Though Mazzucato largely omits this from her history, at some point we allowed entertainment within the production boundary. How did we get from the point where Adam Smith specifically called out opera singers and dancers as examples of unproductive members of society to one in which we think of our creative industries as major contributors to the economy, yet still fail to recognize our mothers, our teachers, our caregivers, those family members who clean our homes or put dinner on the table, as productive?

As we progress further into the age of intelligent machines, able to do more and more of even the routine cognitive labor that makes up much of the modern economy, we will clearly need to confront this curious gap in our economic accounting.

In his fascinating new book AI Superpowers, Kai-Fu Lee, China's leading AI investor, looks at the future-of-work question and makes the case that we need to look not to solutions like universal basic income, but instead at a "Social Investment Stipend" that pays people to invest time in other people, in their community, and in the environment. To use Mazzucato’s language, this would be to put the caring economy inside the production boundary.

Yet another area Mazzucato points out as needing further elucidation is how we deal with “activities that appear to add value to the economy but whose output is not priced.” How might we measure the value exchange between platforms like Google, Facebook, and YouTube and their users? How might we measure the value of free and open source software? While economists have developed workarounds to include these kinds of services in the national accounts, it seems to me that these are the equivalent of the epicycles of Ptolemaic astronomy, increasingly contorted attempts to work around the failings of the underlying theory.

I am intrigued by signs that we are entering a new kind of economy that runs in parallel with the money economy measured by economists. For example, in economic theory, price signaling is the primary coordinator of the market's "invisible hand," yet Google search is a massive information market in which price plays no role. The matching market of information producers and consumers is coordinated by hundreds of other algorithmically collated information signals. There is a parallel priced market (for advertising) that runs as a kind of sidecar on the search market itself, and that provides Google’s enormous profits. But how are we to measure the value exchange in the primary search market? In some cases, content is produced precisely to drive advertising. In other cases, the content is produced to drive transactions. But vast amounts of it are produced and consumed merely for the joy of producing and consuming, an economy of abundance that reminds us that what is bought and sold is not all that humans exchange.

There is also more to be said on the questions Mazzucato raises about the extent to which finance is actually productive, versus extractive. “The celebration of finance by political leaders and expert bankers is, however, not universally shared among economists,” she writes. “It clashes with the common experience of business investors and households, for whom financial institutions’ control of the flow of money seems to guarantee the institutions’ own prosperity far more readily than that of their customers.”

In this critique, she follows in the footsteps of earlier economists, who see economic rents as the primary enemies of equitable value distribution. In an on-stage conversation at Bloomberg last week, she remarked to me: “Most of the people who like to cite Adam Smith have obviously never read him. For him, the ‘free market’ did not mean ‘free of government regulation.’ It meant ‘free of rents’, and he recognized that one of the jobs of government is to keep the market that way.” Of course, as she notes in the book, government is currently doing a poor job of maintaining a market free of rents, precisely because our current definition of “value” allows rent-seekers free rein.

Figure 2. Mariana Mazzucato and Tim O'Reilly at Bloomberg. Image: O'Reilly Media.

“The 'banking problem' arose,” she writes, “because as the 20th century progressed, banks’ role in fueling economic development steadily diminished in theory and practice—while their success in generating revenue and profit, through operations paid for by households, firms, and governments, steadily increased.” As Mazzucato notes, very little business investment today is provided by banks; far from being an enabler of economic activity, they have become net value extractors. Said more pungently, venture capitalist and economist Bill Janeway once explained to me that at some point banks stopped serving their customers and started trading against them.

Mazzucato makes the case that we need policies that recognize when the financial industry is simply trading existing financial assets, rather than funding additional production in the real economy. These policies might include tax reforms such as financial transaction taxes that would penalize short-term trades, or setting up new institutions like public banks that provide patient long-term committed finance. (Here is a recent policy brief she wrote on patient long-term capital.)

I would have also liked to see further discussion of the way that finance is different from what I call the traditional “Adam Smith market” of goods and services traded among people and firms, and has become something profoundly different—a betting market.

Janeway’s fascinating book Doing Capitalism in the Innovation Economy tackles this problem by describing the economy as a “three-player game” between government, “the market” (that is, the Adam Smith-style market that I describe above), and financial capitalism, which is a futures betting market on the other two. The debate over which parts of banking belong in the traditional economy and which are part of an entirely new virtualized betting economy independent of traditional market fundamentals needs much further thought.

The recognition that we use the term “the market” for two very different things would be a useful addition to the discussion. In both popular and scholarly discussions (including occasionally in Mazzucato’s book), we see this confusion between stock market value and the size and profitability of a business in the real economy.

We constantly see references to the "size" of companies as their market capitalization, not their revenues, their profits, their number of employees, or other real-world factors. For example, when people refer to Uber as a “$60 billion company,” they are referring to the betting odds, expressed in dollars, as to its market “worth.” Meanwhile, Uber currently has approximately $10 billion in revenue, on which it earns zero—or to be more exact, loses billions of dollars. So the financial market bet is completely at variance with the current reality in the market of actual goods and services. In the best case, this betting market brings previously unimaginable new futures into being. In the worst case, though, it is a bet on future value extraction via the establishment of economic rents, not on true value creation!

In my own book, WTF? What’s the Future and Why It’s Up to Us, I call this special betting economy currency “supermoney.” In Silicon Valley, too often a company must be understood not as a producer of goods and services, but as a financial instrument, designed to get funding, perhaps to win attention and popularity and get acquired or go public based on hope and enthusiasm, but perhaps never be a working business.

This special currency allows highly valued companies to pay their employees more, acquire other companies cheaply, and often create enormous wealth for their investors and the early-stage employees, who are now increasingly able to cash in on the company long before it has attained any real profits. They may have disrupted and destroyed existing businesses that make money the old-fashioned way, inflated the value of local real estate, and created other economic damage. Financing entrepreneurial risk-taking is a socially productive use of finance; financing entrepreneurial theater for the benefit of gullible investors while allowing insiders to take profits off the table before the economic outcome is clear is another version of the financial market game of socializing losses and privatizing gains.

The value of a company’s stock is, in theory, the net present value of a stream of its future profits–what would it be worth to own that company over time? Because the future is not certain, the price of a stock is essentially a bet on the company’s future growth and earnings. Benjamin Graham, the father of value investing, that now rare practice of which Warren Buffett is the leading practitioner, explained this concept by saying that in the short run, the market is like a voting machine, but in the long run, the market is like a weighing machine. But what happens when companies are valued on growth alone, and never expected to produce actual profits (i.e., a productive surplus)? This is akin to a horse race where bets are settled before the race has finished being run. The financial betting economy and the real economy of goods and services become radically disconnected.
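The valuation logic in this paragraph can be sketched directly (a toy illustration with made-up profit streams and discount rate, not a valuation tool):

```python
def npv(profits, rate):
    """Net present value of a stream of expected future profits,
    discounted at an annual rate (profits[0] arrives one year out)."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(profits, start=1))

# Five flat years of $10M in profit, discounted at 8%:
flat = npv([10_000_000] * 5, 0.08)

# The same $50M total arriving earlier is worth more today,
# because near-term profits are discounted less:
front = npv([30_000_000, 10_000_000, 5_000_000, 3_000_000, 2_000_000], 0.08)
assert front > flat
```

The disconnect the paragraph describes is what happens when the market prices `profits` that are never expected to materialize at all: the weighing machine has nothing to weigh.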

But more importantly, I describe our financial markets as the first rogue AI, hostile to humanity, much like the rogue paperclip maximizer in Nick Bostrom’s Superintelligence. When we optimize for the financial betting markets, humans are a cost to be eliminated, because increased corporate profits drive huge multiples in the betting markets where wealth is being extracted. The economy we want to build must recognize increasing the value to and for humans as the goal. To build that economy, we need a strenuous engagement with the questions Mariana Mazzucato asks in this book, an engagement deeply informed by the history of economic thought and how we got where we are today.

I found this book enormously stimulating. You will, too. Highly recommended.

Continue reading Shaping the stories that rule our economy.

Categories: Technology

Four short links: 20 September 2018

O'Reilly Radar - Thu, 2018/09/20 - 04:05

Code of Conduct Software, Decision Matrices, Festival of Maintenance, Ambisonic 3D

  1. CoC Beacon -- GoFundMe to get a SaaS product to provide project maintainers with a complete set of tools for managing their codes of conduct at all stages: setting up their enforcement teams, documenting their processes, reporting incidents, managing incident reports, forming consensus about enforcement decisions, and communicating clearly with reporters and offenders. I gave. (via BoingBoing)
  2. Decision Matrix -- like the Eisenhower important/urgent 2x2, this is a consequential/reversible 2x2. Nice.
  3. 2018 Festival of Maintenance -- well worth celebrating, as Why I am Not a Maker pointed out.
  4. Ambisonic 3D Microphones -- nifty tech that's useful for VR. MrRadar on Hacker News explains: It's basically the same concept as differential stereo encoding (where you record an R+L and R-L channel and use them to derive R and L, or just play the R+L channel for mono) extended to all three axes to create surround sound (so you have a sum channel, a horizontal difference channel, a vertical difference channel, and a depth difference channel). This was all developed in the 70s (and thus out of patent today) but abandoned for more direct means of encoding surround since it was more complex to process the signals for not much gain. Of course now with DSPs, the signal processing is much easier, and with VR there's suddenly a niche for it to fill since it fully preserves the 3D soundscape (unlike, e.g., 7.1 surround, which only records seven point sources at fixed positions).
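The stereo case the comment describes (mid/side, or sum/difference, coding) is easy to sketch; first-order ambisonics applies the same trick once per spatial axis. A toy example with exact binary-fraction sample values so the round trip is bit-exact:

```python
def ms_encode(left, right):
    """Sum/difference encode: `mid` is the mono mix, `side` the stereo width."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Recover left/right exactly from the sum and difference channels."""
    return ([m + s for m, s in zip(mid, side)],
            [m - s for m, s in zip(mid, side)])

left, right = [0.5, 0.25, -0.125], [0.25, -0.5, 0.125]
mid, side = ms_encode(left, right)
assert ms_decode(mid, side) == (left, right)  # lossless round trip
# Playing `mid` alone gives a correct mono downmix -- the compatibility
# property that the ambisonic sum (W) channel inherits.
```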

Continue reading Four short links: 20 September 2018.

Categories: Technology

Four short links: 19 September 2018

O'Reilly Radar - Wed, 2018/09/19 - 04:30

Golden Age of Software, Another Better C, Robot String Art, and Automated Game Design

  1. Falling in Love with Rust (Bryan Cantrill) -- what caught my eye was: I have believed (and continue to believe) that we are living in a Golden Age of software, one that will produce artifacts that will endure for generations.
  2. Kit -- a programming language designed for creating concise, high-performance cross-platform applications. Kit compiles to C, so it’s highly portable; it can be used in addition to or as an alternative to C, and was designed with game development in mind.
  3. String Art from the Hand of a Robot -- NP-hard geometry from the claws of a mighty robot.
  4. Automated Game Design via Conceptual Expansion -- In this paper, we introduce a method for recombining existing games to create new games through a process called conceptual expansion.

Continue reading Four short links: 19 September 2018.

Categories: Technology

Four short links: 18 September 2018

O'Reilly Radar - Tue, 2018/09/18 - 04:20

Causal Inference, Remote Only, Human Augmentation, and C64 OS

  1. Seven Tools of Causal Inference (Morning Paper) -- To understand "why?" and to answer "what if?" questions, we need some kind of a causal model. In the social sciences and especially epidemiology, a transformative mathematical framework called "Structural Causal Models" (SCM) has seen widespread adoption. Pearl presents seven example tasks which the model can handle, but which are out of reach for associational machine learning systems.
  2. Remote Only -- an overview manifesto about how remote-only organizations work.
  3. Third Thumb Changes the Prosthetic Game -- very clever UI.
  4. C64 OS -- a fun project to build a useful operating system for a C64 (the C64 was introduced in 1982 and has an 8-bit, 1MHz, 6510 CPU with just 64 kilobytes of directly addressable memory, a screen resolution of 320x200 pixels, and a fixed palette of 16 colors). The explanation of the C64's constraints is engaging and the solutions interesting.

Continue reading Four short links: 18 September 2018.

Categories: Technology

Kelsey Hightower and Chris Gaun on serverless and Kubernetes

O'Reilly Radar - Tue, 2018/09/18 - 04:05

Exploring use cases for the two tools.

This episode of the O’Reilly Podcast features a conversation on serverless and Kubernetes, with Kelsey Hightower, developer advocate for Google Cloud Platform at Google (and co-author of Kubernetes: Up and Running), and Chris Gaun, Kubernetes product manager at Mesosphere.

Discussion points:

  • Why the biggest issue people face when deciding to start using Kubernetes is an underestimation of the learning curve
  • Whether or not there is a competition between Kubernetes containers and serverless
  • Considerations when attempting to move an existing application to a serverless architecture
  • The new open source frameworks that work on Kubernetes (including Kubeless and OpenFaaS)
  • Workflow engines being used on top of Kubernetes (including Kubeflow and Argo)
  • Security issues regarding Kubernetes clusters

This post is a collaboration between Mesosphere and O’Reilly. See our statement of editorial independence.

Continue reading Kelsey Hightower and Chris Gaun on serverless and Kubernetes.

Categories: Technology

Four short links: 17 September 2018

O'Reilly Radar - Mon, 2018/09/17 - 04:10

Wasted Time, Caught Marshmallows, One-Command Language, and The 9.9%

  1. The Developer Coefficient -- While it’s a priority for senior executives to increase the productivity of their developers, the average developer spends more than 17 hours a week dealing with maintenance issues, such as debugging and refactoring. In addition, they spend approximately four hours a week on “bad code,” which equates to nearly $85 billion worldwide in opportunity cost lost annually, according to Stripe’s calculations on average developer salary by country.
  2. High-Speed, Non-Deformation Marshmallow Catching -- impressive! (via IEEE Spectrum)
  3. SUBLEQ: A Programming Language with Only One Command -- this is built of solid zomg, right down to the no-caps manifesto, aka interview with the author. (via BoingBoing)
  4. The 9.9% (The Atlantic) -- In between the top 0.1% and the bottom 90% is a group that has been doing just fine. It has held on to its share of a growing pie decade after decade. And as a group, it owns substantially more wealth than do the other two combined.
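On the SUBLEQ item: the whole language fits in a few lines of interpreter. A minimal sketch of my own (conventions vary; here a negative jump target halts, and the usual I/O addresses are omitted):

```python
def subleq(mem):
    """One-instruction interpreter: mem[b] -= mem[a]; if the result
    is <= 0, jump to c; otherwise fall through. A negative c halts."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Addition as three SUBLEQ instructions: Z -= A; B -= Z (so B += A); clear Z.
# Cells 0-8 hold code; cell 9 is scratch Z=0, cell 10 is A=5, cell 11 is B=7.
mem = [10, 9, 3,  9, 11, 6,  9, 9, -1,  0, 5, 7]
subleq(mem)
assert mem[11] == 12  # 5 + 7
```

Everything else (copy, jump, multiply) is built by chaining this one instruction, which is the whole zomg of the thing.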

Continue reading Four short links: 17 September 2018.

Categories: Technology

Four short links: 14 September 2018

O'Reilly Radar - Fri, 2018/09/14 - 04:05

Automatic Bugfixes, Research Code, Automatic Diagrams, and Alexa Mapped

  1. SapFix and Sapiens (Facebook) -- SapFix can automatically generate fixes for specific bugs, and then propose them to engineers for approval and deployment to production. I'm a huge fan of tools for software developers. This seems pretty cool.
  2. Papers With Code -- list of research papers with links to the source code, updated weekly. (via Roundup)
  3. erd -- Translates a plain text description of a relational database schema to a graphical entity-relationship diagram.
  4. Anatomy of an AI System (Kate Crawford) -- The Amazon Echo as an anatomical map of human labor, data, and planetary resources.
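For a sense of erd's input format, a schema description looks something like this (adapted from memory of the project's README, so treat the details as approximate): `*` marks a primary key, `+` a foreign key, and the final line declares cardinality, here "each Person has exactly one birth Location."

```
[Person]
*name
height
weight
+birth_location_id

[Location]
*id
city
state
country

Person *--1 Location
```

Something like `erd -i people.er -o people.pdf` would then render the diagram (file names hypothetical).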

Continue reading Four short links: 14 September 2018.

Categories: Technology

The secret to great design

O'Reilly Radar - Fri, 2018/09/14 - 04:00

Asking good design questions will elucidate problems and opportunities.

Continue reading The secret to great design.

Categories: Technology

Practical ML today and tomorrow

O'Reilly Radar - Thu, 2018/09/13 - 13:00

Hilary Mason explores the current state of AI and ML and what’s coming next in applied ML.

Continue reading Practical ML today and tomorrow.

Categories: Technology

Wait ... pizza is a vegetable? Decoding regulations using machine learning

O'Reilly Radar - Thu, 2018/09/13 - 13:00

Dinesh Nirmal explains how AI is helping supply school lunch and keep ahead of regulations.

Continue reading Wait ... pizza is a vegetable? Decoding regulations using machine learning.

Categories: Technology

Sound design and the future of experience

O'Reilly Radar - Thu, 2018/09/13 - 13:00

Amber Case covers methods product designers and managers can use to improve interactions through an understanding of sound design.

Continue reading Sound design and the future of experience.

Categories: Technology

Smarter cities through Geotab with BigQuery ML and geospatial analytics

O'Reilly Radar - Thu, 2018/09/13 - 13:00

Chad Jennings explains how Geotab's smart city application helps city planners understand traffic and predict locations of unsafe driving.

Continue reading Smarter cities through Geotab with BigQuery ML and geospatial analytics.

Categories: Technology

