Technology

115+ live online training courses opened for October, November, and December

O'Reilly Radar - Wed, 2018/10/03 - 03:00

Get hands-on training in machine learning, Python, Kubernetes, blockchain, security, and many other topics.

Learn new topics and refine your skills with more than 115 live online training courses we opened up for October, November, and December on our learning platform.

Artificial intelligence and machine learning

Beginning Data Analysis with Python and Jupyter, October 17-18

Managed Machine Learning Systems and Internet of Things, November 1-2

Essential Machine Learning and Exploratory Data Analysis with Python and Jupyter Notebook, November 5-6

Deep Learning Fundamentals, November 6

Deep Reinforcement Learning, November 14

Getting Started with Machine Learning, November 15

Hands-On with Google Cloud AutoML, November 16

Deploying Machine Learning Models to Production: A Toolkit for Real-World Success, December 3-4

Hands-on Machine Learning with Python: Clustering, Dimension Reduction, and Time Series Analysis, December 4

Blockchain

Blockchain Applications and Smart Contracts, November 15

Business

How to Give Great Presentations, October 22

Emotional Intelligence in the Workplace, November 6

60 Minutes with Barry O’Reilly: 10 Steps to Digital Transformation, November 8

Introduction to Time Management Skills, November 9

Introduction to Leadership Skills, November 12

Introduction to Project Management, November 12

Introduction to Critical Thinking, November 15

Your First 30 Days as a Manager, November 20

Managing Your Manager, November 28

Giving a Powerful Presentation, November 28

Mastering Usability Testing, December 3

Managing Team Conflict, December 4

Data science and data tools

Programming with Data: Python and Pandas, October 16

Advanced SQL Series: Proximal and Linear Interpolations, November 7

Apache Hadoop, Spark, and Big Data Foundations, November 7

Beginning Machine Learning with scikit-learn, November 7

SQL for Any IT Professional, November 8

Advanced SQL Series: Window Functions, November 13

Beginning R Programming, November 13-14

Programming with Data: Python and Pandas, November 14

Intermediate Machine Learning with scikit-learn, November 16

Hands-On Introduction to Apache Hadoop and Spark Programming, November 19-20

Python Data Handling: A Deeper Dive, November 20

Programming

Linux in 3 Hours, October 19

Scalable Concurrency with the Java Executor Framework, October 29

Scala Core Programming: Methods, Classes, and Traits, November 2

Getting Started with Python’s Pytest, November 5

Design Patterns Boot Camp, November 5-6

Beyond Python Scripts: Logging, Modules, and Dependency Management, November 7

Beyond Python Scripts: Exceptions, Error Handling, and Command-Line Interfaces, November 8

Java 11 for the Impatient, November 8

SOLID Principles of Object-Oriented and Agile Design, November 9

Clean Code, November 12

Linux Troubleshooting, November 12

An Introduction to Go for Systems Programmers and Web Developers, November 12-13

Python: The Next Level, November 13-14

Design Patterns in Java, November 13-14

Git Fundamentals, November 14-15

Scaling Python with Generators, November 15

Pythonic Design Patterns, November 16

Modern Application Development with C#, November 19-20

Learn Linux in 3 Hours, November 26

Reactive Spring Boot, November 26

Functional Programming in Java, November 26-27

OCA Java SE 8 Programmer Certification Crash Course, November 26-28

Spring Boot and Kotlin, November 27

What's New In Java, November 29

Modern Java Exception Handling, November 30

Test-Driven Development in Python, December 4

Security

CompTIA Security+ SY0-501 Crash Course, October 17-18

CompTIA Security+ SY0-501 Certification Practice Questions and Exam Strategies, October 24

Cybersecurity Offensive and Defensive Techniques in 3 Hours, November 1

Intense Introduction to Hacking Web Applications, November 2

CCNA Cyber Ops SECFND 210-250 Crash Course, November 8

CCNA Cyber Ops SECOPS Crash Course, November 12

CCNA Routing and Switching Exam Prep, November 13

CompTIA Network+ Crash Course, November 13-15

Amazon Web Services (AWS) Security Crash Course, November 14

AWS Advanced Security with Config, GuardDuty, and Macie, November 14

Introduction to Digital Forensics and Incident Response (DFIR), November 16

Architecture for Continuous Delivery, November 19

Introduction to Ethical Hacking and Penetration Testing, November 19-20

Comparing Service-Based Architectures, November 20

CompTIA PenTest+ Crash Course, November 26-27

Security Operation Center (SOC) Best Practices, November 27

CISSP Crash Course, November 27-28

Systems engineering and operations

Managing Complexity in Network Engineering, October 25

Chaos Engineering: Planning and Running Your First Game Day, November 1

Ansible in 3 Hours, November 5

AWS Certified SysOps Administrator (Associate) Crash Course, November 5-6

Kubernetes in 3 Hours, November 6

AWS Certified Cloud Practitioner Crash Course, November 6-7

9 Steps to Awesome with Kubernetes, November 7

Implementing and Troubleshooting TCP/IP, November 7

Google Cloud Platform (GCP) for AWS Professionals, November 12

Learn Serverless Application Development with Webtask, November 13

Introduction to Google Cloud Platform, November 14-15

IP Subnetting from Beginning to Mastery, November 15-16

Getting Started with OpenStack, November 16

Getting Started with Continuous Integration (CI), November 19

Chaos Engineering: Planning, Designing, and Running Automated Chaos Experiments, November 26

Continuous Deployment to Kubernetes, November 26-27

Istio on Kubernetes: Enter the Service Mesh, November 27

Red Hat Certified System Administrator (RHCSA) Crash Course, November 27-30

Quality of Service (QoS) for Cisco Routers and Switches, November 28

Automating with Ansible, December 3

Amazon Web Services (AWS) Technical Essentials, December 4

Web programming

How the Internet Really Works, November 1

Bootstrap Responsive Design and Development, November 7-9

Building APIs with Django REST Framework, November 12

Using Redux to Manage State in Complex React Applications, November 13

Continue reading 115+ live online training courses opened for October, November, and December.

Categories: Technology

Highlights from the O'Reilly Velocity Conference in New York 2018

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Watch highlights from expert talks covering DevOps, SRE, security, machine learning, and more.

People from across the systems engineering world came together in New York for the O'Reilly Velocity Conference. Below you'll find links to highlights from the event.

Continuous disintegration

Anil Dash asks: How could our processes and tools be designed to undo the biggest bugs and biases of today’s tech?

Securing the edge: Understanding and managing security events

Laurent Gil shares the latest cybersecurity research findings based on real-world security operations.

The programmer's mind

Jessica McKellar draws parallels between the free and open source software movement and the work to end mass incarceration.

O’Reilly Radar: Systems engineering tool trends

Roger Magoulas shares insights from O'Reilly's online learning platform that point toward shifts in the systems engineering ecosystem.

Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path

Kris Beevers examines the trade-offs between risk and velocity faced by any high-growth, critical path technology business.

ML on code: Machine learning will change programming

Francesc Campoy Flores explores ways machine learning can help developers be more efficient.

How do DevOps and SRE relate? Hint: They're best friends

Dave Rensin explains why DevOps and SRE make each other better.

Practical performance theory

Kavya Joshi says performance theory offers a rigorous and practical approach to performance tuning and capacity planning.

Chaos Day: When reliability reigns

Tammy Butow explains how companies can use Chaos Days to focus on controlled chaos engineering.

Critical path-driven development

Jaana Dogan explains why Google teaches its tracing tools to new employees and how it helps them learn about Google-scale systems end to end.

Why marketing matters

Michael Bernstein offers an unflinching look at some of the fallacies that developers believe about marketing.

Practical ethics

Laura Thomson shares Mozilla’s approach to data ethics, review, and stewardship.

Continue reading Highlights from the O'Reilly Velocity Conference in New York 2018.

Categories: Technology

Continuous disintegration

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Anil Dash asks: How could our processes and tools be designed to undo the biggest bugs and biases of today’s tech?

Continue reading Continuous disintegration.

Categories: Technology

Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Kris Beevers examines the trade-offs between risk and velocity faced by any high-growth, critical path technology business.

Continue reading Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path.

Categories: Technology

The programmer's mind

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Jessica McKellar draws parallels between the free and open source software movement and the work to end mass incarceration.

Continue reading The programmer's mind.

Categories: Technology

ML on code: Machine learning will change programming

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Francesc Campoy Flores explores ways machine learning can help developers be more efficient.

Continue reading ML on code: Machine learning will change programming.

Categories: Technology

Securing the edge: Understanding and managing security events

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Laurent Gil shares the latest cybersecurity research findings based on real-world security operations.

Continue reading Securing the edge: Understanding and managing security events.

Categories: Technology

How do DevOps and SRE relate? Hint: They're best friends

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Dave Rensin explains why DevOps and SRE make each other better.

Continue reading How do DevOps and SRE relate? Hint: They're best friends.

Categories: Technology

Practical performance theory

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Kavya Joshi says performance theory offers a rigorous and practical approach to performance tuning and capacity planning.

Continue reading Practical performance theory.

Categories: Technology

Four short links: 2 October 2018

O'Reilly Radar - Tue, 2018/10/02 - 03:25

Apple MDM, Source Explorer, Verification-Aware Programming, and Superstar Economics

  1. MicroMDM -- open source mobile device management system (IT department lingo for "rootkit") for Apple devices.
  2. Sourcegraph Open Sourced -- Code search and intelligence, self-hosted and scalable.
  3. Dafny -- a verification-aware programming language. Verification (proving software correct) is a critical research area for the future of software, imho.
  4. The Economics of Superstars -- The key difference between this technology and public goods is that property rights are legally assigned to the seller: there are no issues of free riding due to nonexclusion; customers are excluded if they are unwilling to pay the appropriate admission fee. The implied scale economy of joint consumption allows relatively few sellers to service the entire market. And fewer are needed to serve it the more capable they are. When the joint consumption technology and imperfect substitution features of preferences are combined, the possibility for talented persons to command both very large markets and very large incomes is apparent. (via Hacker News)

Continue reading Four short links: 2 October 2018.

Categories: Technology

Four short links: 1 October 2018

O'Reilly Radar - Mon, 2018/10/01 - 04:25

DARPA History, Probabilistic Programming, Superstar Macroeconomics, and Interactive Narrative

  1. 60 Years of Challenges and Breakthroughs (DARPA) -- a short interesting history video about the internet, TCP/IP, Licklider, and more.
  2. Introduction to Probabilistic Programming -- a first-year graduate-level introduction to probabilistic programming. It not only provides a thorough background for anyone wishing to use a probabilistic programming system, but also introduces the techniques needed to design and build these systems. It is aimed at people who have an undergraduate-level understanding of either or, ideally, both probabilistic machine learning and programming languages. Probabilistic methods are a way of automating inference, and of use as we try to make software smarter.
  3. The Macroeconomics of Superstars (PDF download) -- We describe superstars as arising from digital innovations, which replace a fraction of the tasks in production with information technology that requires a fixed cost but can be reproduced at zero marginal cost. This generates a form of increasing returns to scale. To the extent that the digital innovations are excludable, it also provides the innovator with market power. Our paper studies the implications of superstar technologies for factor shares, for inequality, and for the efficiency properties of the superstar economy. (via Hacker News)
  4. Inform: Past, Present, Future (Emily Short) -- Graham Nelson's talk about how Inform came to be what it is, and where it's going. Inform is the amazing compiler that lets you write Infocom adventures...but is so much more than that. Anyone interested in programming language design, literate programming, or AR/VR interactive fiction should read this.

Continue reading Four short links: 1 October 2018.

Categories: Technology

Y2K and other disappointing disasters

O'Reilly Radar - Fri, 2018/09/28 - 04:10

How risk reduction makes sure bad things happen as rarely as possible.

Continue reading Y2K and other disappointing disasters.

Categories: Technology

Four short links: 28 September 2018

O'Reilly Radar - Fri, 2018/09/28 - 04:00

Observing Kubernetes, Ada Lovelace, Screen Time, and 6502 C

  1. kubespy -- Tools for observing Kubernetes resources in real time.
  2. Ada Lovelace's Note G -- a very readable explanation of what she did and why it's notable and remarkable, complete with loops and versions of her program in C and Pascal. (via Chris Palmer)
  3. Limiting Children’s Screen Time to Less Than Two Hours a Day Linked to Better Cognition (Neuroscience News) -- a summary of a paper in The Lancet, the leading British medical journal. Taken individually, limited screen time and improved sleep were associated with the strongest links to improved cognition, while physical activity may be more important for physical health. However, only one in 20 U.S. children aged between 8-11 years meet the three recommendations advised by the Canadian 24-hour Movement Guidelines to ensure good cognitive development—9-11 hours of sleep, less than two hours of recreational screen time, and at least an hour of physical activity every day.
  4. cc65 -- a complete cross development package for 65(C)02 systems, including a powerful macro assembler, a C compiler, linker, librarian, and several other tools. cc65 has C and runtime library support for many of the old 6502 machines. That's right, you can print "Hello, World" on your C64 (and Atari 2600 and Apple ][+ and NES and ...).

Continue reading Four short links: 28 September 2018.

Categories: Technology

Why it’s hard to design fair machine learning models

O'Reilly Radar - Thu, 2018/09/27 - 04:50

The O’Reilly Data Show Podcast: Sharad Goel and Sam Corbett-Davies on the limitations of popular mathematical formalizations of fairness.

In this episode of the Data Show, I spoke with Sharad Goel, assistant professor at Stanford, and his student Sam Corbett-Davies. They recently wrote a survey paper, “A Critical Review of Fair Machine Learning,” where they carefully examined the standard statistical tools used to check for fairness in machine learning models. It turns out that each of the standard approaches (anti-classification, classification parity, and calibration) has limitations, and their paper is a must-read tour through recent research in designing fair algorithms. We talked about their key findings, and, most importantly, I pressed them to list a few best practices that analysts and industrial data scientists might want to consider.
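
For readers new to the terms, one simple variant of classification parity asks whether the rate of positive decisions is the same across groups. The sketch below is a hypothetical illustration on synthetic data, not code from the paper or the episode:

    # Illustrative only: check one crude fairness notion (equal positive-decision
    # rates across groups) on synthetic data. Group labels, scores, and the 0.5
    # threshold are all made up for this example.
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.choice(["a", "b"], size=1000)   # hypothetical protected attribute
    score = rng.uniform(0.0, 1.0, size=1000)    # synthetic model scores
    decision = score > 0.5                      # thresholded decisions

    for g in ("a", "b"):
        rate = decision[group == g].mean()
        print(f"group {g}: positive-decision rate = {rate:.3f}")
    # A large gap between groups would violate this parity criterion; the episode
    # discusses why such criteria, applied on their own, have real limitations.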

Continue reading Why it’s hard to design fair machine learning models.

Categories: Technology

Four short links: 27 September 2018

O'Reilly Radar - Thu, 2018/09/27 - 04:00

Calendar Fallacies, Data Lineage, Firefox Monitor, and Glitch Handbook

  1. Your Calendrical Fallacy is... -- odds are high that if a programmer is sobbing into their keyboard, it's because of these pesky realities.
  2. Smoke: Fine-Grained Lineage at Interactive Speed -- lineage queries over the workflow: backward queries return the subset of input records that contributed to a given subset of output records while forward queries return the subset of output records that depend on a given subset of input records. (via Morning Paper)
  3. Introducing Firefox Monitor -- proactive alerting of your presence on HaveIBeenPwned. Introduced here.
  4. Glitch Employee Handbook -- fascinating to see how openly they operate. (via their very nicely done "come work for us" site)

Continue reading Four short links: 27 September 2018.

Categories: Technology

The future of work goes beyond the predictions of the World Economic Forum

O'Reilly Radar - Thu, 2018/09/27 - 03:00

The World Economic Forum’s 2018 jobs report limits research to a narrow range of the workforce.

The World Economic Forum (WEF) has released a 147-page report on the future of jobs. With this document, one of the leading think tanks has weighed in on the great debate of our time: Will machines take over our work? And more broadly, how can society make sure that automation benefits more than a handful of managers and tech experts?

In this article, while summarizing the WEF's key recommendation, I'll point out its limitations, springing from the constraints that the authors place on themselves. This will lead to a list of topics worth further investigation: the prospects for the emergence of new industries, the importance of the non-profit sector, and impacts on the informal economy. I'll also touch on geographic impacts and the willingness of companies to tolerate the innovative employees who supposedly will drive change.

Message to employers: Train your staff

The WEF firmly endorses the optimistic side of the current debate: they predict that the massive job loss caused by automation and big data will be offset by even more jobs created in new, creative, intellectually demanding fields. This is encouraging (although I was disappointed at first, hoping to live off a monthly bank deposit and devote myself to the piano and poetry), but WEF also highlights the resulting "reskilling imperative": the importance of retraining workers and training young people to be agile. Table 3 on page 9 of the report divides roles into those that are stable, new, and redundant ("pick up your check on the way out"). Most people today can find their occupations in the redundant column.

Are corporations up to the task? The WEF points out that they currently offer training opportunities to their best-trained and most privileged staff (page 14), and asks the corporations to offer training to the people who need it most. I worry that managers won't do this. We can already see the mad scramble for scarce staff in fields as different as statistics and nursing. Everyone loses in such a situation: even the highly sought-after employees experience the stress of overwork.

The report doesn't address society's role in education. Many countries have excellent educational systems, and some (especially in Scandinavia) are noted for retraining adults. The kinds of jobs being created in the automated economy call for lots of training—how long does it take to become a data scientist or robotics engineer?—but companies expect only 10% of their employees to require more than one year of reskilling (page 13). Apparently, society will have to pick up the task. I will return to the question of education later in this article.

Where the WEF stopped short

It's important to see the constraints that WEF placed on themselves when producing this report. They just asked the executives of current companies in 12 categories—mostly big companies—to talk about the jobs they expect to hire or train for. This is why the report failed to address public education. The direction chosen also made it impossible to consider new companies that will arise. Finally, the WEF didn't try to inquire about new job opportunities that might arise totally outside the current corporate structure: nonprofits and the notorious "informal sector." I think this is where people without corporate prospects will gravitate, and society needs to support that.

Thus, the number of new jobs required in an automation economy could be far greater than even the optimistic numbers predicted by the WEF. The following sections of the article look at each area.

We must remember that the economy can be disrupted by unexpected events such as war, along with the completely expected blows of climate change. As I write this, Moody's Analytics predicts at least $17 billion of damage from our current disaster, Hurricane Florence, and costs may rise as high as $50 billion. With current catastrophes ranging from drought in Europe to mudslides in the Philippines, we have to worry that our economy is in for serious trouble, along with all humans and other forms of life. But in this article I will do what most economists do, and just project current trends forward.

New companies

Areas that barely existed a decade ago, such as analytics for health care and 3D printing, are now teeming with promising new ventures. Some are quickly absorbed by large, established companies, while others survive by offering services to those companies. But job growth will happen in this area. Start-ups exacerbate the skills gap, though, because they lack the resources that large companies have to retrain people. Furthermore, established companies will buy or contract with the new companies to avoid spending money on reskilling their own employees. This way of sidestepping their responsibilities could reinforce the dreaded layoffs and exits from the workforce we all want to avoid.

Nonprofits and quality-of-life jobs

If automation truly relieves humans of repetitive and unpleasant work, we can expect a growth in "quality of life" occupations: wellness centers, adult education, travel, the arts, and so on. Some of these occupations are revenue-generating and can support themselves, but a lot of them—particularly education, health care, and the arts—require subsidization. Governments and companies may invest directly, as they do with primary and secondary education and with health care, or the investment may come through complex channels like donations to universities and arts centers. But however investment happens, the non-profit sector requires it. This is not addressed by the WEF.

Two factors make these issues particularly relevant to the future of work. First, the automated economy requires education and mental health services: education for the technical skills workers will need, and mental health to prepare them for delicate interactions with other people. Workers will also need to learn how to interact with robots (this is not a joke).

And these necessities raise serious problems because elites today fail to appreciate the importance of funding the non-profit sector. This skepticism is partly rational, because one can point to plenty of poorly conceived projects and job padding. But I believe that the private sector contains lots of waste as well. We must not punish the non-profit sector in general for its lapses.

Second, travel and the arts, which may be for-profit or non-profit, are important for a different reason. They represent an income transfer from the affluent to the less affluent, and many areas of the world depend on that income. A healthy economy has to take them into account, even though they are non-productive.

The WEF looked at prospects for jobs in different countries and regions of the world, but did not break down geography further. One of the pressing trends throughout the world is the concentration of jobs in a few entrepreneurial cities, with other areas emptying out or suffering "epidemics of despair." The WEF didn't offer remedies for the well-known drain of talent from rural and inner regions to coastal cities with attractive living conditions. Automation will exacerbate these effects unless society's leaders explicitly counteract them. We'll see more of this issue in the following section.

The informal sector

The term "informal" covers a huge range of employment, such as many of the house cleaners and handymen employed by readers of this article. The informal economy is already estimated to cover some 30% to 80% of all employment. There can be many ways to view the informal sector from both an ethical and a regulatory standpoint, as recognized by the WEF itself.

Aside from blatantly illegal forms of employment, such as drug dealing and the manufacture of knock-off fashion products, much of the informal economy consists of enterprises that create value and help under-served populations through innovative activities that the formal sector doesn't think of doing. But informal companies tend to be in geographic and economic areas that can't take advantage of the advanced analytics and robotics diffusing through the formal sector. If the formal sector becomes radically more efficient, it may put the informal sector out of business, and the urgency of reskilling will be even greater.

Can companies make the shift?

I've talked a lot about the barriers to reskilling, whose importance lay at the center of the WEF report. In their industry profiles section they note a great deal of worry about finding skilled personnel. But there's another looming problem: an enormous number of industries have trouble making the digital shift because they admit they don't understand the opportunities.

I also wonder whether corporations will recognize the need for the new, creative roles—and even more important, whether their need for control and old-fashioned hierarchy will allow them to nurture and make positive use of people in such roles. Seventy percent of change programs fail to achieve their goals, largely due to employee resistance and lack of management support. Naturally, the topic is highly popular in the business literature. As an article in the Harvard Business Review points out, people don't tend to resist technical changes, but do resist changes in social relationships. I take this to indicate that creating empowered and independent-minded staff is extremely threatening to management, who are likely to resist the changes needed for innovation.

So I see the future of work as uncertain. We all know where we'd like to go: a world of self-realized individuals contributing to a better life for all. The WEF report has seriously underestimated the hurdles that lie in our path. The WEF recognizes that corporations see reskilling as a burden, but the difficulty goes much farther—many managers will see it as a threat. Other forces in society may also be suspicious of empowered individuals.

Can society reap the benefits of automation? It will require more than an appeal to management to do a modest amount of reskilling. Advocates must reach out and present automation as a vision that’s more appealing to the general public than hanging on to old jobs and ways of relating. Touting the technical benefits of automation will not be enough to overcome fears, nor will it produce the kind of automation that goes beyond replacing workers. A change of such historic social impact must be presented as a social movement—and a non-partisan one. When ordinary people demand a new relation to technology, education, and the workplace, they may be able to redeem a life of meaning and productive contribution from the cocktail of analytics, devices, and big data.

Continue reading The future of work goes beyond the predictions of the World Economic Forum.

Categories: Technology

Four short links: 26 September 2018

O'Reilly Radar - Wed, 2018/09/26 - 03:50

Walmart's Blockchain, Machine Learning and Text Adventures, Algorithmic Decision-Making, and Networked Brains

  1. Walmart Requires Lettuce, Spinach Suppliers to Join Blockchain (WSJ Blog) -- built on Hyperledger, by way of IBM. I read IBM's brief but still can't figure out the benefits over, say, Walmart running their own APIed database app, but I suspect it has to do with "this way, EVERY blockchain participant has to buy a big app from IBM, instead of just Walmart buying something to run for others to contribute to." (via Dan Hon)
  2. Inform 7 and Machine Learning (Emily Short) -- TextWorld’s authors feel we’re not yet ready to train a machine agent to solve a hand-authored IF game like Zork—and they’ve documented the challenges here much more extensively than my rewording above. What they have done instead is to build a sandbox environment that does a more predictable subset of text adventure behavior. TextWorld is able to automatically generate games containing a lot of the standard puzzles.
  3. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems -- session notes from a day-long workshop the EFF ran with the Center on Race, Inequality, and the Law.
  4. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains -- Five groups of three subjects successfully used BrainNet to perform the Tetris task, with an average accuracy of 0.813. Furthermore, by varying the information reliability of the senders by artificially injecting noise into one sender's signal, we found that receivers are able to learn which sender is more reliable based solely on the information transmitted to their brains. Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a "social network" of connected brains.

Continue reading Four short links: 26 September 2018.

Categories: Technology

Four short links: 25 September 2018

O'Reilly Radar - Tue, 2018/09/25 - 04:00

Software Engineering, ML Hardware Trends, Time Series, and Eng Team Playbooks

  1. Notes to Myself on Software Engineering -- Code isn’t just meant to be executed. Code is also a means of communication across a team, a way to describe to others the solution to a problem. Readable code is not a nice-to-have; it is a fundamental part of what writing code is about. A solid list of advice/lessons learned.
  2. Machine Learning Shifts More Work To FPGAs, SoCs -- compute power used for AI/ML is doubling every 3.5 months. FPGAs and ASICs are already predicted to be 25% of the market for machine learning accelerators in 2018. Why? FPGAs and ASICs use far less power than GPUs, CPUs, or even the 75 watts per hour Google’s TPU burns under heavy load. [...] They can also deliver a performance boost in specific functions chosen by customers that can be changed along with a change in programming.
  3. Time Series Forecasting -- one of those "three surprising things" articles. The three surprising things: you need to retrain your model every time you want to generate a new prediction; sometimes you have to do away with train/test splits; and the uncertainty of the forecast is just as important as, or even more important than, the forecast itself. (A minimal walk-forward retraining sketch follows this list.)
  4. Health Monitor -- Atlassian's measures of whether your team is doing well. Their whole set of playbooks is great reading for engineering managers.
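
On the retraining point in item 3, one common pattern is walk-forward forecasting: refit on everything observed so far, predict one step ahead, then slide forward. A minimal sketch, with a moving average standing in for whatever estimator you would actually use and a synthetic series:

    # Walk-forward (rolling) forecasting sketch: the "model" is refit on all data
    # seen so far before each one-step-ahead prediction. The moving average is a
    # stand-in for a real model; the series is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(size=60))   # synthetic random-walk series

    window = 12
    errors = []
    for t in range(window, len(series)):
        history = series[:t]                  # everything observed so far
        forecast = history[-window:].mean()   # refit + one-step-ahead forecast
        errors.append(abs(forecast - series[t]))

    print(f"mean absolute error across walk-forward steps: {np.mean(errors):.3f}")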

Continue reading Four short links: 25 September 2018.

Categories: Technology

Handling real-time data operations in the enterprise

O'Reilly Radar - Mon, 2018/09/24 - 04:17

Getting DataOps right is crucial to your late-stage big data projects.

At Strata 2017, I premiered a new diagram to help teams understand why and when projects fail:

Early in a project, management and developers are responsible for its success. As the project matures, the operations team becomes jointly responsible for that success.

I've taught in situations where the operations team members complain that no one wants to do the operational side of things. They're right. Data science is the sexy thing companies want. The data engineering and operations teams don't get much love. These organizations don’t realize that data science stands on the shoulders of DataOps and data engineering giants.

What we need to do is give these roles a sexy title. Let's call these operational teams that focus on big data: DataOps teams.

What does the Ops say?

Companies need to understand there is a different level of operational requirements when you're exposing a data pipeline. A data pipeline needs love and attention. For big data, this isn't just making sure cluster processes are running. A DataOps team needs to do that and keep an eye on the data.

With big data, we're often dealing with unstructured data or data coming from unreliable sources. This means someone needs to be in charge of validating the data in some fashion. This is where organizations get into the garbage-in-garbage-out downward cycle that leads to failures. If this dirty data proliferates and propagates to other systems, we open Pandora’s box of unintended consequences. The DataOps team needs to watch out for data issues and fix them before they get copied around.
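
As a deliberately minimal illustration of the kind of check a DataOps team might automate, here is a sketch in Python; the field names and rules are hypothetical and not tied to any particular pipeline:

    # Sketch of a validation gate in a pipeline: records that fail basic checks are
    # quarantined for DataOps to inspect instead of being copied downstream.
    # Field names and rules are hypothetical.
    def validate(record):
        problems = []
        if not record.get("user_id"):
            problems.append("missing user_id")
        if not isinstance(record.get("amount"), (int, float)):
            problems.append("amount is not numeric")
        return problems

    def run_batch(records):
        clean, quarantined = [], []
        for rec in records:
            problems = validate(rec)
            (quarantined if problems else clean).append(rec)
        return clean, quarantined

    clean, quarantined = run_batch([
        {"user_id": "u1", "amount": 12.5},
        {"user_id": "", "amount": "12.5"},   # dirty record: caught before it spreads
    ])
    print(f"{len(clean)} clean, {len(quarantined)} quarantined")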

These data quality issues bring a new level of potential problems for real-time systems. Worst case, the data engineering team didn’t handle a particular issue correctly and you have a cascading failure on your hands. The DataOps team will be at the forefront of figuring out if a problem is data or code related.

Shouldn't the data engineering team be responsible for this? Data engineers are software developers at heart. I've taught many and interacted with even more. I wouldn't let 99% of the data engineers I’ve met near a production system. There are several reasons why: a lack of operational knowledge, a lack of operational mindset, and a tendency to be a bull in your production china shop. Sometimes there are also compliance issues that require a separation of concerns between development and production data. The data engineering team isn’t the right team to handle that.

That leaves us with the absolute need for a team that understands big data operations and data quality. They know how to operate the big data frameworks. They’re able to figure out the difference between a code issue and a data quality issue.

Real-time: The turbo button of big data

Now let's press the turbo button and expand this to include batch and real-time systems.

Outages and data quality issues are painful for batch systems. With batch systems, you generally aren't losing data. You're falling behind in processing or acquiring data. You'll eventually catch up and get back to your steady state of data coming in and being processed on time.

Then there's real time. An outage for real-time systems brings a new level of pain. You're dealing with the specter of permanently losing data. In fact, this pain during downtime is how I figure out if a company really, really needs real-time systems. If I tell them they’ll need a whole new level of service level agreement (SLA) for real time and they disagree, that probably means they don’t need real time. Having operational downtime for your real-time cluster should be so absolutely painful that you will have done everything in your power to prevent an outage. An outage of your real-time systems for six hours should be a five-alarm fire.
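
One way to make that pain visible before it becomes a five-alarm fire is an automated freshness check against the agreed SLA. A rough sketch; the threshold is arbitrary, and how you obtain the timestamp of the newest processed event (consumer offsets, a metrics store, and so on) depends on your stack:

    # SLA freshness check sketch: page someone when the newest processed event is
    # older than the agreed threshold. The 5-minute SLA is a placeholder.
    import time

    SLA_SECONDS = 5 * 60

    def check_freshness(last_event_time, now=None):
        lag = (now or time.time()) - last_event_time
        if lag > SLA_SECONDS:
            return f"PAGE: pipeline is {lag:.0f}s behind (SLA is {SLA_SECONDS}s)"
        return f"ok: lag is {lag:.0f}s"

    print(check_freshness(time.time() - 30))     # comfortably within SLA
    print(check_freshness(time.time() - 3600))   # an hour behind: wake someone up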

All of this SLA onus falls squarely on the DataOps team. They won’t just be responsible for fixing things when they go wrong; they’ll be an active part of the design of the system. DataOps and data engineering will be choosing technologies that are designed with the expectation of failure. The DataOps team will be making sure that data moves, preferably automatically, to disaster recovery or active-active clusters. This is how you avoid six-hour downtimes.

Busting out real-time technologies and SLA levels comes at the expense of conceptual and operational complexity. When I mentor a team on their real-time big data journey, I make sure management understands that the architects and developers aren’t the only ones who need new skills. The operations teams will need new skills, too, and will have to learn how to operate the new technologies.

There isn’t an “I” in DataOps, either

In my experience, the leap in complexity from small data to real-time big data is 15x. Once again, this underscores the need for DataOps. It will be difficult for a single person to keep up with all of the changes in both small data and big data technologies. The DataOps team will need to specialize in big data technologies and keep up with the latest issues associated with them.

As I mentored more teams on their transition to real-time systems, I saw common problems across organizations. These problems arose because the transition to real-time data pipelines brought cross-functional changes.

With a REST API, for example, the operations team can keep their finger on the button. They have fine-grained control over who accesses the REST endpoint, how, and why. This becomes more difficult with a real-time data pipeline. The DataOps team will need to be monitoring the real-time data pipeline usage. First and foremost, they’ll need to make sure all data is encrypted and that access requires a login.

A final important facet of DataOps is dealing with data format changes. With real-time systems, there will be changes to the data format. This will be a time when the data engineering and DataOps teams need to work together. The data engineering team will deal with the development and schema sides of the problem. The DataOps team will need to deal with production issues arising from these changes and triage processing that fails due to a format change.
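
A common tactic here, sketched below with hypothetical parsers and field names, is to try the new format first, fall back to the old one, and divert anything that fits neither to a dead-letter store so the pipeline keeps flowing while the two teams triage:

    # Sketch of surviving a data format change: attempt the new parser, fall back to
    # the old one, and send unparseable records to a dead-letter list for triage.
    # Both formats and all field names are hypothetical.
    import json

    def parse_v2(raw):
        msg = json.loads(raw)
        return {"user": msg["user"]["id"], "amount": float(msg["amount"])}

    def parse_v1(raw):
        msg = json.loads(raw)
        return {"user": msg["user_id"], "amount": float(msg["amount"])}

    def handle(raw, dead_letters):
        for parser in (parse_v2, parse_v1):
            try:
                return parser(raw)
            except (KeyError, TypeError, ValueError):
                continue
        dead_letters.append(raw)   # left for DataOps and data engineering to triage
        return None

    dead = []
    print(handle('{"user": {"id": "u1"}, "amount": "3.50"}', dead))   # new format
    print(handle('{"user_id": "u2", "amount": "7"}', dead))           # old format
    print(handle('not json at all', dead), f"dead letters: {len(dead)}")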

If you still aren’t convinced, let me give it one last shot

Getting DataOps right is crucial to your late-stage big data projects. This is the team that keeps your frameworks running and your data quality high. DataOps adds to the virtuous upward cycle of good data. As you begin a real-time or batch journey, make sure your operations team is ready for the challenges that lie ahead.

This post is part of a collaboration between O'Reilly and Mesosphere. See our statement of editorial independence.

Continue reading Handling real-time data operations in the enterprise.

Categories: Technology

Four short links: 24 September 2018

O'Reilly Radar - Mon, 2018/09/24 - 04:15

Continuous Delivery, Turing Complete Powerpoint, ARPA-E, and Observability

  1. Drone -- a continuous delivery platform written in Go and built on container technology. Drone uses a simple YAML configuration file, a superset of docker-compose, to define and execute pipelines inside Docker containers.
  2. On the Turing Completeness of Powerpoint (YouTube) -- Video highlighting my research on PowerPoint Turing Machines for CMU's SIGBOVIK 2017. (via Andy Baio)
  3. ARPA-E: Successful, and Struggling -- In Cory Doctorow's words, ARPA-E is a skunkworks project that gives out grants for advanced sustainable energy research that's beyond the initial phases but still too nascent to be commercialized. They've focused on long-term energy storage (a key piece of the picture with renewables) and the portfolio of inventions that have emerged from their funding is mind-bogglingly cool. Reminds me of Doing Capitalism in the Innovation Economy, by Bill Janeway, who argues that the state funds early research until VCs have commercialization opportunities (this explains why VCs are heavy in biotech and internet...they've been foci of state-funded research for decades). Such a good book, by the way.
  4. Structured Logs vs. Events (Twitter) -- Charity Majors drops some great clue bombs about observability. The most effective way to structure your instrumentation, so you get the maximum bang for your buck, is to emit a single arbitrarily wide event per request per service hop. We're talking wide. We usually see 200-500 dimensions in a mature app. But just one write. [...] All of it. In one fat structured blob. Not sprinkled around your code in functions like satanic fairy dust. You will crush your logging system that way, and you'd need to do exhaustive post-processing to recreate the shared context by joining on request-id (if you're lucky).
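
A rough sketch of the "one wide event per request" idea from item 4; the field names are invented, a real service would record hundreds of dimensions, and emit() would write to your telemetry pipeline rather than stdout:

    # One arbitrarily wide event per request: accumulate every dimension you might
    # later want to query into a single dict, then emit it exactly once.
    import json, time, uuid

    def emit(event):
        print(json.dumps(event))      # stand-in for your real log/telemetry sink

    def handle_request(user_id, endpoint):
        event = {
            "request_id": str(uuid.uuid4()),
            "service": "checkout",    # hypothetical service name
            "endpoint": endpoint,
            "user_id": user_id,
            "start_ts": time.time(),
        }
        try:
            # ... real request handling; keep annotating the same event as you go ...
            event["cart_items"] = 3
            event["payment_provider"] = "example"
            event["status_code"] = 200
        except Exception as exc:
            event["status_code"] = 500
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_ms"] = round((time.time() - event["start_ts"]) * 1000, 2)
            emit(event)               # one fat structured blob, one write

    handle_request("u-42", "/cart/checkout")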

Continue reading Four short links: 24 September 2018.

Categories: Technology
