Feed aggregator

Chaos Day: When reliability reigns

O'Reilly Radar - Wed, 2018/10/03 - 13:00

Tammy Butow explains how companies can use Chaos Days to focus on controlled chaos engineering.

Categories: Technology

Critical path-driven development

O'Reilly Radar - Wed, 2018/10/03 - 13:00

Jaana Dogan explains why Google teaches its tracing tools to new employees and how it helps them learn about Google-scale systems end to end.

Categories: Technology

Why marketing matters

O'Reilly Radar - Wed, 2018/10/03 - 13:00

Michael Bernstein offers an unflinching look at some of the fallacies that developers believe about marketing.

Categories: Technology

Best design practices to get the most out of your API

O'Reilly Radar - Wed, 2018/10/03 - 04:05

Practical techniques to ensure developers can actually do the things you want them to do using your API.

In the previous chapters, we gave an overview of various approaches for transmitting data via your web API. Now that you're familiar with the landscape of transport and have an understanding of how to choose between various patterns and frameworks, we want to provide some tactical best practices to help your developers get the most out of your API.

Designing for Real-Life Use Cases

When designing an API, it’s best to make decisions that are grounded in specific, real-life use cases. Let’s dig into this idea a bit more. Think about the developers who are using your API. What tasks should they be able to complete with your API? What types of apps should developers be able to build? For some companies, this is as targeted as “developers should be able to charge customer credit cards.” For other companies, the answer can be more open-ended: “developers should be able to create a full suite of interactive consumer-quality applications.”

After you have your use cases defined, make sure that developers can actually do the things you want them to do using your API.

Quite often, APIs are designed around the internal architecture of the application, leaking details of the implementation. This leads to confusion for third-party developers and a bad developer experience. That’s why it’s so important to focus not on exposing your company’s internal infrastructure but on the experience that an outside developer should have when interacting with your API. For a concrete example of how to define key use cases, see the section “Outline Key Use Cases.”

When you get started with a design, it’s easy to imagine many “what-ifs” before implementation and testing. Although these questions are useful during the brainstorming phase, they can lead a design astray by tempting you to try to solve too many problems at once. By picking a specific workflow or use case, you will be able to focus on one design and then test whether it works for your users.

Expert Advice

When we asked Ido Green, developer advocate at Google, what makes an API good, his top answer was focus:

“The API should enable developers to do one thing really well. It’s not as easy as it sounds, and you want to be clear on what the API is not going to do as well.”

Designing for a Great Developer Experience

Just as we spend time thinking about the user experience delivered through a user interface, it's important to think about the developer experience delivered through an API. Developers have a low bar for abandoning APIs, so bad experiences result in attrition. By the same token, usability is the bare minimum for keeping a developer using your API. Good experiences earn love from developers: they will, in turn, become the most creative innovators using your API as well as evangelists for it.

Make It Fast and Easy to Get Started

It’s important for developers to be able to understand your API and to get up and running quickly. Developers may be using your API to avoid having to build out a secondary product suite to support their main product. Don’t make them regret that decision with an API that's opaque and difficult to use.

Expert Advice

No matter how carefully we design and build our core API, developers continue to create products we’d never expect. We give them the freedom to build what they like.

Designing an API is much like designing a transportation network. Rather than prescribing an end state or destination, a good API expands the very notion of what’s possible for developers.

—Romain Huet, head of developer relations at Stripe

Documentation can go a long way toward helping developers get started. In addition to documents that outline the specifications of an API, it can be helpful to have tutorials or Getting Started guides. A tutorial is an interactive interface to teach developers about your API. You might have developers answer questions or fill in “code” in an input area. A guide is a more contextual document than a specification. It provides information for developers at a certain point in time—typically when getting started, but sometimes when updating or converting from one version or feature to another.

In some cases, you can supplement the ease of use by providing interactive documentation online, where developers have a sandbox to test out your API. Oftentimes, developers can use these interfaces to test code and preview results without having to implement authentication.

Figure 1-1. Developers can try the Stripe API without signing up

In addition to interactive documentation, tools such as software development kits (SDKs) can go a long way toward helping developers use your API. These code packages are designed to help developers get up and running quickly with their projects by simplifying some of the transactional layers and setup of an application.

For an ideal experience, developers should be able to try out your APIs without logging in or signing up. If you cannot avoid that, you should provide a simple signup or application creation flow that captures the minimum required information. If your API is protected by OAuth, you should provide a way for developers to generate access tokens in the UI. Implementing OAuth is cumbersome for developers, and in the absence of easy ways to generate these tokens, you will see a significant drop-off rate at this point.

Work Toward Consistency

You want your API to be intuitively consistent. That should be reflected in your endpoint names, input parameters, and output responses. Developers should be able to guess parts of your API even without reading the documentation. Unless you are making a significant version bump or large release, it's best to work toward consistency when designing new aspects of an existing API.

For example, you might have previously named a group of resources “users” and named your API endpoints accordingly, but you now realize that it makes more sense to call them “members.” It can be very tempting to work toward the “correctness” of the new world rather than focus on consistency with the old. But if the objects are the same, it could be very confusing to developers to sometimes refer to them as “users” and other times as “members” in URI components, request parameters, and elsewhere. For the majority of incremental changes, consistency with the existing design patterns will work best for your users.

As another example, if in some places you have a response field called user and sometimes its type is an integer ID but sometimes its type is an object, each and every developer receiving those two response payloads needs to check whether user is an int ID or an object. This logic leads to code bloat in developers’ code bases, which is a suboptimal experience.
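
The cost of that inconsistency shows up directly in client code. Here is a minimal sketch (in Python, with a hypothetical response payload) of the defensive shim every consumer of such an API ends up writing:

```python
def normalize_user(payload):
    """Return the user ID whether the API sent a bare int ID or a full object.

    Every consumer of an inconsistent API carries a shim like this;
    a consistent API makes it unnecessary.
    """
    user = payload["user"]
    if isinstance(user, int):       # some endpoints: bare integer ID
        return user
    if isinstance(user, dict):      # other endpoints: expanded object
        return user["id"]
    raise TypeError(f"unexpected type for 'user': {type(user).__name__}")

# Both payload shapes must be handled everywhere the field appears:
print(normalize_user({"user": 42}))                        # 42
print(normalize_user({"user": {"id": 42, "name": "ada"}})) # 42
```

Multiply this shim by every field with an inconsistent type and every developer in your ecosystem, and the aggregate cost of the inconsistency becomes clear.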

This can show up in your own code as well. If you have SDKs that you’re maintaining, you will need to add more and more logic to handle these inconsistencies and to make a seamless interface for developers. You might as well do this at the API level by maintaining consistency instead of introducing new names for the same things.

Consistency generally means that there are a number of patterns and conventions repeated throughout your API, in such a way that developers can begin to predict how to use your API without seeing the documentation. That could include anything from data access patterns to error handling to naming. The reason consistency is important is that it reduces the cognitive load on developers who are trying to figure out your API. Consistency helps your existing developers adopt new features by reducing forks in their code, and it helps new developers hit the ground running with everything you’ve built on your API. In contrast, with less consistency, different developers will need to reimplement the same logic over and over again.

Make Troubleshooting Easy

Another best practice for designing APIs is making troubleshooting easy for developers. This can be done through returning meaningful errors as well as by building tooling.

Meaningful errors

What’s in an error? An error can occur in many places along your code path, from an authorization error during an API request, to a business logic error when a particular entity doesn’t exist, to a lower-level database connection error. When designing an API, it is helpful to make troubleshooting as easy as possible by systematically organizing and categorizing errors and how they are returned. Incorrect or unclear errors are frustrating and can negatively affect adoption of your APIs. Developers can get stuck and just give up.

Meaningful errors are easy to understand, unambiguous, and actionable. They help developers to understand the problem and to address it. Providing these errors with details leads to a better developer experience. Error codes that are machine-readable strings allow developers to programmatically handle errors in their code bases.

In addition to these strings, it is useful to add longer-form errors, either in the documentation or somewhere else in the payload. These are sometimes referred to as human-readable errors. Even better, personalize these errors per developer. For instance, with the Stripe API, when you use a test key in your live mode, it returns an error such as:

No such token tok_test_60neARX2. A similar object exists in test mode, but a live mode key was used to make this request.

Table 1-1. Example error codes for different situations

  Situation                                                  Recommended              Not recommended
  Authentication failed because token is revoked             token_revoked            invalid_auth
  Value passed for name exceeded max length                  name_too_long            invalid_name
  Credit card has expired                                    expired_card             invalid_card
  Cannot refund because a charge has already been refunded   charge_already_refunded  cannot_refund
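
To see why machine-readable codes like these matter, consider what a client can do with them. The payload shape below is hypothetical, not any specific provider's format:

```python
# Hypothetical error payload carrying a machine-readable code plus a
# human-readable message.
error_response = {
    "error": {
        "code": "expired_card",
        "message": "The card on file has expired. Ask the customer for new card details.",
    }
}

def handle_error(response):
    """Branch on the stable machine-readable code, never on the message text."""
    code = response["error"]["code"]
    if code == "expired_card":
        return "prompt_for_new_card"      # actionable: ask the user to re-enter
    if code == "charge_already_refunded":
        return "show_refund_receipt"      # actionable: nothing left to do
    return "show_generic_failure"         # fall back on the human-readable message

print(handle_error(error_response))  # prompt_for_new_card
```

With a vague code like invalid_card, none of these branches would be possible; the client could only show a generic failure.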

To begin designing your system of errors, you might map out your backend architecture along the code path of an API request. The goal of this is not to expose your backend architecture but to categorize the errors that happen and to identify which ones to expose to developers. From the moment an API request is made, what are the critical actions that are taken to fulfill the request? Map out the various high-level categories of errors that occur during the course of an API request, from the beginning of the request to any service boundaries within your architecture.

Table 1-2. Group errors into high-level categories

  System-level error
    Database connection issue
    Backend service connection issue
    Fatal error

  Business logic error
    Rate-limited
    Request fulfilled, but no results were found
    Business-related reason to deny access to information

  API request formatting error
    Required request parameters are missing
    Combined request parameters are invalid together

  Authorization error
    OAuth credentials are invalid for request
    Token has expired

After grouping your error categories throughout your code path, think about what level of communication is meaningful for these errors. Some options include HTTP status codes and headers, as well as machine-readable “codes” or more verbose human-readable error messages returned in the response payload. Keep in mind that you’ll want to return an error response in a format consistent with your non-error responses. For example, if you return a JSON response on a successful request, you should ensure that the error is returned in the same format.

You might also want a mechanism to bubble up errors from a service boundary into a consistent format in your API output. For example, a service you depend on might produce a variety of connection errors. Rather than expose each one directly, you would let the developer know that something went wrong and that they should try again.

In most cases, you want to be as specific as possible to help your developers take the correct next course of action. Other times, however, you might want to obscure the original issue by returning something more generic. This might be for security reasons. For example, you probably don’t want to bubble up your database errors to the outside world and reveal too much information about your database connections.

Table 1-3. Organize your errors into status codes, headers, machine-readable codes, and human-readable strings

  System-level error
    HTTP status: 500

  Business logic error
    HTTP status: 429
    HTTP headers: Retry-After
    Error code (machine-readable): rate_limit_exceeded
    Error message (human-readable): “You have been rate-limited. See Retry-After and try again.”

  API request formatting error
    HTTP status: 400
    Error code (machine-readable): missing_required_parameter
    Error message (human-readable): “Your request was missing a {user} parameter.”

  Auth error
    HTTP status: 401
    Error code (machine-readable): invalid_request
    Error message (human-readable): “Your ClientId is invalid.”
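
One way to keep such a mapping consistent in server code is a small exception hierarchy that carries the status, machine-readable code, and message together, so every error path produces the same payload shape. A minimal sketch, with all names invented for illustration:

```python
class APIError(Exception):
    """Base class: subclasses pin down status, code, and any headers."""
    status = 500
    code = "internal_error"
    headers = {}

    def __init__(self, message="An unexpected error occurred."):
        super().__init__(message)
        self.message = message

    def to_response(self):
        # Error body uses the same JSON shape as successful responses.
        body = {"error": {"code": self.code, "message": self.message}}
        return self.status, dict(self.headers), body

class RateLimitError(APIError):
    status = 429
    code = "rate_limit_exceeded"
    headers = {"Retry-After": "30"}

class MissingParameterError(APIError):
    status = 400
    code = "missing_required_parameter"

status, headers, body = RateLimitError(
    "You have been rate-limited. See Retry-After and try again."
).to_response()
print(status, body["error"]["code"])  # 429 rate_limit_exceeded
```

A top-level handler can then catch APIError once and serialize any subclass uniformly, instead of formatting errors ad hoc at each call site.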

As you begin to organize your errors, you might recognize patterns around which you can create some automatic messaging. For example, you might define the schema for your API to require specific parameters and to have a library that automatically checks for these at the beginning of the request. This same library could format the verbose error in the response payload.
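
Such a library can be as small as a decorator that checks declared parameters up front and auto-formats the verbose error. This is a sketch; the handler signature and message template are made up for illustration:

```python
def require_params(*names):
    """Reject a request before the handler runs if a declared parameter
    is missing, and format both the machine- and human-readable error."""
    def decorator(handler):
        def wrapper(params):
            for name in names:
                if name not in params:
                    return 400, {"error": {
                        "code": "missing_required_parameter",
                        "message": f"Your request was missing a {{{name}}} parameter.",
                    }}
            return handler(params)
        return wrapper
    return decorator

@require_params("user")
def get_profile(params):
    return 200, {"profile": {"user": params["user"]}}

print(get_profile({}))           # 400 with code missing_required_parameter
print(get_profile({"user": 1}))  # 200 with the profile body
```

Because every endpoint reuses the same decorator, the wording and shape of this error stay consistent across the whole API for free.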

You’ll want to create a way to document these errors publicly on the web. You can build this into your API description language or documentation mechanism. Think about the various layers of errors before writing the documents, because it can become complicated to describe multiple factors if there are many different types of errors. You might also want to consider using verbose response payloads to link to your public documentation. This is where you’ll give developers more information on the error they received as well as how to recover from it.

For even more structured and detailed recommendations on meaningful errors and problem details for HTTP APIs, see RFC 7807.

Build tooling

In addition to making troubleshooting easy for developers, you should make it easy for yourself by building internal and external tools.

Logging of HTTP statuses, errors and their frequencies, and other request metadata is valuable to have, for both internal and external use, when it comes to troubleshooting developer issues. There are many off-the-shelf logging solutions available. However, when implementing one, be sure to respect customer privacy by redacting any personally identifiable information (PII) before you troubleshoot real-time traffic.
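
A logging filter is one place to enforce that redaction before anything is written. The sketch below uses Python's standard logging module; the two regexes cover only email addresses and simple card-like digit runs, so a real deployment would need a vetted PII scrubber:

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARDISH = re.compile(r"\b\d{13,16}\b")  # naive: bare runs of 13-16 digits

class RedactPII(logging.Filter):
    """Rewrite each record's message so PII never reaches the log sink."""
    def filter(self, record):
        msg = record.getMessage()               # merge any % args first
        msg = EMAIL.sub("[redacted-email]", msg)
        msg = CARDISH.sub("[redacted-number]", msg)
        record.msg, record.args = msg, None
        return True                             # keep the (scrubbed) record

logger = logging.getLogger("api.requests")
handler = logging.StreamHandler()
handler.addFilter(RedactPII())
logger.addHandler(handler)
logger.warning("402 from /charges for jane@example.com card 4242424242424242")
# logs: 402 from /charges for [redacted-email] card [redacted-number]
```

Attaching the filter to the handler, rather than sprinkling redaction calls through request code, guarantees every log line passes through the same scrubber.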

Figure 1-2. The Stripe API dashboard with request logs

Besides logging, when building an API, it’s helpful to create dashboards to help developers analyze aggregate metadata on API requests. For example, you could use an analytics platform to rank the most-used API endpoints, identify unused API parameters, triage common errors, and define success metrics.

As with logging, many analytics platforms are available off the shelf. You can present the information in high-level dashboards that provide a visual display in a time-based manner. For example, you might want to show the number of errors per hour over the past week. Additionally, you might want to provide developers complete request logs with details about the original request, whether it succeeded or failed, and the response returned.

Make Your API Extensible

No matter how well you’ve designed your API, there will always be a need for change and growth as your product evolves and developer adoption increases. This means that you need to make your API extensible by creating a strategy for evolving it. This enables you as the API provider and your developer ecosystem to innovate. Additionally, it can provide a mechanism to deal with breaking changes. Let’s dive into the idea of extensibility and explore how to incorporate early feedback, versioning an API, and maintaining backward compatibility.

Expert Advice

APIs should provide primitives that can enable new workflows and not simply mirror the workflows of your application. The creation of an API acts as a gate for what the API’s users can do. If you provide too low-level access, you could end up with a confusing integration experience and you push too much work on the integrators. If you provide too high-level access, you could end up with most integrations simply mirroring what your own application does. You need to find the right balance to enable workflows you hadn’t considered either as part of your application or within the API itself in order to enable innovation. Consider what your own engineers would want in an API to build the next interesting feature for your application and then make that a part of your public API.

—Kyle Daigle, director of ecosystem engineering at GitHub

One aspect of extensibility is ensuring that you have created an opportunity for feedback with your top partners. You need a way to release certain features or fields and to give certain privileged developers the option to test these changes without releasing the changes to the public. Some would call this a “beta” or “early adopter” program. This feedback is extremely valuable in helping you decide whether your API has been designed in a way that achieves its goals. It gives you a chance to make changes before adoption has become prevalent and before significant changes require a lot of communication or operational overhead.

In some cases, you might want to version your API. Building a versioning system is easier if it’s baked into the design at an early stage; the longer you wait to implement versioning, the more complicated it becomes to execute, because it grows ever harder to update your code base’s dependency patterns so that old versions maintain backward compatibility. The benefit of versioning is that it allows you to make breaking changes in new versions while maintaining backward compatibility for old ones. A breaking change is any change that would stop an existing app built on your API from functioning as it did before.
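
One common mechanism is to pin each request to a version and dispatch response shapes accordingly, for example via a request header. A minimal sketch, with the version names, the API-Version header, and both response shapes invented for illustration:

```python
SUPPORTED_VERSIONS = ["2017-08-15", "2018-02-28"]  # hypothetical date-based versions
DEFAULT_VERSION = SUPPORTED_VERSIONS[-1]

def resolve_version(headers):
    """Pick the API version: an explicit header wins, otherwise the latest."""
    requested = headers.get("API-Version")
    if requested is None:
        return DEFAULT_VERSION
    if requested not in SUPPORTED_VERSIONS:
        raise ValueError(f"unknown_api_version: {requested}")
    return requested

def render_user(user, version):
    # The breaking change lives behind the version switch: the old
    # version returned a bare ID, the new one returns an object.
    if version == "2017-08-15":
        return {"user": user["id"]}
    return {"user": {"id": user["id"], "name": user["name"]}}

user = {"id": 42, "name": "Grace"}
print(render_user(user, resolve_version({})))                             # new shape
print(render_user(user, resolve_version({"API-Version": "2017-08-15"})))  # old shape
```

Because old apps never send the new version string, they keep receiving the response shape they were built against, while new integrations get the current one.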

Storytime: Slack's Translation Layer

In 2017 Slack launched its Enterprise Grid product, which was a federated model of its previous offering. As a result of this federation, Slack had to fundamentally change its user data model so that users could belong to multiple “workspaces.”

In the API, users previously had only a single ID. However, in the new federated model, each user had a main (global) user ID for the Enterprise Grid and a local user ID for each workspace. When existing teams migrated to the Enterprise Grid product, their user IDs were slated to change. This would have broken any third-party apps relying on a fixed user ID in the API.

When Slack's engineering team realized this problem, it went back to the drawing board to figure out what could be done to maintain backward compatibility for third-party developers. That’s when the team decided to create a translation layer. This additional infrastructure would silently translate user IDs to be consistent with the ones that developers had previously received.

Although the decision to build this translation layer delayed the Enterprise Grid product launch by several months, it was mission-critical for Slack to ensure that its API remained backward compatible.
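
The chapter doesn't describe Slack's implementation, but the general shape of such a shim is easy to sketch: an ID-mapping lookup applied at the API boundary, so existing consumers keep seeing the IDs they stored. All names and the mapping below are invented for illustration:

```python
# Hypothetical mapping from (workspace, new global user ID) to the legacy
# per-workspace ID that third-party apps stored before the migration.
LEGACY_IDS = {("W1", "U-GLOBAL-9"): "U-OLD-3"}

def translate_outbound(workspace_id, payload):
    """Rewrite new global IDs back to the legacy IDs a workspace's apps expect."""
    out = dict(payload)
    legacy = LEGACY_IDS.get((workspace_id, payload["user_id"]))
    if legacy is not None:
        out["user_id"] = legacy  # silently preserve backward compatibility
    return out

print(translate_outbound("W1", {"user_id": "U-GLOBAL-9", "text": "hi"}))
# {'user_id': 'U-OLD-3', 'text': 'hi'}
```

The same table would be consulted in reverse on inbound requests, so that developers can keep sending the IDs they already have.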

For companies and products that businesses rely on, maintaining backward-compatible versions is a difficult requirement. That’s especially true for apps that don’t experience a high rate of change. For a lot of enterprise software, there isn’t somebody dedicated to updating versions, and there’s no incentive for a company to invest in updating versions just because you've released a new one. Many internet-connected hardware products also use APIs, but hardware does not always have a mechanism to update its software. Plus, hardware can be around for a long time—think about how long you owned your last TV or router. For those reasons, it is sometimes imperative that you maintain backward compatibility with previous API versions.

That said, maintaining versions does have a cost. If you don’t have the capacity to support old versions for years, or if you anticipate very few changes to your API, by all means skip the versions and adopt an additive change strategy that also maintains backward compatibility in a single, stable version.

If you anticipate major breaking changes and updates at any time in your future, we strongly recommend setting up a versioning system. Even if it takes years to get to your first major version change, at least you’ve got the system ready to go. The overhead of creating a system of version management at the beginning is much lower than that of adding it in later, when it's urgently needed.

Storytime: Deprecating an API at Twitch

In 2018, online video streaming platform Twitch decided to deprecate an API and provide a new API. After it announced the old API's deprecation and end of life (shutdown), Twitch received a lot of feedback from developers who said that they needed more time to handle the breaking change or their integrations would be broken. Because of that feedback, Twitch decided to extend the end of life of the old API to give developers ample time to move their code to the new one.

Closing Thoughts

Meeting the needs of your users is at the core of solid API design. In this chapter, we covered a number of best practices to help you achieve a great developer experience.

As you build your API and developer ecosystem, you might discover more best practices specific to your company, your product, and your users.

Categories: Technology

Four short links: 3 October 2018

O'Reilly Radar - Wed, 2018/10/03 - 03:55

Positive Chatbot, Inside Serverless, TimBL's Next Project, and Voting Machines

  1. Ixy -- chat with a bot that helps you not descend into irate internet madness. Nifty idea! (via Evan Prodromou)
  2. Peeking Behind the Curtains of Serverless Platforms -- interesting implementation details. We characterize performance in terms of scalability, coldstart latency, and resource efficiency, with highlights including that AWS Lambda adopts a bin-packing-like strategy to maximize VM memory utilization, that severe contention between functions can arise in AWS and Azure, and that Google had bugs that allowed customers to use resources for free.
  3. Solid -- Tim Berners-Lee's new open source project (and startup), building apps from linked data.
  4. DEFCON Voting Machines Report -- tl;dr: online voting is a disaster-in-waiting, a calamity of vulnerabilities that shabby-suited shysters would be afraid to peddle but which our local and central governments have embraced. Those who are willing to trade the integrity of their democracy for the false promise of increased voter turnout deserve neither. It is noteworthy that this year the defenses of the virtual election office were fortified using Israeli military defense software, while attack tools were limited to what is available with Kali Linux.

Categories: Technology

115+ live online training courses opened for October, November, and December

O'Reilly Radar - Wed, 2018/10/03 - 03:00

Get hands-on training in machine learning, Python, Kubernetes, blockchain, security, and many other topics.

Learn new topics and refine your skills with more than 115 live online training courses we opened up for October, November, and December on our learning platform.

Artificial intelligence and machine learning

Beginning Data Analysis with Python and Jupyter, October 17-18

Managed Machine Learning Systems and Internet of Things, November 1-2

Essential Machine Learning and Exploratory Data Analysis with Python and Jupyter Notebook, November 5-6

Deep Learning Fundamentals, November 6

Deep Reinforcement Learning, November 14

Getting Started with Machine Learning, November 15

Hands-On with Google Cloud AutoML, November 16

Deploying Machine Learning Models to Production: A Toolkit for Real-World Success, December 3-4

Hands-on Machine Learning with Python: Clustering, Dimension Reduction, and Time Series Analysis, December 4

Blockchain

Blockchain Applications and Smart Contracts, November 15

Business

How to Give Great Presentations, October 22

Emotional Intelligence in the Workplace, November 6

60 Minutes with Barry O’Reilly: 10 Steps to Digital Transformation, November 8

Introduction to Time Management Skills, November 9

Introduction to Leadership Skills, November 12

Introduction to Project Management, November 12

Introduction to Critical Thinking, November 15

Your First 30 Days as a Manager, November 20

Managing Your Manager, November 28

Giving a Powerful Presentation, November 28

Mastering Usability Testing, December 3

Managing Team Conflict, December 4

Data science and data tools

Programming with Data: Python and Pandas, October 16

Advanced SQL Series: Proximal and Linear Interpolations, November 7

Apache Hadoop, Spark, and Big Data Foundations, November 7

Beginning Machine Learning with scikit-learn, November 7

SQL for Any IT Professional, November 8

Advanced SQL Series: Window Functions, November 13

Beginning R Programming, November 13-14

Programming with Data: Python and Pandas, November 14

Intermediate Machine Learning with scikit-learn, November 16

Hands-On Introduction to Apache Hadoop and Spark Programming, November 19-20

Python Data Handling: A Deeper Dive, November 20

Programming

Linux in 3 Hours, October 19

Scalable Concurrency with the Java Executor Framework, October 29

Scala Core Programming: Methods, Classes, and Traits, November 2

Getting Started with Python’s Pytest, November 5

Design Patterns Boot Camp, November 5-6

Beyond Python Scripts: Logging, Modules, and Dependency Management, November 7

Beyond Python Scripts: Exceptions, Error Handling, and Command-Line Interfaces, November 8

Java 11 for the Impatient, November 8

SOLID Principles of Object-Oriented and Agile Design, November 9

Clean Code, November 12

Linux Troubleshooting, November 12

An Introduction to Go for Systems Programmers and Web Developers, November 12-13

Python: The Next Level, November 13-14

Design Patterns in Java, November 13-14

Git Fundamentals, November 14-15

Scaling Python with Generators, November 15

Pythonic Design Patterns, November 16

Modern Application Development with C#, November 19-20

Learn Linux in 3 Hours, November 26

Reactive Spring Boot, November 26

Functional Programming in Java, November 26-27

OCA Java SE 8 Programmer Certification Crash Course Java Cert, November 26-28

Spring Boot and Kotlin, November 27

What's New In Java, November 29

Modern Java Exception Handling, November 30

Test-Driven Development in Python, December 4

Security

CompTIA Security+ SY0-501 Crash Course, October 17-18

CompTIA Security+ SY0-501 Certification Practice Questions and Exam Strategies, October 24

Cybersecurity Offensive and Defensive Techniques in 3 Hours, November 1

Intense Introduction to Hacking Web Applications, November 2

CCNA Cyber Ops SECFND 210-250 Crash Course, November 8

CCNA Cyber Ops SECOPS Crash Course, November 12

CCNA Routing and Switching Exam Prep, November 13

CompTIA Network+ Crash Course, November 13-15

Amazon Web Services (AWS) Security Crash Course, November 14

AWS Advanced Security with Config, GuardDuty, and Macie, November 14

Introduction to Digital Forensics and Incident Response (DFIR), November 16

Architecture for Continuous Delivery, November 19

Introduction to Ethical Hacking and Penetration Testing, November 19-20

Comparing Service-Based Architectures, November 20

CompTIA PenTest+ Crash Course, November 26-27

Security Operation Center (SOC) Best Practices, November 27

CISSP Crash Course, November 27-28

Systems engineering and operations

Managing Complexity in Network Engineering, October 25

Chaos Engineering: Planning and Running Your First Game Day, November 1

Ansible in 3 Hours, November 5

AWS Certified SysOps Administrator (Associate) Crash Course, November 5-6

Kubernetes in 3 Hours, November 6

AWS Certified Cloud Practitioner Crash Course, November 6-7

9 Steps to Awesome with Kubernetes, November 7

Implementing and Troubleshooting TCP/IP, November 7

Google Cloud Platform (GCP) for AWS Professionals, November 12

Learn Serverless Application Development with Webtask, November 13

Introduction to Google Cloud Platform, November 14-15

IP Subnetting from Beginning to Mastery, November 15-16

Getting Started with OpenStack, November 16

Getting Started with Continuous Integration (CI), November 19

Chaos Engineering: Planning, Designing, and Running Automated Chaos Experiments, November 26

Continuous Deployment to Kubernetes, November 26-27

Istio on Kubernetes: Enter the Service Mesh, November 27

Red Hat Certified System Administrator (RHCSA) Crash Course, November 27-30

Quality of Service (QoS) for Cisco Routers and Switches, November 28

Automating with Ansible, December 3

Amazon Web Services (AWS) Technical Essentials, December 4

Web programming

How the Internet Really Works, November 1

Bootstrap Responsive Design and Development, November 7-9

Building APIs with Django REST Framework, November 12

Using Redux to Manage State in Complex React Applications, November 13

Categories: Technology

Highlights from the O'Reilly Velocity Conference in New York 2018

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Watch highlights from expert talks covering DevOps, SRE, security, machine learning, and more.

People from across the systems engineering world came together in New York for the O'Reilly Velocity Conference. Below you'll find links to highlights from the event.

Continuous disintegration

Anil Dash asks: How could our processes and tools be designed to undo the biggest bugs and biases of today’s tech?

Securing the edge: Understanding and managing security events

Laurent Gil shares the latest cybersecurity research findings based on real-world security operations.

The programmer's mind

Jessica McKellar draws parallels between the free and open source software movement and the work to end mass incarceration.

O’Reilly Radar: Systems engineering tool trends

Roger Magoulas shares insights from O'Reilly's online learning platform that point toward shifts in the systems engineering ecosystem.

Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path

Kris Beevers examines the trade-offs between risk and velocity faced by any high-growth, critical path technology business.

ML on code: Machine learning will change programming

Francesc Campoy Flores explores ways machine learning can help developers be more efficient.

How do DevOps and SRE relate? Hint: They're best friends

Dave Rensin explains why DevOps and SRE make each other better.

Practical performance theory

Kavya Joshi says performance theory offers a rigorous and practical approach to performance tuning and capacity planning.

Chaos Day: When reliability reigns

Tammy Butow explains how companies can use Chaos Days to focus on controlled chaos engineering.

Critical path-driven development

Jaana Dogan explains why Google teaches its tracing tools to new employees and how it helps them learn about Google-scale systems end to end.

Why marketing matters

Michael Bernstein offers an unflinching look at some of the fallacies that developers believe about marketing.

Practical ethics

Laura Thomson shares Mozilla’s approach to data ethics, review, and stewardship.

Continue reading Highlights from the O'Reilly Velocity Conference in New York 2018.

Categories: Technology

Continuous disintegration

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Anil Dash asks: How could our processes and tools be designed to undo the biggest bugs and biases of today’s tech?

Continue reading Continuous disintegration.

Categories: Technology

Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Kris Beevers examines the trade-offs between risk and velocity faced by any high-growth, critical path technology business.

Continue reading Test, measure, iterate: Balancing “good enough” and “perfect” in the critical path.

Categories: Technology

The programmer's mind

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Jessica McKellar draws parallels between the free and open source software movement and the work to end mass incarceration.

Continue reading The programmer's mind.

Categories: Technology

ML on code: Machine learning will change programming

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Francesc Campoy Flores explores ways machine learning can help developers be more efficient.

Continue reading ML on code: Machine learning will change programming.

Categories: Technology

Securing the edge: Understanding and managing security events

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Laurent Gil shares the latest cybersecurity research findings based on real-world security operations.

Continue reading Securing the edge: Understanding and managing security events.

Categories: Technology

How do DevOps and SRE relate? Hint: They're best friends

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Dave Rensin explains why DevOps and SRE make each other better.

Continue reading How do DevOps and SRE relate? Hint: They're best friends.

Categories: Technology

Practical performance theory

O'Reilly Radar - Tue, 2018/10/02 - 13:00

Kavya Joshi says performance theory offers a rigorous and practical approach to performance tuning and capacity planning.

Continue reading Practical performance theory.

Categories: Technology

Four short links: 2 October 2018

O'Reilly Radar - Tue, 2018/10/02 - 03:25

Apple MDM, Source Explorer, Verification-Aware Programming, and Superstar Economics

  1. MicroMDM -- open source mobile device management system (IT department lingo for "rootkit") for Apple devices.
  2. Sourcegraph Open Sourced -- Code search and intelligence, self-hosted and scalable.
  3. Dafny -- a verification-aware programming language. Verification (proving software correct) is a critical research area for the future of software, imho.
  4. The Economics of Superstars -- The key difference between this technology and public goods is that property rights are legally assigned to the seller: there are no issues of free riding due to nonexclusion; customers are excluded if they are unwilling to pay the appropriate admission fee. The implied scale economy of joint consumption allows relatively few sellers to service the entire market. And fewer are needed to serve it the more capable they are. When the joint consumption technology and imperfect substitution features of preferences are combined, the possibility for talented persons to command both very large markets and very large incomes is apparent. (via Hacker News)

Continue reading Four short links: 2 October 2018.

Categories: Technology

Four short links: 1 October 2018

O'Reilly Radar - Mon, 2018/10/01 - 04:25

DARPA History, Probabilistic Programming, Superstar Macroeconomics, and Interactive Narrative

  1. 60 Years of Challenges and Breakthroughs (DARPA) -- a short interesting history video about the internet, TCP/IP, Licklider, and more.
  2. Introduction to Probabilistic Programming -- a first-year graduate-level introduction to probabilistic programming. It not only provides a thorough background for anyone wishing to use a probabilistic programming system, but also introduces the techniques needed to design and build these systems. It is aimed at people who have an undergraduate-level understanding of either or, ideally, both probabilistic machine learning and programming languages. Probabilistic methods are a way of automating inference, and of use as we try to make software smarter.
  3. The Macroeconomics of Superstars (PDF download) -- We describe superstars as arising from digital innovations, which replace a fraction of the tasks in production with information technology that requires a fixed cost but can be reproduced at zero marginal cost. This generates a form of increasing returns to scale. To the extent that the digital innovations are excludable, it also provides the innovator with market power. Our paper studies the implications of superstar technologies for factor shares, for inequality, and for the efficiency properties of the superstar economy. (via Hacker News)
  4. Inform: Past, Present, Future (Emily Short) -- Graham Nelson's talk about how Inform came to be what it is, and where it's going. Inform is the amazing compiler that lets you write Infocom adventures...but is so much more than that. Anyone interested in programming language design, literate programming, or AR/VR interactive fiction should read this.

Continue reading Four short links: 1 October 2018.

Categories: Technology

Y2K and other disappointing disasters

O'Reilly Radar - Fri, 2018/09/28 - 04:10

How risk reduction makes sure bad things happen as rarely as possible.

Continue reading Y2K and other disappointing disasters.

Categories: Technology

Four short links: 28 September 2018

O'Reilly Radar - Fri, 2018/09/28 - 04:00

Observing Kubernetes, Ada Lovelace, Screen Time, and 6502 C

  1. kubespy -- Tools for observing Kubernetes resources in real time.
  2. Ada Lovelace's Note G -- a very readable explanation of what she did and why it's notable and remarkable, complete with loops and versions of her program in C and Pascal. (via Chris Palmer)
  3. Limiting Children’s Screen Time to Less Than Two Hours a Day Linked to Better Cognition (Neuroscience News) -- a summary of a paper in Lancet, the leading British medical journal. Taken individually, limited screen time and improved sleep were associated with the strongest links to improved cognition, while physical activity may be more important for physical health. However, only one in 20 U.S. children aged between 8-11 years meet the three recommendations advised by the Canadian 24-hour Movement Guidelines to ensure good cognitive development—9-11 hours of sleep, less than two hours of recreational screen time, and at least an hour of physical activity every day.
  4. cc65 -- a complete cross development package for 65(C)02 systems, including a powerful macro assembler, a C compiler, linker, librarian, and several other tools. cc65 has C and runtime library support for many of the old 6502 machines. That's right, you can print "Hello, World" on your C64 (and Atari 2600 and Apple ][+ and NES and ...).

Continue reading Four short links: 28 September 2018.

Categories: Technology

Why it’s hard to design fair machine learning models

O'Reilly Radar - Thu, 2018/09/27 - 04:50

The O’Reilly Data Show Podcast: Sharad Goel and Sam Corbett-Davies on the limitations of popular mathematical formalizations of fairness.

In this episode of the Data Show, I spoke with Sharad Goel, assistant professor at Stanford, and his student Sam Corbett-Davies. They recently wrote a survey paper, “A Critical Review of Fair Machine Learning,” where they carefully examined the standard statistical tools used to check for fairness in machine learning models. It turns out that each of the standard approaches (anti-classification, classification parity, and calibration) has limitations, and their paper is a must-read tour through recent research in designing fair algorithms. We talked about their key findings, and, most importantly, I pressed them to list a few best practices that analysts and industrial data scientists might want to consider.

Continue reading Why it’s hard to design fair machine learning models.

Categories: Technology

Four short links: 27 September 2018

O'Reilly Radar - Thu, 2018/09/27 - 04:00

Calendar Fallacies, Data Lineage, Firefox Monitor, and Glitch Handbook

  1. Your Calendrical Fallacy is... -- odds are high that if a programmer is sobbing into their keyboard, it's because of these pesky realities.
  2. Smoke: Fine-Grained Lineage at Interactive Speed -- lineage queries over the workflow: backward queries return the subset of input records that contributed to a given subset of output records while forward queries return the subset of output records that depend on a given subset of input records. (via Morning Paper)
  3. Introducing Firefox Monitor -- proactive alerting when your email address turns up in HaveIBeenPwned's breach data.
  4. Glitch Employee Handbook -- fascinating to see how openly they operate. (via their very nicely done "come work for us" site)
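The backward and forward lineage queries that item 2 describes can be sketched with a toy single-operator pipeline (filter evens, then double), recording for each output record the input index that produced it. This is only an illustration of the query semantics; `run_pipeline` and `forward` are invented names, not Smoke's actual API or capture mechanism.

```c
#include <stdio.h>

#define MAX 16

/* Run the pipeline and capture lineage as a side effect:
   src[i] is the input index that produced output record i,
   so src itself answers backward queries directly. */
static int run_pipeline(const int *in, int n, int *out, int *src)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        if (in[i] % 2 == 0) {     /* filter: keep evens */
            out[m] = in[i] * 2;   /* map: double */
            src[m] = i;           /* record lineage inline */
            m++;
        }
    }
    return m;  /* number of output records */
}

/* Forward query: which output records depend on input index q? */
static int forward(const int *src, int m, int q, int *hits)
{
    int j, k = 0;
    for (j = 0; j < m; j++)
        if (src[j] == q)
            hits[k++] = j;
    return k;  /* number of dependent outputs */
}
```

For inputs `{1, 2, 3, 4}` the outputs are `{4, 8}`; the backward answer for output 1 is input 3, and the forward answer for input 3 is output 1.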

Continue reading Four short links: 27 September 2018.

Categories: Technology
