Feed aggregator

The enterprise data cloud

O'Reilly Radar - Wed, 2019/05/01 - 08:00

Mick Hollison describes why hybrid and multi-cloud is the future for organizations that want to capitalize on machine learning and AI.

Categories: Technology

Sustaining machine learning in the enterprise

O'Reilly Radar - Wed, 2019/05/01 - 08:00

Drawing insights from recent surveys, Ben Lorica analyzes important trends in machine learning.

Categories: Technology

Finding your North Star

O'Reilly Radar - Wed, 2019/05/01 - 08:00

Cait O’Riordan discusses the North Star metric the Financial Times uses across the organization to drive subscriber growth.

Categories: Technology

160+ live online training courses opened for May and June

O'Reilly Radar - Wed, 2019/05/01 - 06:25

Get hands-on training in machine learning, blockchain, cloud native, PySpark, Kubernetes, and many other topics.

Learn new topics and refine your skills with more than 160 new live online training courses we opened up for May and June on the O'Reilly online learning platform.

AI and machine learning

Real-Time Streaming Analytics and Algorithms for AI Applications, May 15

Inside Unsupervised Learning: Anomaly Detection using Dimensionality Reduction, June 4

Beginning Machine Learning with Scikit-learn, June 5

Getting started with Machine Learning, June 10

AI for Product Managers, June 11

Deep Learning with TensorFlow, June 12

Intermediate Machine Learning with Scikit-learn, June 12

Hands-on Adversarial Machine Learning, June 13

Inside Unsupervised Learning: Group Segmentation using Clustering, June 13

Reinforcement Learning: Building Recommender Systems, June 17

Inside Unsupervised Learning: Feature Extraction using Autoencoders and Semi-Supervised Learning, June 17

Fundamentals of Machine Learning with AWS, June 19

Building Machine Learning Models with AWS Sagemaker, June 20

A Practical Introduction to Machine Learning, June 25

Inside Unsupervised Learning: Generative Models and Recommender Systems, June 26

Artificial Intelligence: AI for Business, July 2

Blockchain

Spotlight on Cloud: The Hidden Costs of Kubernetes with Bridget Lane, June 6

Spotlight on Innovation: Blockchain as a Service with Ed Featherston, June 12

Spotlight on Data: Caching Big Data for Machine Learning at Uber with Zhenxiao Luo, June 17

Introducing Blockchain, June 21

Blockchain and Cryptocurrency Essentials, June 27

Introduction to Distributed Ledger Technology for Enterprise, July 10

Business

Spotlight on Cloud: Becoming Cloud Native with Jon Collins, May 2

Core Agile, May 9

Spotlight on Data: Data as an Asset with Friederike Schüür and Jen van der Meer, May 20

Spotlight on Learning from Failure: Solving Cryptocurrency’s Volatility Crisis with Wayne Chang, May 28

Introduction to Strategic Thinking Skills, June 4

Foundations of Microsoft Excel, June 5

Fundamentals of Learning: Learn faster and better using neuroscience, June 6

Succeeding with Project Management, June 10

Introduction to Employee Performance Management, June 10

Introduction to Leadership Skills, June 11

Building Your LinkedIn Network, June 11

60 minutes to Better User Stories and Backlog Management, June 13

Product Management in 90 Minutes, June 14

60 Minutes to Better Email, June 18

Developing Your Coaching Skills, June 18

Managing Team Conflict, June 18

Empathy at Work, June 18

Agile for Everybody, June 19

60 Minutes to Designing a Better PowerPoint Slide, June 24

Introduction to Critical Thinking, June 25

Fundamentals of Cognitive Biases, June 25

Applying Critical Thinking, June 26

Salary Negotiation Fundamentals, June 27

Managing Your Manager, June 27

Your first 30 days as a manager, July 1

Leading Innovative Teams, July 2

Python-Powered Excel, July 8

Unlock Your Potential, July 9

60 Minutes to Better Product Metrics, July 10

Business Fundamentals, July 10

Core Agile, July 10

Product Roadmaps From the Ground Up, July 11

Building Resiliency, July 11

Introduction to Time Management Skills, July 12

Data science and data tools

Practical Linux Command Line for Data Engineers and Analysts, May 20

First Steps in Data Analysis, May 20

Inferential Statistics Using R, May 24

Data Analysis Paradigms in the Tidyverse, May 30

Data Visualization with Matplotlib and Seaborn, June 4

Apache Hadoop, Spark and Big Data Foundations, June 5

Getting Started with PySpark, June 5

Introduction to DAX Using Power BI, June 7

Real-time Data Foundations: Kafka, June 11

SQL Fundamentals for Data, June 12-13

Real-time Data Foundations: Spark, June 13

Introduction to Statistics for Data Analysis with Python, June 17

IoT Fundamentals, June 17-18

Data Pipelining with Luigi and Spark, June 19

Visualization and Presentation of Data, June 20

Real-time Data Foundations: Flink, June 25

Fraud Analytics using Python, June 25

Managing Enterprise Data Strategies with Hadoop, Spark, and Kafka, June 25

Hands-On Algorithmic Trading with Python, July 2

Design and product management

From User Experience Designer to Digital Product Designer, June 5

Programming

Essentials of JVM Threading, May 13

Python Full Throttle with Paul Deitel, May 30

Java Full Throttle with Paul Deitel: A Code-Intensive One-Day Course, June 3

Programming with Data: Foundations of Python and Pandas, June 4

Introduction to TypeScript Programming, June 6

Concurrency in Python, June 7

Advanced SQL Series: Relational Division, June 10

Modern Java Exception Handling, June 10

Foundational Data Science with R, June 10-11

Getting Started with React.js, June 11

Advanced SQL Series: Proximal and Linear Interpolations, June 12

Building Web Apps with Vue.js, June 12

Advanced TypeScript Programming, June 12

Rethinking REST: A Hands-on Guide to GraphQL and Queryable APIs, June 13

Getting Started with Pandas, June 17

Python Data Handling: A Deeper Dive, June 17

Getting Started with Python 3, June 17-18

Hands-on Introduction to Apache Hadoop and Spark Programming, June 17-18

Basic Android Development, June 17-18

Mastering Pandas, June 18

Getting Started with Node.js, June 19

Data Structures in Java, June 19

Kotlin Fundamentals, June 20

Applied Cryptography with Python, June 20

Bash Shell Scripting in 4 Hours, June 20

Python Full Throttle with Paul Deitel, June 24

Functional Programming in Java, June 25-26

Introduction to the Bash Shell, June 26

Introduction to Python Programming, June 27

Beginner’s Guide to Writing AWS Lambda Functions in Python, June 28

Introduction to the Go Programming Language, July 1

Mastering Python’s Pytest, July 1

Design Patterns Boot Camp, July 1-2

Reactive Spring and Spring Boot, July 9

What's New In Java, July 10

Security

Linux, Python, and Bash Scripting for Cybersecurity Professionals, June 3

Cybersecurity Offensive and Defensive Techniques in 3 Hours, June 4

Certified Ethical Hacker (CEH) Crash Course, June 5-6

CISSP Crash Course, June 12-13

CISSP Certification Practice Questions and Exam Strategies, June 13

Expert Transport Layer Security (TLS), June 13

CCNA Security Crash Course, June 20-21

Introduction to Ethical Hacking and Penetration Testing, June 20-21

Intense Introduction to Hacking Web Applications, June 27

Systems engineering and operations

IP Subnetting from Beginning to Mastery, May 8-9

Systems Design for Site Reliability Engineers: How To Build A Reliable System in Three Hours, May 14

Practical Software Design from Problem to Solution, May 17

Cloud Computing Governance, May 29

Ansible in 4 Hours, June 3

AWS CloudFormation Deep Dive, June 3-4

Automating with Ansible, June 6

Network DevOps, June 6

Introduction to Istio, June 6

AWS Design Fundamentals, June 10-11

Deploying Container-Based Microservices on AWS, June 10-11

Practical Docker, June 11

Cloud Computing on the Edge, June 11

Designing Serverless Architecture with AWS Lambda, June 11-12

Red Hat Certified Engineer (RHCE) Crash Course, June 11-14

Exam AZ-103: Microsoft Azure Administrator Crash Course, June 12-13

Software Architecture Foundations: Characteristics and Tradeoffs, June 14

Microservices Caching Strategies, June 17

From developer to software architect, June 17-18

Software Architecture by Example, June 18

Modern streaming architectures, June 18-19

AWS core architecture concepts, June 18-19

Moving to the Cloud: What Your Company Needs to Know, June 19

Docker: Up and Running, June 19-20

Kubernetes Serverless with Knative, June 20

Getting started with continuous integration, June 20

AWS Monitoring Strategies, June 20-21

Mastering SELinux, June 21

Istio on Kubernetes: Enter the Service Mesh, June 21

Continuous Delivery with Jenkins and Docker, June 24

Kafka Fundamentals, June 24-25

Next Level Git - Master Your Workflow, June 25

Google Cloud Platform Professional Cloud Architect Certification Crash Course, June 25-26

Automating Architectural Governance Using Fitness Functions, June 26

Git Fundamentals, June 26-27

AWS Certified Developer Associate Crash Course, June 26-27

Architecture for Continuous Delivery, June 27

AWS Account Setup Best Practices, June 27

Comparing Service-Based Architectures, July 1

From Developer to Software Architect, July 1-2

Introduction to Docker Compose, July 9

Introduction to Knative, July 9

Building Data APIs with GraphQL, July 9

Microservice Fundamentals, July 10

Building a Deployment Pipeline with Jenkins 2, July 10-11

Microservice Collaboration, July 11

Categories: Technology

Four short links: 1 May 2019

O'Reilly Radar - Wed, 2019/05/01 - 04:30

Intermediate Vim, Newsletter Numbers, Automating Assessment, and Financial Modeling

  1. Intermediate Vim -- a few tips to level up your editing skills from beginner to intermediate.
  2. Newsletters Can Be Profitable (Buzzfeed) -- Substack’s 12 top-earning writers make an average of more than $160,000 each, the company told BuzzFeed News. And more than 40,000 people are paying for Substack newsletters today.
  3. Final Draft to Assess Against Bechdel Test (NYT) -- In an update announced Thursday, Final Draft—software that writers use to format scripts—said it will now include a proprietary “Inclusivity Analysis” feature, allowing filmmakers “to quickly assign and measure the ethnicity, gender, age, disability, or any other definable trait of the characters,” including race, the company said in a statement. It also will enable users to determine if a project passes the Bechdel Test, measuring whether two female characters speak to each other about anything other than a man. The faster people get feedback, the more effect it has. I expect this to have significant effect on scripts. (via Marginal Revolution)
  4. How Pharmaceutical Industry Financial Modelers Think About Your Rare Disease -- Today in "how to see with the eyes of a specialist." To me, the biggest lesson from playing with this model has been to observe just how profoundly the lag time and probability of success shape the financial picture of drug development.
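
The Bechdel check described in item 3 is easy to picture in code. Here is a naive sketch; the scene structure, gender tags, and keyword heuristic are illustrative assumptions, not Final Draft's actual feature:

```python
# Naive sketch of a Bechdel-test check over structured script data.
# The scene format, gender tags, and "about a man" keyword heuristic are
# illustrative assumptions, not Final Draft's implementation.

MALE_REFERENCES = {"he", "him", "his", "man", "men", "boyfriend", "husband"}

def passes_bechdel(scenes):
    """scenes: dicts with 'speakers' (set of (name, gender) pairs)
    and 'dialogue' (the text of the exchange)."""
    for scene in scenes:
        women = [name for name, gender in scene["speakers"] if gender == "F"]
        if len(women) < 2:
            continue  # need at least two named women talking
        words = set(scene["dialogue"].lower().split())
        if not words & MALE_REFERENCES:
            return True  # they discuss something other than a man
    return False

script = [
    {"speakers": {("Ann", "F"), ("Bob", "M")}, "dialogue": "Nice weather today"},
    {"speakers": {("Ann", "F"), ("Cleo", "F")}, "dialogue": "The experiment worked"},
]
print(passes_bechdel(script))  # True
```

A production feature would need real gender metadata and far better dialogue-topic analysis, but the pass/fail logic is this simple at its core.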

Categories: Technology

May 9th's meeting brings Intro to Crypto part 3

PLUG - Tue, 2019/04/30 - 09:43
This month Anthony Kosednar presents the third installment of his Intro to Cryptography series, "Intro to Cryptography - Quantum & Post-Quantum Crypto."

Description:
Cryptography is at the heart of modern day privacy and security. We use it every day from sending an email to making important financial transactions.

With the advent of Quantum computing and the abilities it has brought, our security landscape has changed. Previously secure methods are becoming obsolete. Come learn about Qubits, Shor's Algorithm, and ways to keep information secure in a post-quantum world.

Before attending, it is recommended you watch the two previous talks in this series to have a better baseline.

Part 1: Intro to Cryptography - Crypto Basics
Part 2: Intro to Cryptography - Modern Crypto

About Anthony:
Anthony Kosednar is a multi-disciplined technology leader with deep experience delivering cybersecurity and technology solutions. He works in the industry as a security engineer for enterprises. He holds the GIAC Exploit Researcher and Advanced Penetration Tester (GXPN) certification, as well as several certificates in cybersecurity for industrial control systems from DHS.

Looking Back on the O’Reilly Artificial Intelligence Conference

O'Reilly Radar - Tue, 2019/04/30 - 04:00

More than anything else, O'Reilly's AI Conference was about making the leap to AI 2.0.

At the start of O'Reilly's Artificial Intelligence Conference in New York this year, Intel's Gadi Singer made a point that resonated through the conference: "Machine learning and deep learning are being put to work now." They're no longer experimental; they're being put to use in key business applications. The only real question is what we're going to get out of it. Will we be able to put it to use effectively? Will we find appropriate uses for it?

Now that AI is moving out of the laboratory and into offices and homes, a number of questions are more important than ever. What kinds of tools will make it easier to build AI and ML systems? How will we make AI safe for humans? And what kinds of systems will augment human capabilities, rather than replace them? In short, as Aleksander Madry said in his talk, we are now at AI 1.0. How do we get to AI 2.0?

Madry emphasized the importance of making AI ready for us: secure, reliable, ethical, and understandable. It's easy to see the shortcomings of AI now. Madry showed how easy it was to turn a pig into a jet by adding noise, or to become a movie star by changing your glasses. Getting to the next step won't be easy: training models will probably become more difficult, and those models may be more complex. We might need even more training data than we need now; and currently, one of the biggest barriers to widespread use of AI is the lack of training data. But the work that it takes to get to AI 2.0 will benefit us. We'll never have AI systems that don't make mistakes; but mistakes will be fewer, and they'll be more like the mistakes that humans make, rather than mistakes that are nonsensical. No more flying pigs. And that commonality might make it easier for those systems to work alongside us.
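
The pig-to-jet failure Madry demonstrated is an adversarial example: a small, carefully chosen perturbation that flips a model's prediction. A minimal sketch of the idea on a toy linear classifier follows; the model and numbers are invented for illustration, while real attacks (such as the fast gradient sign method) perturb image pixels against deep networks:

```python
# Toy illustration of an adversarial perturbation on a linear classifier.
# All weights and inputs are invented; the point is that a small,
# gradient-aligned step per feature flips the prediction.

def predict(w, x):
    """Linear score: positive means class 'pig', negative means 'not pig'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    # For a linear model the gradient of the score w.r.t. x is just w,
    # so stepping each feature by eps against sign(w) lowers the score.
    direction = 1 if predict(w, x) > 0 else -1
    return [xi - direction * eps * (1 if wi > 0 else -1)
            for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.2]            # model weights
x = [1.0, 0.5, 1.0]             # clean input, scores ~0.9 -> 'pig'
x_adv = perturb(w, x, eps=0.8)  # small per-feature noise
print(round(predict(w, x), 2), round(predict(w, x_adv), 2))  # 0.9 -0.3
```

Against a deep network the gradient has to be computed by backpropagation rather than read off the weights, but the mechanics, and the fragility they expose, are the same.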

We saw many new tools for building AI systems: tools designed to make building these systems easier, allowing subject experts to play a bigger role. Danielle Dean of Microsoft showed how they built a recommendation system for machine learning pipelines; it sampled the space of possible pipelines and made recommendations about which to try. This approach drastically reduced the "trial and error" loop that characterizes a lot of AI development.

Stanford's Chris Ré demonstrated Snorkel, an open source tool for automating the process of tagging training data. An AI system has three components: a model, training data, and hardware. Advanced hardware for building AI systems is getting faster and cheaper; it's becoming a commodity. So are models: systems like Dean's, or like Intel's Nauta, simplify and democratize the task of building models. Training data is the one component that stubbornly resists commoditization. Acquiring and labeling data is labor intensive. Researchers have used low-cost labor from Mechanical Turk (or grad students) to label data, or gathered pre-labeled data from online sites like Flickr. But those approaches won't work in industry. Can we use AI to eliminate most of the work of tagging and turn it into a relatively simple programming problem? It looks like we can; if Ré is right, Snorkel and tools like it are a big step toward AI 2.0.
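
Ré's idea of turning labeling into a programming problem can be sketched with heuristic "labeling functions" that vote on each example. The functions and labels below are invented for illustration and this is not Snorkel's actual API, just the weak-supervision pattern it embodies:

```python
# Sketch of programmatic labeling in the spirit of Snorkel: heuristic
# "labeling functions" vote on each example, and the combined vote becomes
# a (noisy) training label. Functions and labels are invented for
# illustration; this is not Snorkel's actual API.
from collections import Counter

ABSTAIN = None
SPAM, HAM = 1, 0

def lf_has_link(text):
    return SPAM if "http://" in text else ABSTAIN

def lf_all_caps(text):
    return SPAM if text.isupper() else ABSTAIN

def lf_greeting(text):
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LABELING_FUNCTIONS = [lf_has_link, lf_all_caps, lf_greeting]

def weak_label(text):
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("CLICK NOW http://spam.example"))  # 1 (labeled spam)
print(weak_label("hello, lunch tomorrow?"))         # 0 (labeled ham)
```

Snorkel itself goes a step further, learning a weight for each labeling function from their agreements and disagreements instead of taking a raw majority vote.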

We saw many glimpses of the future. Olga Troyanskaya showed how deep learning is helping to decode the most difficult parts of the human genome: the logic that controls gene expression and, hence, cell differentiation. There are many diseases we know are genetic; we just don't know what parts of the genome are responsible. We will only be able to diagnose and treat those diseases when we understand how the language of DNA works.

CMU's Martial Hebert's lab is taking AI into the human world by building systems that can reason about intent. If we want robots that can assist people, they need to be able to understand and predict human behavior in real time. He demonstrated how an AI system can help a paralyzed person perform tasks that would otherwise be impossible—but only by reasoning about intent. Without this understanding, without knowing the goal was to pick up something or to open a door, the system was only able to twitch uselessly. All of this reasoning has to happen in hard real time: an autonomous vehicle needs to be able to predict whether a person will stand on the curb or run into the street, and it needs to do so with enough time to apply the brakes if needed.

Any conference on AI needs to recognize the extraordinary messes and problems that automation can create. Sean Gourley of Primer talked about the arms race in disinformation. In the past year, we've gained the ability to create realistic images of fake people, and we've made tremendous progress in generating realistic language. We won't be able to handle these growing threats without the assistance of AI. Andrew Zaldivar talked about work at Google Jigsaw that tries to detect online abuse and harassment. Kurt Muehmel from Dataiku talked about progress toward ethical artificial intelligence, a goal we will only reach if we build teams that are radically inclusive. The saying goes, "given enough eyeballs, all bugs are shallow"; but that's only true if those are all different eyes, looking at problems in different ways. The solution isn't to build better technology; rather, it's making sure the people most likely to be impacted by the technology are included at all steps of the process.

The conference sessions covered everything from advanced AI techniques in reinforcement learning and natural language processing to business applications, to deploying AI applications at scale. AI is quickly moving beyond the hype: it's becoming part of the everyday working world. But as Aleksander Madry said, we need to make AI human-ready. We need to get to AI 2.0. More than anything else, O'Reilly's AI Conference was about making that leap.

Categories: Technology

Four short links: 30 April 2019

O'Reilly Radar - Tue, 2019/04/30 - 03:25

AI Engines, Self-Grading Labs, Software Project Heroes, and Embedded Rust

  1. Artificial Intelligence Engines -- In this richly illustrated book, key neural network learning algorithms are explained informally first, followed by detailed mathematical analyses. (via Tom Stafford)
  2. CMU Self-Grading Labs -- an excellent next step in CS education. Feedback is more effective the closer to the moment of error it is. (via Hacker News)
  3. Why Software Projects Need Heroes: Lessons Learned from 1,100+ Projects -- A "hero" project is one where 80% or more of the contributions are made by 20% of the developers. In the literature, such projects are deprecated since they might cause bottlenecks in development and communication. However, there is little empirical evidence on this matter. Further, recent studies show that such hero projects are very prevalent. Accordingly, this paper explores the effect of having heroes in projects, from a code quality perspective. We identify the hero developer communities in 1,100+ open source GitHub projects. Based on the analysis, we find that (a) hero projects are majorly all projects; and (b) the commits from "hero developers" (who contribute most to the code) result in far fewer bugs than other developers. That is, contrary to the literature, heroes are standard and a very useful part of modern open source projects. Extrapolation to your own software team is done at your own risk. As someone on Hacker News said, "Of course, nearly every GitHub project is going to have heroes—we call them 'maintainers.'"
  4. The Embedded Rust Book -- An introductory book about using the Rust programming language on "bare metal" embedded systems, such as Microcontrollers.
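
The hero-project criterion from item 3 is straightforward to compute from a repository's commit counts. A sketch follows; the function name, thresholds, and example repo are my own, not the study's code:

```python
# Sketch of the paper's "hero project" criterion: 20% of developers account
# for 80% or more of the contributions. Names, thresholds, and the example
# repo are illustrative, not the study's code.

def is_hero_project(commits_by_dev, dev_share=0.2, commit_share=0.8):
    counts = sorted(commits_by_dev.values(), reverse=True)
    top_n = max(1, round(len(counts) * dev_share))  # the top 20% of devs
    return sum(counts[:top_n]) / sum(counts) >= commit_share

repo = {"alice": 180, "bob": 8, "carol": 6, "dan": 4, "eve": 2}
print(is_hero_project(repo))  # True: one of five devs made 90% of commits
```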

Categories: Technology

How companies adopt and apply cloud native infrastructure

O'Reilly Radar - Tue, 2019/04/30 - 03:00

Survey results reveal the path organizations face as they integrate cloud native infrastructure and harness the full power of the cloud.


By Roger Magoulas and Nikki McDonald


Driven by the need for agility, scaling, and resiliency, organizations have spent more than a decade moving from “trying out the cloud” to a deeper, more sustained commitment to the cloud, including adopting cloud native infrastructure. This shift is an important part of a trend we call the Next Architecture, with organizations embracing the combination of cloud, containers, orchestration, and microservices to meet customer expectations for availability, features, and performance.

To learn more about the motivations and challenges companies face adopting cloud native infrastructure, we conducted a survey of 590 practitioners, managers, and CxOs from across the globe.[1]

Key findings from the survey include:

  • Nearly 50% of respondents cited lack of skills as the top challenge their organizations face in adopting cloud native infrastructure. Given the industry is both new and rapidly evolving, engineers struggle to keep up-to-date on new tools and technologies.
  • 40% of respondents use a hybrid cloud architecture. The hybrid approach can accommodate data that can’t be on a public cloud, and can serve as an interim architecture for organizations migrating to a cloud native architecture.
  • 48% of respondents rely on a multi-cloud strategy that involves two or more vendors, helping organizations avoid lock-in to any one cloud provider and providing access to proprietary features that each of the major cloud vendors provide.
  • 47% of respondents working in organizations that have adopted cloud native said DevOps teams are responsible for their organizations’ cloud native infrastructures, signaling a tight bond between DevOps and cloud native concepts.
  • Among respondents whose organizations have adopted cloud native infrastructure, 88% use containers and 69% use orchestration tools like Kubernetes. These signals align with the Next Architecture’s hypothesis that cloud native infrastructure best meets the demands put on an organization’s digital properties.

In our analysis, we assigned experience levels to our respondents for some of the survey questions. New respondents work at organizations that have been cloud native for less than one year; early respondents’ organizations have been cloud native for one to three years; and sophisticated respondents work at organizations that have been cloud native for more than three years.
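
Those cohort definitions are simple to encode; here is a small helper, where the function name is an assumption and the one-to-three-year boundary is treated as inclusive:

```python
# The survey's experience buckets as a small helper. The function name is
# an assumption, and the one-to-three-year boundary is treated as inclusive.

def experience_level(years_cloud_native):
    if years_cloud_native < 1:
        return "new"
    if years_cloud_native <= 3:
        return "early"
    return "sophisticated"

print([experience_level(y) for y in (0.5, 2, 5)])
# ['new', 'early', 'sophisticated']
```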

What respondents do and where they work

Figure 1. Roles of survey respondents.

The results in Figure 1 aren’t surprising, given that more developers are making technology decisions. If you combine software practitioners (engineers, developers, and admins) with leads/architects, that’s nearly 70% of respondents with technical roles. We see cloud native creating an increasing need for technical and architectural leadership, and more overlap between lead roles and engineering work. We expect those in technical roles to exert more influence over tool selection and development in the months and years ahead.

Figure 2. Time in current role of survey respondents.

Around half of respondents have been in their current position for three years or less (Figure 2). This points to the shifting nature of jobs as the industry responds to new technologies and workflows.

Cloud native has played a role in this shift. Important cloud native tools like Docker (released in 2013) and Kubernetes (released in 2015) are both relatively new. Containers, like Docker, and orchestration tools, like Kubernetes, are essential for creating the runway organizations need to transform their digital properties from monolith to microservice architectures—a task many companies are still in the early days of performing. With 72% of respondents having adopted cloud native in the last three years (see Figure 9, below), we see the tools and infrastructure changing quickly, and people learning new skills on the job.

Figure 3. Industry of survey respondents, broken down by cloud native experience level.

Survey respondents whose organizations adopted cloud native three-plus years ago—referred to in this analysis as "sophisticated"—typically work at software companies (Figure 3). Respondents in the finance and banking industry show the opposite pattern, with a larger share of those new to cloud native and a smaller share of sophisticated respondents. A legacy of outdated back office applications and regulatory and security concerns makes the migration to cloud native for those in finance that much more difficult. However, competitive pressure from fintech startups and increased competition from payment providers and pay services like Apple, Google, and Amazon create the imperative to transition to a cloud native infrastructure.

Media and entertainment companies show a notable share of experienced cloud native respondents, with nearly all falling into the early adopter or sophisticated categories. This isn’t too surprising, given the essential role microservices play in that industry. Netflix is a good media company case study, with its early adoption of microservices, its contributions to chaos engineering, and its open source, multi-cloud continuous delivery platform Spinnaker (which Netflix and Google recently donated to the newly launched Continuous Delivery Foundation (CDF)). Cloud native adoption will likely increase in the media and entertainment space, as the distributed systems that make cord-cutting possible require cloud native infrastructures.

Which cloud providers and cloud types are most popular

Figure 4. Types of cloud infrastructure used by survey respondents.

Figure 5. Types of cloud infrastructure used by survey respondents, broken down by cloud native experience level.

When asking about the types of cloud platforms respondents use (Figure 4), we define public cloud as services offered solely through a third-party provider; hybrid cloud as using a mix of private and third-party cloud services; and private cloud as a secure, on-premises cloud infrastructure with restricted access.

More than 40% of respondents indicate their organizations use a hybrid cloud infrastructure (Figure 4). We expect hybrid to remain a popular option for accommodating data that can’t be stored on a public cloud and for serving as an interim architecture for organizations progressively transitioning their services and applications to a cloud native architecture.

A close read of the responses in Figure 5 shows the more sophisticated the respondent’s cloud implementation, the more likely they are to use public cloud providers. Those new to the cloud are more likely to use hybrid cloud options. A smaller share of respondents report using only private cloud. This could be reflective of industries that can’t put their data on a public cloud due to laws and regulations (finance and government, for example) and the complexity of migrating legacy systems.

Figure 6. Public cloud providers used by survey respondents, broken down by cloud native experience level.

Figure 7. Cloud provider share among survey respondents.

Amazon AWS leads across experience levels (Figure 6), but we also see some interesting cloud provider combinations among respondents (Figure 7):

  • AWS and Microsoft Azure (18%)
  • AWS, Azure, and Google Cloud (15%)
  • AWS and Google Cloud (14%)

With 48% of respondents having adopted a multi-cloud architecture (Figure 7), we see a competitive marketplace, with organizations sampling from a mix of vendors. We suspect organizations are “kicking the tires” on cloud providers, looking to test proprietary features or trying to lower costs by using more than one vendor. And, the strategy helps keep vendors honest, as containers allow an easy switch between platforms. Most organizations also want to maximize performance and resiliency, and a multi-cloud strategy allows for scaling and redundancy to help mitigate the risk of costly downtime and potential data loss in the case of an incident.

How organizations view their adoption of cloud native infrastructure

Figure 8. Cloud native infrastructure adoption among survey respondents.

While 68% of respondents said their organizations have adopted, or at least have begun to adopt, cloud native infrastructure, more than 30% of self-selecting respondents to a cloud native survey let us know they haven’t adopted cloud native (Figure 8). The results show both the great interest in cloud native, and the great potential—one of the reasons we expect cloud native to be of interest for many years to come. The cloud native ecosystem is not yet settled, and computing is not quite a utility, so we anticipate much change as trends around cloud native coalesce.

The charts that follow show results from the 68% of survey respondents whose organizations have adopted cloud native infrastructure (Figure 8).

Figure 9. Cloud native adoption history among survey respondents.

The results in Figure 9 reflect the newness of the cloud native space, with 72% of respondents adopting cloud native in the last three years.

New attention to cloud native mirrors what we’ve seen on the O’Reilly online learning platform. Cloud native topics are among the fastest growing areas. In search and usage on the O’Reilly platform, Kubernetes is the fastest growing large topic, with the major cloud vendors and other cloud native tools all growing strongly as well. While the cloud has been around for more than 10 years, the acceleration in interest we see on the platform shows the cloud is an ascendant topic with strong legs and a good foundation for sustained growth. In particular, patterns around cloud native components helped drive support for the Next Architecture, as we note in this report.

Figure 10. Rating of success of cloud native adoption among survey respondents, broken down by cloud native experience level.

The levels of success noted by sophisticated adopters reflect how experience with cloud native infrastructure pays off (Figure 10). Sophisticated respondents had by far the largest share of extreme success with their cloud native adoption. More than 90% of sophisticated respondents rated their cloud native implementations as mostly successful or better, and no sophisticated respondents felt their implementations were unsuccessful.

Figure 11. Cloud native challenges faced by survey respondents.

Figure 12. Cloud native challenges faced by survey respondents, broken down by cloud native experience level.

Adopting a cloud native infrastructure is both complex and difficult, which is made clear from the survey results showing at least 40% of respondents citing challenges with finding skilled engineers, migrating from legacy architecture, responding to security and compliance demands, managing technical infrastructure, and transforming their corporate culture (Figure 11).

The fact that lack of skills outranks hiring as a challenge suggests respondents are trying to adopt cloud native on their own—a sign that organizations are fundamentally structuring themselves around cloud native architectures rather than looking to hire that skill from outside. We see in Figure 12 that early respondents struggle significantly more with lack of skills and company culture than the other categories do, suggesting these are the issues organizations should consider tackling first when adopting cloud native.

In that same vein, as organizations restructure internal processes and pipelines to adopt a cloud native architecture, they struggle to put a corporate culture in place that supports this new way of working. Cloud native adoption isn’t just about using the right tools or having the right technical infrastructure (though those are important). Adopting a DevOps workflow that breaks down the barriers between teams and embraces a culture of collaboration is essential to implementing a successful cloud native architecture that relies on continuous integration, delivery, and improvement.

And finally, security and compliance hurdles are not going away, even if they are less prevalent than a few years ago, when health data and financial regulations limited what many organizations could even consider placing in the cloud. While some of those hurdles have been resolved, our respondents tell us that security and compliance continue to require attention when considering cloud native implementations.

The teams and tools that manage cloud native infrastructure

Figure 13. Teams that manage cloud native infrastructure among survey respondents.

Figure 14. Teams that manage cloud native infrastructure among survey respondents, broken down by cloud native experience level.

DevOps was by far the top choice for who manages the respondents’ cloud native infrastructures (Figure 13). This is evidence that adopting a DevOps culture is critical to meeting the market demands—agility, speed to market, scaling, and reliability—that cloud native also addresses.

It’s telling, though, that when looking at the responses in the context of maturity, the more sophisticated cohort said they depend on site reliability engineering (SRE) to manage their cloud native infrastructure (Figure 14). SRE is a practice started at Google where engineers take on both development and operations responsibilities to release software faster and more reliably. SREs split their time between developing and maintaining infrastructure, automating repetitive tasks (toil) to free up time for building new features. While DevOps is a set of principles that loosely define how teams should work together to remove silos and collaborate effectively, SRE serves as an implementation of DevOps, and it is a job role that will see increased demand as organizations continue their transitions to cloud native.

Figure 15. Cloud tools used by survey respondents.

Figure 16. Cloud tools used by survey respondents, broken down by cloud native experience level.

It’s no surprise that containers rank as the most popular tool (Figure 15), as containers serve a key cloud native infrastructure role, improving software developer productivity, deployment speed and flexibility, and platform independence, and enabling effective scaling. In addition, the high number of responses for orchestration and management underscores the strength of Kubernetes in this field. Orchestration and containers, combined with the cloud and microservices, form the technical foundation of the Next Architecture.

The percentages around service mesh and serverless were lower, likely because these technologies are relatively immature (Figure 15). However, it’s worth briefly outlining how they work and why they’re poised for adoption because we expect both tools to play large future roles in the cloud native space.

A service mesh is a configurable, low-latency infrastructure layer designed to ease the complexity of networking microservices and managing communication between them in large, distributed networks. Istio, arguably the most popular service mesh, was developed by Google, Lyft, and IBM as an open source solution, and only reached version 1.0 in July 2018. We’ll see more organizations turning to service mesh infrastructure as the tools mature and expanding organizations seek solutions for scaling their systems and managing the increasing complexity that comes with a global, distributed network.

While service mesh provides networking infrastructure, serverless provides another layer of abstraction for cloud developers. Despite the name, serverless does not mean there are no servers. Serverless architecture simply puts the onus of managing backend infrastructure on the cloud provider so developers can focus on building applications rather than the software that powers them. Serverless architecture is generally stateless, provisions resources on demand, and only incurs cost for the resources actually used, potentially enabling organizations to scale rapidly while saving money. However, the technology still has significant drawbacks, as outlined in a recent study by the University of California at Berkeley, including inadequate storage, and performance and security concerns. But you can expect to see these issues resolved within the next decade. As serverless matures, Berkeley researchers predict it will “become the default computing paradigm for the Cloud Era, largely replacing serverful computing and thereby bringing an end to the Cloud-Server Era.”
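
As a rough illustration of the programming model, a serverless function is usually just a stateless handler that the provider invokes once per event. The dict-shaped event and response below are generic assumptions of ours; each platform (AWS Lambda, Google Cloud Functions, Azure Functions) defines its own event and context types:

```python
import json

def handler(event: dict, context: object = None) -> dict:
    """A stateless request handler: no server to manage, no state kept
    between invocations; the provider scales instances up and down."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because billing follows invocations rather than idle capacity, this is where the provision-on-demand, pay-for-what-you-use cost model described above comes from.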

The survey results also reveal a story around cloud native experience and tool adoption (Figure 16). Sophisticated cloud native organizations are in a position to iteratively improve and move into tools like service mesh and serverless. Organizations new to cloud native may need to wait and learn before they can fully harness some of these tools.

Why companies haven't adopted cloud native infrastructure

The charts in this section show results from the 32% of survey respondents whose organizations have not adopted cloud native infrastructure (see Figure 8, above).

Figure 17. Reasons why survey respondents have not adopted cloud native.

The interesting story here is that the same challenges were identified whether respondents’ organizations have or have not moved to cloud native (compare Figure 11 and Figure 17). Lack of skills, company culture, microservices migration, and security and compliance were shared challenges among adopters and non-adopters alike.

Figure 18. When survey respondents whose organizations have not adopted cloud native infrastructure expect to implement cloud native.

In Figure 18, it’s surprising to see 27% of respondents work at organizations with no plans for cloud native adoption. While we can’t determine exactly why they have no plans, it seems prudent to investigate cloud native even if the adoption timeline for your organization will be long. This result may also reflect the newness of the space, as some organizations might not yet realize how important cloud native is and will be.

Recommendations and concluding thoughts

When evaluating responses from the sophisticated cohort, a few lessons emerge for organizations considering cloud native and those that are in the early stages of cloud native implementations:

  • Cloud native success comes incrementally. Don’t try to overhaul your entire architecture at once. Start small and focus on high-impact services that show clear value to build internal support for investing in the ongoing transition.
  • It’s best to manage expectations. Focus on learning from, and building on, your early cloud native efforts.
  • Take advantage of training opportunities, including social learning via conferences where you can gain access to best practices from those with the most cloud native experience.

More generally, as companies rise to meet the increasingly real-time demands of users and customers, and the need to respond to nimble competitors, we expect organizations to find the cloud native approach a necessity. Cloud native makes it possible for companies to deploy features faster, more affordably, more reliably, and with less risk. As the cloud native market develops, we see many opportunities for tools and training to help ease the transition to new architectures and to bridge the cloud native skills gap.

[1] The cloud native infrastructure adoption survey ran from February 27-March 27, 2019.

Continue reading How companies adopt and apply cloud native infrastructure.

Categories: Technology

Four short links: 29 April 2019

O'Reilly Radar - Mon, 2019/04/29 - 03:50

Technology Radar, Influencers Dropping, Stuff That Matters, and Reverse Engineering

  1. Thoughtworks Technology Radar (PDF) -- an interesting rating system for a bunch of different tools, techniques, platforms, and languages/frameworks.
  2. Influencers are Abandoning the Instagram Look (The Atlantic) -- According to Fohr, 60% of influencers in his network with more than 100,000 followers are actually losing followers month over month. “It’s pretty staggering,” he says. “If you’re an influencer [in 2019] who is still standing in front of Instagram walls, it’s hard.”
  3. Lunch with Alan Kay -- What it comes down to is: are you trying to do science? Are you trying to invent a good future for humanity? Alan’s definition of science is still too large to fit into my head, but I can see his reverence for it and the pioneering scientists of our past. [...] For me this lunch felt like a reckoning. It was as if (to be clear: this didn’t really happen), Alan clapped his hands loudly in my face, shouting “Wake up! Wake up!”, and then turned me away from the flame everyone else was transfixed by and onto a helicopter ride to give me a glimpse of all the other perspectives that I should consider.
  4. Jane Manchun Wong -- writeups of details of app features, using knowledge gained by reverse engineering those apps. Fascinating!

Continue reading Four short links: 29 April 2019.

Categories: Technology

Four short links: 26 April 2019

O'Reilly Radar - Fri, 2019/04/26 - 03:50

Simplify Gmail, Watch Web Pages, Easy Debugging, and Sleep Deprivation

  1. Simplify -- A Chrome extension that brings the simplicity of Google Inbox to Gmail. (via FastCompany)
  2. WatchMe -- WatchMe can watch for changes to an entire page, or a specific section of it. It's appropriate for research use cases where you want to track changes in one or more pages over time. WatchMe also comes with psutils [Python system and process utilities] (system tasks) built in to allow for monitoring of system resources. Importantly, it is a tool that implements reproducible monitoring, as all your watches are stored in a configuration file that can easily be shared with others to reproduce your watching protocol.
  3. PySnooper -- instead of carefully crafting the right print lines, you just add one decorator line to the function you're interested in. You'll get a play-by-play log of your function, including which lines ran and when, and exactly when local variables were changed.
  4. Need for Sleep -- we found that a single night of sleep deprivation leads to a reduction of 50% in the quality of the implementations. There is notable evidence that the developers’ engagement and their prowess to apply TFD [test-first development] are negatively impacted. Our results also show that sleep-deprived developers make more fixes to syntactic mistakes in the source code.

Continue reading Four short links: 26 April 2019.

Categories: Technology

Why companies are in need of data lineage solutions

O'Reilly Radar - Thu, 2019/04/25 - 04:15

The O’Reilly Data Show Podcast: Neelesh Salian on data lineage, data governance, and evolving data platforms.

In this episode of the Data Show, I spoke with Neelesh Salian, software engineer at Stitch Fix, a company that combines machine learning and human expertise to personalize shopping. As companies integrate machine learning into their products and systems, there are important foundational technologies that come into play. This shouldn’t come as a shock, as current machine learning and AI technologies require large amounts of data—specifically, labeled data for training models. There are also many other considerations—including security, privacy, reliability/safety—that are encouraging companies to invest in a suite of data technologies. In conversations with data engineers, data scientists, and AI researchers, the need for solutions that can help track data lineage and provenance keeps popping up.

There are several San Francisco Bay Area companies that have embarked on building data lineage systems—including Salian and his colleagues at Stitch Fix. I wanted to find out how they arrived at the decision to build such a system and what capabilities they are building into it.

Continue reading Why companies are in need of data lineage solutions.

Categories: Technology

Stablecoins: Solving the cryptocurrency volatility crisis

O'Reilly Radar - Thu, 2019/04/25 - 04:00

Resolving the volatility problem will unlock the groundwork needed for blockchain-based global payment systems.

For cryptocurrency enthusiasts, the long game of blockchain ecosystems is to create open platforms controlled by no single authority, inviting open participation from anyone. This, of course, is an effort to move the culture of open source forward: from static code branches sitting in source trees to living, evolving, useful systems ready for live interaction, still egalitarian and open access in nature.

At the heart of this vision lie practically immutable accounting systems that store the widely accepted state of the world to ensure global integrity. Large visions are rarely achieved in one fell swoop and are instead typically realized through the emergence of solutions for related and meaningful short-term problems. The volatility problem is one such important candidate.

Due to the nascency of cryptocurrency markets, volatility has remained a staple for both outside observers and users. Although traders may fare well under these conditions, those wishing to actually use the assets and unlock the true value of these accounting systems may have a different experience altogether. An asset bought as a medium of exchange (MoE) may lose its value before any value from spending that asset has truly been captured.

The volatility problem is important to solve because its resolution in the short-term unlocks the financial and technical engineering groundwork necessary for cheap, swift, secure, and disintermediated global payment systems based on blockchains. Users of these systems will be able to transact seamlessly with peers at a far cheaper rate and with a dramatic increase of security due to the nature of these cryptographic systems. In the long term, solving this problem will vastly reduce the barrier to entry for use of smart contract-based services on public, consortium, and private blockchains—truly unlocking the automation value these technologies have to offer.

A cryptocurrency-based contender that has appeared in order to solve this crisis is called a stablecoin. A stablecoin is a cryptocurrency that is pegged by various means to a traditional fiat currency to maintain its price or is backed by precious metals and other commodities. To date, there have been stablecoins backed by fiat currencies in reserve, gold, and even cryptocurrency itself. Although some have failed and lost their desired peg, recent iterations have proven quite successful so far in their stability.

Value opportunity: Digital money

The digital money industry is booming, driven in recent years especially by the Asia-Pacific region. In 2017, the revenue opportunity exceeded $1.9 trillion, with double-digit percentage growth per year. While cryptocurrencies have many desirable properties for digital payments, such as low transaction fees, asset transfer simplicity, and auditability, the price volatility of mainstream cryptocurrencies is currently untenable for production use cases. Wild price swings are not good for buying groceries and paying rent. Simple tasks become unreasonably complex, as accounting loses its stability at every scale.

For example, cross-border payments are easy to facilitate with cryptocurrencies but their value fluctuation makes the process unreasonably complicated. If a farmer in an emerging economy wishes to send their family cryptocurrency, they must account for price fluctuations that may devalue the asset at a double-digit percentage before it can be liquidated for a local currency. Stablecoins have the opportunity to reap the benefits of cryptocurrencies while mitigating price volatility to acceptable levels.
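
To make that risk concrete, here is a trivial sketch (the function name and figures are ours, purely illustrative) of how a market swing erodes a transfer before it can be cashed out:

```python
def value_after_swing(amount_usd: float, pct_change: float) -> float:
    """Remittance value after a market move of pct_change
    (e.g., -0.15 for a 15% drop) before liquidation to local currency."""
    return amount_usd * (1.0 + pct_change)
```

A $200 transfer caught in a 15% downswing is worth only $170 on arrival; a dollar-pegged stablecoin would keep it at $200, minus fees.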

Value opportunity: Open platforms

Smart contracting platforms have the potential to solve problems “once and for all,” in the same vein as cloud service providers touting infrastructural block storage (Amazon S3, Google Cloud Storage, Azure Blob) and computing power (Amazon EC2, Google Compute Engine, Azure Compute). Due to their open nature, smart contracts may easily extend to the application layer and offer turnkey solutions to digital money, trade financing, supply chain management, and sharing economies.

Smart contracts can run at cost without intermediaries carving out high margins, a paradigm similar to well-run public utilities. However, the dominant way to pay for smart contract-based goods and services has been with volatile cryptocurrencies, adding currency risk to businesses that operate on more stable national currencies such as dollars, yuan, or euros. Alternatively, businesses may choose to purchase exact amounts of volatile cryptocurrencies as necessary, minimizing currency exposure but incurring new transaction fees with each subsequent transaction.

With the rise of stablecoins, businesses incur smaller currency risks and transaction fees. Due to a guarantee on the price of the asset being processed, there is minimal volatility risk and users and businesses alike can benefit from this stability. This can, in turn, drive adoption for smart contract systems due to reduced risk and improved ease of use. Businesses feel safer due to the benefits of automation and lower transaction costs, and users feel safer in knowing their personal assets won’t be devalued.

Stablecoin history

Stablecoins aren’t a new revelation in the cryptocurrency space; attempts at implementing them have been in motion since 2014. The first two attempts at creating stablecoins were BitUSD from BitShares and NuBits, both of which were crypto collateralized iterations. Although both failed due to collateralization instability, they paved the way for new iterations that learned from their mistakes and strengthened the case for a price-stable asset represented as a cryptocurrency.

The first implementation of a reserve backed stablecoin came in late 2014 from Tether, which was initially built on Bitcoin through the Omni layer. Although Tether offered the creation and redemption process typically associated with reserve backed stablecoins, its lack of transparency paved the way for transparent reserve backed stablecoins such as CENTRE’s USDC and Gemini’s GUSD.

There are, in fact, three types of stablecoin implementations: reserve backed, crypto collateralized, and algorithmic, each with its own associated risks and tradeoffs. While reserve backed implementations provide transparency and absolute stability from natural arbitrage opportunities, there is added counterparty risk from the centralized controller. Crypto collateralized implementations forego counterparty risk at the expense of the volatility from their underlying collateral. Algorithmic stablecoins are dependent on incentive mechanisms and speculation, and haven’t had a proven implementation yet. However, all three are interesting in their approaches and allow users to gauge their comfort in using their choice of asset.

Stablecoin implementation types Reserve backed

A reserve backed stablecoin is one that is typically issued by a central provider and is backed 1:1 to a fiat currency by means of both a tokenization and redemption process. To generate new tokens, customers send collateral to the provider, and the provider then mints or creates new tokens. The provider typically has the underlying collateral under custody with regular attestation reports, and “burns” or removes the tokens from circulation once they are redeemed.

Recent examples of reserve backed stablecoins include Gemini’s GUSD, CENTRE’s USDC, and Paxos’ PAX. Legacy reserve backed stablecoin Tether (USDT) has suffered much scrutiny due to the provider’s lack of regular auditing of the underlying collateral. However, the latest class of reserve backed stablecoins have been consistently providing audit reports from reputable firms.

Reserve backed stablecoins typically retain their peg from arbitrage cycles. For example, if Gemini’s GUSD is trading under $1, arbitrageurs are incentivized to buy the asset until it stabilizes and redeem the tokens for the underlying collateral, thereby making a profit. If the asset is trading over $1, arbitrageurs are incentivized to send collateral to Gemini in order to generate new tokens and sell them for the higher rate.
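
That arbitrage cycle can be sketched in a few lines of Python. This is an illustrative model only (the function names and the $1.00 peg parameter are ours), and it ignores trading fees, redemption delays, and counterparty risk:

```python
def arbitrage_action(market_price: float, peg: float = 1.00) -> str:
    """Profitable action for an arbitrageur against a reserve backed peg."""
    if market_price < peg:
        # Buy discounted tokens on the market, redeem them 1:1 for reserves.
        return "buy-and-redeem"
    if market_price > peg:
        # Deposit collateral to mint tokens at $1, sell them at a premium.
        return "mint-and-sell"
    return "hold"  # at the peg there is no arbitrage profit

def gross_profit(market_price: float, tokens: float, peg: float = 1.00) -> float:
    """Gross profit per cycle, before fees and settlement delay."""
    return abs(peg - market_price) * tokens
```

Either action pushes the market price back toward the peg: buying shrinks circulating supply below $1, while minting expands it above $1.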

Typically in these systems, there’s an increased counterparty risk due to the ability of the central provider to freeze assets at any time. This issue came up recently with Gemini customers having redemption issues.

Crypto collateralized

A crypto collateralized stablecoin is one that has cryptocurrency as its underlying collateral and uses price feeds associated with the collateral as a means of keeping the stablecoin at one dollar. In the case of crypto collateralized stablecoins, plenty of projects have made the attempt at stabilizing a system, but none have succeeded as much as MakerDAO.

Originally launched in 2017, DAI, the crypto collateralized stablecoin from the MakerDAO project, has remained relatively stable with continued development interest. Currently, DAI is collateralized by Ether (ETH), the native currency of the Ethereum blockchain, and the project plans on utilizing other assets for collateral in the future. Users wishing to obtain DAI must first lock ETH in a “CDP,” or collateralized debt position. Due to the high volatility of ETH, DAI is typically overcollateralized at rates well over 100%.

If an individual’s CDP is ever close to being insolvent, the system triggers the sale of the user’s underlying collateral. If the system ever becomes insolvent due to a market crash in the price of the underlying collateral, Maker or “MKR,” the other token in the MakerDAO system, acts as a buyer of last resort where new MKR tokens are minted, effectively diluting current holders, and sold on the open market as a means of stabilizing the system.
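
A minimal sketch of that solvency logic, assuming an illustrative 150% minimum collateralization ratio (the function names and threshold are ours and may not match MakerDAO’s actual parameters):

```python
def collateral_ratio(eth_locked: float, eth_price_usd: float,
                     dai_debt: float) -> float:
    """Collateralization ratio: USD value of locked collateral over DAI debt."""
    return (eth_locked * eth_price_usd) / dai_debt

def must_liquidate(eth_locked: float, eth_price_usd: float, dai_debt: float,
                   min_ratio: float = 1.5) -> bool:
    """True when the CDP falls below the minimum ratio, triggering a
    forced sale of the underlying collateral."""
    return collateral_ratio(eth_locked, eth_price_usd, dai_debt) < min_ratio
```

For example, 10 ETH locked at $200 backing 1,000 DAI gives a 200% ratio; if ETH falls to $140, the ratio drops to 140% and the position becomes subject to liquidation.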

MKR holders are responsible for voting on resolutions and regulating the entire system through MakerDAO’s governance mechanisms. A fee in MKR is also paid to open CDPs to acquire DAI, effectively reducing the supply as more are opened.

Currently, the key centralized aspect with the most risk in MakerDAO’s system is its oracles, which generate price feeds for the underlying collateral. Other centralized actors include “keepers,” automated market makers that keep DAI around its target price. Given the limited arbitrage opportunities in a redemption process such as DAI’s, external mechanisms like these must be relied upon to maintain stability.

Algorithmic

The last major implementation of stablecoins is based on algorithms that trigger supply inflation and deflation in relation to the stablecoin’s target price. Algorithmic stablecoins aren’t backed by collateral, but rather have speculators involved with associated secondary and tertiary assets to keep the system balanced.

Algorithmic stablecoins are typically reliant on an elastic supply scheme building on Robert Sams’s Seigniorage Shares, where a base stable asset is produced, but secondary and even tertiary assets are generated and redeemed to ensure system stability. Seigniorage is, in general, the profit made from creating currency: the difference between the cost of producing money and the value of the money itself. In an algorithmic system, speculation on the secondary and tertiary assets that keep the system stable is designed to yield a profit. One recent example of an attempt at creating such a system was Basis, a project seeking to build an algorithmic model, which recently refunded investors.

The Basis model included three tokens: a stablecoin to retain its peg at one dollar, and “bonds” and “shares” that act as the secondary and tertiary assets. Bonds are created by the system during periods of price decline under the desired peg and are purchased using the stablecoin to deflate its supply and increase its price. The bonds are immediately redeemed once the price reaches above its target of $1. Shares act as a hedge on a healthy system, as shareholders are granted newly minted stablecoins if all bonds have been redeemed and the stablecoin continues to trade above $1. The inflation mechanism of the stablecoin is meant to drive the price down, as shareholders are granted tokens from it.
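
A deliberately naive sketch of the inflate/deflate decision in such a three-token model (the rule and names below are our simplification, not Basis’s actual mechanism):

```python
def supply_action(price: float, peg: float = 1.00) -> str:
    """Which lever the protocol pulls given the stablecoin's market price."""
    if price < peg:
        # Sell bonds for stablecoins and remove those coins from
        # circulation, contracting supply to lift the price.
        return "issue-bonds"
    if price > peg:
        # Mint new stablecoins (redeeming outstanding bonds first, then
        # paying shareholders), expanding supply to push the price down.
        return "mint-to-holders"
    return "no-op"

def target_supply_delta(price: float, supply: float, peg: float = 1.00) -> float:
    """Supply change implied by a simple quantity-theory rule:
    scale supply in proportion to the price's deviation from the peg."""
    return supply * (price / peg) - supply
```

Under this rule, a coin trading at $1.10 with a supply of 1,000,000 calls for minting roughly 100,000 new coins, while trading at $0.90 calls for retiring roughly the same amount via bond sales.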

To date, no algorithmic stablecoin has launched successfully. The next attempt at an algorithmic stablecoin launch will likely come from Carbon, and their CUSD stablecoin. At the moment, Carbon is fully collateralized by U.S. dollars held in bank accounts, but they plan on eventually transitioning to an alternative system.

A plethora of implementations

There is no shortage of projects wishing to issue stablecoins, as they have become quite a hot topic in recent times. To date, there are more than a dozen launched stablecoin projects, with more continually being developed to serve different purposes. Most are pegged to the U.S. dollar, but a few are collateralized by other fiat currencies and commodities; the most notable is Digix, which is backed by gold held in a custodial vault. The ability to leverage cost savings within an existing ecosystem is one of the primary reasons institutions and enterprise companies partner on or create their own stablecoins.

Last year, in particular, saw a dramatic rise in the number of reserve backed stablecoins that launched. Businesses such as Paxos, Gemini, and Circle saw the benefits of launching these assets and have since been building robust ecosystems around them. They have also set the standard in regular auditing, providing attestation reports for each asset on a regular basis. This further lowers the risk profile of these assets while providing a safety mechanism that encourages originally skeptical outsiders to finally enter and interface with these technologies.

Volatility mitigation in production

In order to reap the benefits of stablecoins, decentralized finance (DeFi) services are beginning to accept stablecoins as a means of interfacing with services. DeFi services are those that provide traditional financial services while ensuring transactions are protected, peer to peer, and that users retain custody of their assets at all times. Examples of these services include exchanges, automated asset management funds, and debt platforms.

One example of such a service is Dharma, a suite of decentralized lending products. Dharma’s main product involves peer-to-peer lending where users can collateralize their cryptocurrencies in exchange for Ethereum or USDC, a stablecoin. Receiving a stablecoin lets the recipient avert risk by avoiding market volatility. Even BlockFi, a competitor to Dharma, recently announced the integration of GUSD on its platform to help users avoid volatility.

Decentralized exchanges have also taken note of stablecoins and have continually added them as an available asset for trading. For example, Ethfinex, DDex, and Kyber Network have all listed DAI on their exchanges in order to better assist new traders and those who need to exchange assets and avoid risk after liquidation. Even seasoned traders can now use DAI in order to hedge risk in short-term market fluctuations and benefit from its stability.

There are a number of additional projects in the process of launching this year, including MelonPort, an asset management protocol, and Origin, a marketplace generation protocol that will also accept stablecoins as a way to mitigate volatility risk. The increase in stablecoin adoption is a positive signaling mechanism that is demonstrating a natural fit. As more decentralized applications and protocols continue to be developed, stablecoins will naturally serve as a transactional asset to greatly benefit end users.

Mass proliferation, or “One to Rule Them All”

The recent announcement of the JPM Coin, a stablecoin by J.P. Morgan (which clearly demonstrates institutional interest in the space), is going to usher in a wave of stablecoins in the next few years. These assets will range from other sovereign-backed fiat currencies such as the yuan, yen, and euro, to traditional commodities such as precious metals and rare earth minerals. Their implementations will no doubt take note of current iterations that provide a reliable price peg free from value fluctuation (if pegged to a fiat currency) and the least amount of risk for users.

The reasoning is that in the short to medium term, stablecoins provide an entrance to a new financial system. Through this stable onramp into blockchain systems, users need not worry about market volatility when thinking about traditional payments, nor about open platform access being rescinded due to a market downturn. Stablecoins offer price predictability and a way to sideline funds during market swings without paying additional fees to move back into fiat.

It is unlikely in the long term that there will be as many stablecoins as are currently available and being developed. While there won’t be a single controlling stablecoin, due to the myriad of assets they could be pegged to, there will be a basket that commands a majority of the trading volume. Even with all of its historical issues, Tether still commands the vast majority of trading volume relative to other stablecoins, simply because of its social relativity and continued redemption and minting arbitrage opportunities. Competition will drive pricing down and force the technology to grow quickly, but it will also result in low or negative margins for firms that produce these assets without operating at sufficient scale.

The other inherent risk to consider is if and when cryptocurrencies reach a sufficiently high market capitalization and trading interest. Once enough volume reaches traditional medium of exchange (MoE) cryptocurrencies, prices may begin to plateau and stabilize due to increased liquidity. If Bitcoin were to reach a sufficient amount of trading volume, a daily price fluctuation of 10%+ would be highly unlikely, lessening the value proposition of stablecoins. However, for the foreseeable future, stablecoins will continue to provide a key stability mechanism by way of the policy and monetary controls we are starting to see implemented in some popular stablecoins today.

Allowing stablecoins to have monetary controls in place gives not just users but also regulators and institutional entities a comfortable experience with a traditionally volatile asset class. Enterprises didn’t jump straight into cloud computing, and to this day most still run at least some of their processes locally; the same will most likely be true of stablecoins and their adoption. Facebook is reportedly working on a stablecoin for its WhatsApp messenger and has been testing it in India. This allows the company to govern the monetary policy inside its apps and build pathways into and out of its ecosystems. Not only does this save on the transaction fees associated with more traditional systems, it also guarantees stability in Facebook’s walled gardens while using cryptocurrency. As an already digitally native company, Facebook can test the cost savings while also complying with various regulatory requirements for its users across geographic lines.

As the stablecoin ecosystem continues to grow between enterprise players and existing blockchain ecosystem players such as Gemini and Coinbase, users will continue to have the freedom and flexibility to transition between fiat and crypto seamlessly, and more importantly, be granted the stability needed to avoid cryptocurrency’s current volatility crisis.

Continue reading Stablecoins: Solving the cryptocurrency volatility crisis.

Categories: Technology

Four short links: 25 April 2019

O'Reilly Radar - Thu, 2019/04/25 - 03:50

Values Risk, Brain Interface, Hacking Scooters, and Behavioral Change

  1. Fastly S-1 (SEC) -- Our dedication to our values may negatively influence our financial results. We have taken, and may continue to take, actions that we believe are in the best interests of our customers and our business, even if those actions do not maximize financial results in the short term. For instance, we do not knowingly allow our platform to be used to deliver content from groups that promote violence or hate, and that conflict with our values like strong ethical principles of integrity and trustworthiness, among others. However, this approach may not result in the benefits that we expect or may result in negative publicity, in which case our business could be harmed. (via Anil Dash)
  2. Brain Implant Can Say What You’re Thinking (IEEE Spectrum) -- a new type of BCI, powered by neural networks, that might enable individuals with paralysis or stroke to communicate at the speed of natural speech—an average of 150 words per minute. The technology works via a unique two-step process: first, it translates brain signals into movements of the vocal tract, including the jaw, larynx, lips, and tongue. Second, it synthesizes those movements into speech. The system, which requires a palm-size array of electrodes to be placed directly on the brain, provides a proof of concept that it is possible to reconstruct natural speech from brain activity, the authors say.
  3. Australian Lime Scooters Hacked To Say Sexual Things To Riders -- And while this was just audio files, there have been concerns about scooter hacks that might be more dangerous. Researchers at the security firm Zimperium recently demonstrated that they could force a scooter to accelerate and brake by using a Bluetooth-enabled app from up to 100m away. But Lime doesn’t operate the scooter model that was used in Zimperium’s hack demonstration. Users are hacking scooters around the world to max out their speed and get free rides. But other people are simply interested in adding a little chaos to the world. People have been placing stickers over the QR codes used to start a ride, smashing the scooters in the street, and sometimes simply setting them on fire.
  4. The Behavioural Change Stairway Model -- Active Listening; Empathy; Rapport; Influence; Behavioural Change. [...] Though the stakes of business negotiations are usually not as high as those of a hostage negotiation, the psychological basis for defusing conflict is similar in the two contexts. The manager who is negotiating with a frustrated employee or client will be well served by walking with his or her counterpart up the “Behavioral Change Stairway.” (via Simon Willison)
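The two-step decoding pipeline in the brain-implant item above can be sketched as two chained transformations. The dimensions and the random linear weights below are placeholders for the trained recurrent networks in the actual study; only the stage structure (signals to articulator movements, movements to acoustics) follows the description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 256 ECoG channels, 33 articulator
# kinematics (jaw, larynx, lips, tongue), 32 acoustic features.
N_CHANNELS, N_ARTIC, N_ACOUSTIC = 256, 33, 32

# Stage 1: brain signals -> vocal-tract movements (stand-in weights).
W1 = rng.standard_normal((N_CHANNELS, N_ARTIC)) * 0.01
# Stage 2: movements -> acoustic features; a vocoder would then
# synthesize audible speech from these features (not shown).
W2 = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) * 0.1

def decode(ecog: np.ndarray) -> np.ndarray:
    """Map a (time, channels) signal to (time, acoustic features)."""
    articulators = np.tanh(ecog @ W1)   # stage 1: kinematics
    acoustics = articulators @ W2       # stage 2: acoustics
    return acoustics

signal = rng.standard_normal((100, N_CHANNELS))  # 100 time steps
print(decode(signal).shape)
```

Factoring the problem through the vocal tract, rather than mapping brain signals straight to audio, is what the authors credit for the naturalness of the reconstructed speech.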

Continue reading Four short links: 25 April 2019.

Categories: Technology

Four short links: 24 April 2019

O'Reilly Radar - Wed, 2019/04/24 - 03:55

Control is a Shrug, Glitch Languages, Streaming Media Server, and CRISPR's New Model Organisms

  1. Users Want Control is a Shrug (Ian Bicking) -- Making the claim “users want control” is the same as saying you don’t know what users want, you don’t know what is good, and you don’t know what their goals are.
  2. Language Support on Glitch: A List -- a write-up of getting languages running in the Glitch environment. (via Simon Willison)
  3. Ant Media Server -- open source streaming media server, supports RTMP, RTSP, WebRTC, and Adaptive Bitrate. It can also record videos in MP4, HLS, and FLV.
  4. CRISPR Gene-editing Creates Wave of Exotic Model Organisms (Nature) -- Biologists have embraced CRISPR’s ability to quickly and cheaply modify the genomes of popular model organisms, such as mice, fruit flies, and monkeys. Now they are trying the tool on more-exotic species, many of which have never been reared in a lab or had their genomes analyzed. “We finally are ready to start expanding what we call a model organism,” says Tessa Montague, a molecular biologist at Columbia University in New York City.

Continue reading Four short links: 24 April 2019.

Categories: Technology

Four short links: 23 April 2019

O'Reilly Radar - Tue, 2019/04/23 - 05:25

Worker-run Gig Factories, Persistence of Firefighting, Discriminating Systems, and Activation Atlas

  1. When Workers Control the Code (Wired) -- workers form co-ops to code and run gig economy apps, and make decent rates because there's no rent-seeker platform in the middle. A great counter for rising prices and plummeting driver pay post-IPO. (via BoingBoing)
  2. The Persistence of Firefighting in Product Development -- The most important result of our studies is that product development systems have a tipping point. In models of infectious diseases, the tipping point represents the threshold of infectivity and susceptibility beyond which a disease becomes an epidemic. Similarly, in product development systems there exists a threshold for problem-solving activity that, when crossed, causes firefighting to spread rapidly from a few isolated projects to the entire development system. Our analysis also shows that the location of the tipping point, and therefore the susceptibility of the system to the firefighting phenomenon, is determined by resource utilization in steady state.
  3. Discriminating Systems -- headlines from the major findings: There is a diversity crisis in the AI sector across gender and race. The AI sector needs a profound shift in how it addresses the current diversity crisis. The overwhelming focus on "women in tech" is too narrow and likely to privilege white women over others. Fixing the "pipeline" won’t fix AI’s diversity problems. The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. Also comes with recommendations.
  4. Activation Atlas -- By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned which can reveal how the network typically represents some concepts. Beautiful.
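The tipping point described in the firefighting paper (item 2) behaves like a threshold in a feedback loop: below it, backlogs drain; above it, unresolved problems spawn more work and the backlog explodes. The simulation below is an invented illustration of that dynamic, not the paper's model; the arrival, capacity, and spawn-rate numbers are arbitrary.

```python
# Toy illustration of a tipping point in problem-solving capacity.
# Each week a team resolves up to `capacity` problems while `arrivals`
# new problems appear. Unresolved backlog spawns extra "firefighting"
# work (rework, interruptions), modeled by the spawn_rate term.

def backlog_after(weeks: int, arrivals: float, capacity: float,
                  spawn_rate: float = 0.3) -> float:
    backlog = 0.0
    for _ in range(weeks):
        backlog += arrivals + spawn_rate * backlog   # new + induced work
        backlog = max(0.0, backlog - capacity)       # work resolved
    return backlog

low = backlog_after(52, arrivals=8, capacity=10)    # below the threshold
high = backlog_after(52, arrivals=12, capacity=10)  # past the tipping point
print(low < 1, high > 1000)
```

A modest change in steady-state utilization (8 versus 12 arrivals against a capacity of 10) is the difference between an empty queue and a backlog that grows geometrically, which is the epidemic-style threshold the authors describe.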

Continue reading Four short links: 23 April 2019.

Categories: Technology

0x66: The End of Hellwig vs. VMware

FAIF - Mon, 2019/04/22 - 14:41

Bradley and Karen discuss the details of the completion of the lawsuit (which Conservancy supported) between Christoph Hellwig and VMware in Germany.

Show Notes: Segment 0 (00:37) Segment 1 (09:26) Segment 2 (33:01)
  • In the next episode, Karen will discuss the Kernel Enforcement Statement Additional Permission, and the Red Hat “Cooperation Commitment”. (35:40)
  • Send feedback and comments on the cast to <oggcast@faif.us>. You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on Twitter and FaiF on Twitter.

Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums.

The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).

Categories: Free Software

Four short links: 22 April 2019

O'Reilly Radar - Mon, 2019/04/22 - 04:55

GANs via Spreadsheet, Open Source Chat, Sandboxing Libraries, and Flat Robot Sales

  1. Spacesheet -- Interactive Latent Space Exploration through a Spreadsheet Interface. (via Flowing Data)
  2. Tchap -- the French government's open source secure encrypted chat tool, built off the open source Riot. (via ZDNet)
  3. Sandboxed API -- Google open-sourced their tool for automatically generating sandboxes for C/C++ libraries. (via Google Blog)
  4. Industrial Robot Sales Flat (Robohub) -- It was only up 1% over 2017. Important note: no information was given about service and field robotics (which may well be booming).

Continue reading Four short links: 22 April 2019.

Categories: Technology

Four short links: 19 April 2019

O'Reilly Radar - Fri, 2019/04/19 - 02:00

AI Music, Mind-Controlled Robot Hands, Uber's Repo Tools, and Career Resilience

  1. AI and Music (The Verge) -- total legal clusterf*ck.
  2. A Robot Hand Controlled with the Mind -- student uses open source hand and trains brain-machine interface, and holy crap we live in an age when these kinds of things are relatively easy to do rather than requiring massive resources.
  3. Keeping Master Green -- This paper presents the design and implementation of SubmitQueue. It guarantees an always green master branch at scale: all build steps (e.g., compilation, unit tests, UI tests) successfully execute for every commit point. SubmitQueue has been in production for over a year and can scale to thousands of daily commits to giant monolithic repositories. Uber's tech. (via Adrian Colyer)
  4. Early Career Setback and Future Career Impact -- Our analyses reveal that an early-career near miss has powerful, opposing effects. On one hand, it significantly increases attrition, with one near miss predicting more than a 10% chance of disappearing permanently from the NIH system. Yet, despite an early setback, individuals with near misses systematically outperformed those with near wins in the longer run, as their publications in the next 10 years garnered substantially higher impact. We further find that this performance advantage seems to go beyond a screening mechanism, whereby a more selected fraction of near-miss applicants remained than the near winners, suggesting that early-career setback appears to cause a performance improvement among those who persevere. Overall, the findings are consistent with the concept that "what doesn't kill me makes me stronger."
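The always-green invariant at the heart of SubmitQueue (item 3) can be approximated by a simple serial gate: each pending change is built against the projected master state, and master only advances on a passing build. The real system adds speculation and parallelism to reach thousands of commits per day; this hypothetical sketch keeps only the invariant.

```python
from typing import Callable, List

def submit_queue(master: List[str], pending: List[str],
                 build_passes: Callable[[List[str]], bool]) -> List[str]:
    """Advance master one change at a time, only on green builds.

    `build_passes` stands in for running compilation and tests
    against the projected state (master plus the candidate change).
    """
    for change in pending:
        candidate = master + [change]
        if build_passes(candidate):   # build the projected master state
            master = candidate        # master stays green by construction
        # failing changes are rejected and never reach master
    return master

# Hypothetical example: any state containing "bad" fails the build.
result = submit_queue(
    master=["a"],
    pending=["b", "bad", "c"],
    build_passes=lambda state: "bad" not in state,
)
print(result)
```

Because every merge is validated against the exact state it will produce, a broken change can never land, which is the guarantee the paper scales up with speculative, parallel builds.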

Continue reading Four short links: 19 April 2019.

Categories: Technology

Computational propaganda

O'Reilly Radar - Thu, 2019/04/18 - 13:00

Sean Gourley considers the repercussions of AI-generated content that blurs the line between what's real and what's fake.

Continue reading Computational propaganda.

Categories: Technology
