
Technology Trends for 2023

O'Reilly Radar - Wed, 2023/03/01 - 04:44

This year’s report on the O’Reilly learning platform takes a detailed look at how our customers used the platform. Our goal is to find out what they’re interested in now and how that changed from 2021—and to make some predictions about what 2023 will bring.

A lot has happened in the past year. In 2021, we saw that GPT-3 could write stories and even help people write software; in 2022, ChatGPT showed that you can have conversations with an AI. Now developers are using AI to write software. Late in 2021, Mark Zuckerberg started talking about “the metaverse,” and fairly soon, everyone was talking about it. But the conversation cooled almost as quickly as it started. Back then, cryptocurrency prices were approaching a high, and NFTs were “a thing”…then they crashed.

What’s real, and what isn’t? Our data shows us what O’Reilly’s 2.8 million users are actually working on and what they’re learning day-to-day. That’s a better measure of technology trends than anything that happens among the Twitterati. The answers usually aren’t found in big impressive changes; they’re found in smaller shifts that reflect how people are turning the big ideas into real-world products. The signals are often confusing: for example, interest in content about the “big three” cloud providers is slightly down, while interest in content about cloud migration is significantly up. What does that mean? Companies are still “moving into the cloud”—that trend hasn’t changed—but as some move forward, others are pulling back (“repatriation”) or postponing projects. It’s gratifying when we see an important topic come alive: zero trust, which reflects an important rethinking of how security works, showed tremendous growth. But other technology topics (including some favorites) are hitting plateaus or even declining.

While we don’t discuss the economy as such, it’s always in the background. Whether or not we’re actually in a recession, many in our industry perceive us to be so, and that perception can be self-fulfilling. Companies that went on a hiring spree over the past few years are now realizing that they made a mistake—and that includes both giants that do layoffs in the tens of thousands and startups that thought they had access to an endless stream of VC cash. In turn, that reality influences the actions individuals take to safeguard their jobs or increase their value should they need to find a new one.

Methodology

This report is based on our internal “units viewed” metric, which is a single metric across all the media types included in our platform: ebooks, of course, but also videos and live training courses. We use units viewed because it measures what people actually do on our platform. But it’s important to recognize the metric’s shortcomings; as George Box (almost)1 said, “All metrics are wrong, but some are useful.” Units viewed tends to discount the usage of new topics: if a topic is new, there isn’t much content, and users can’t view content that doesn’t exist. As a counter to our focus on units viewed, we’ll take a brief look at searches, which aren’t constrained by the availability of content. For the purposes of this report, units viewed is always normalized to 1, where 1 is assigned to the greatest number of units in any group of topics.
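
To make the normalization concrete, here's a minimal sketch in Python. The topic names and counts are hypothetical, not actual platform data; the point is only that every usage number in a group is divided by the largest number in that group.

```python
# Minimal sketch of the "units viewed" normalization described above.
# The topic names and counts are hypothetical, not actual platform data.
raw_units = {
    "software development": 120_000,
    "IT operations": 70_000,
    "data": 65_000,
}

max_units = max(raw_units.values())
normalized = {topic: units / max_units for topic, units in raw_units.items()}

print(normalized)
# {'software development': 1.0, 'IT operations': 0.583..., 'data': 0.541...}
```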

It’s also important to remember that these “units” are “viewed” by our users. Whether they access the platform through individual or corporate accounts, O’Reilly members are typically using the platform for work. Despite talk of “internet time,” our industry doesn’t change radically from day to day, month to month, or even year to year. We don’t want to discount or undervalue those who are picking up new ideas and skills—that’s an extremely important use of the platform. But if a company’s IT department were working on its ecommerce site in 2021, they were still working on that site in 2022, they won’t stop working on it in 2023, and they’ll be working on it in 2024. They might be adding AI-driven features or moving it to the cloud and orchestrating it with Kubernetes, but they’re not likely to drop React (or even PHP) to move to the latest cool framework.

However, when the latest cool thing demonstrates a few years of solid growth, it can easily become one of the well-established technologies. That’s happening now with Rust. Rust isn’t going to take over from Java and Python tomorrow, let alone in 2024 or 2025, but the movement is real. Finally, it’s wise to be skeptical about “noise.” Changes of one or two percentage points often mean little. But when a mature technology that’s leading its category stops growing, it’s fair to wonder whether it’s hit a plateau and is en route to becoming a legacy technology.

The Biggest Picture

We can get a high-level view of platform usage by looking at usage for our top-level topics. Content about software development was the most widely used (31% of all usage in 2022), which includes software architecture and programming languages. Software development is followed by IT operations (18%), which includes cloud, and by data (17%), which includes machine learning and artificial intelligence. Business (13%), security (8%), and web and mobile (6%) come next. That’s a fairly good picture of our core audience’s interests: solidly technical, focused on software rather than hardware, but with a significant stake in business topics.

Total platform usage grew by 14.1% year over year, more than doubling the 6.2% gain we saw from 2020 to 2021. The topics that saw the greatest growth were business (30%), design (23%), data (20%), security (20%), and hardware (19%)—all in the neighborhood of 20% growth. Software development grew by 12%, which sounds disappointing, although in any study like this, the largest categories tend to show the least change. Usage of resources about IT operations only increased by 6.9%. That’s a surprise, particularly since the operations world is still coming to terms with cloud computing.

O’Reilly learning platform usage by topic year over year

While this report focuses on content usage, a quick look at search data gives a feel for the most popular topics, in addition to the fastest growing (and fastest declining) categories. Python, Kubernetes, and Java were the most popular search terms. Searches for Python showed a 29% year-over-year gain, while searches for Java and Kubernetes were almost unchanged: Java gained 3% and Kubernetes declined 4%. But it’s also important to note what searches don’t show: when we look at programming languages, we’ll see that content about Java is more heavily used than content about Python (although Python is growing faster).

Similarly, the actual use of content about Kubernetes showed a slight year-over-year gain (4.4%), despite the decline in the number of searches. And despite being the second-most-popular search term, units viewed for Kubernetes were only 41% of those for Java and 47% of those for Python. This difference between search data and usage data may mean that developers “live” in their programming languages, not in their container tools. They need to know about Kubernetes and frequently need to ask specific questions—and those needs generate a lot of searches. But they’re working with Java or Python constantly, and that generates more units viewed.

The Go programming language is another interesting case. “Go” and “Golang” are distinct search strings, but they’re clearly the same topic. When you add searches for Go and Golang, the Go language moves from 15th and 16th place up to 5th, just behind machine learning. However, the change in use of each search term was relatively small: a 1% decline for Go and an 8% increase for Golang. Looking at Go as a topic category, we see something different: usage of content about Go is significantly behind the leaders, Java and Python, but still the third highest on our list, and with a 20% gain from 2021 to 2022.

Looking at searches is worthwhile, but it’s important to realize that search data and usage data often tell different stories.

Top searches on the O’Reilly learning platform year over year

Searches can also give a quick picture of which topics are growing. The top three year-over-year gains were for the CompTIA Linux+ certification, the CompTIA A+ certification, and transformers (the AI model that’s led to tremendous progress in natural language processing). However, none of these are what we might call “top tier” search terms: they had ranks ranging from 186 to 405. (That said, keep in mind that the number of unique search terms we see is well over 1,000,000. It’s a lot easier for a search term with a few thousand queries to grow than it is for a search term with 100,000 queries.)

The sharpest declines in search frequency were for cryptocurrency, Bitcoin, Ethereum, and Java 11. There are no real surprises here. This has been a tough year for cryptocurrency, with multiple scandals and crashes. As of late 2021, Java 11 was no longer the current long-term support (LTS) release of Java; that’s moved on to Java 17.

What Our Users Are Doing (in Detail)

That’s a high-level picture. But where are our users actually spending their time? To understand that, we’ll need to take a more detailed look at our topic hierarchy—not just at the topics at the top level but at those in the inner (and innermost) layers.

Software Development

The biggest change we’ve seen is the growth in interest in coding practices; 35% year-over-year growth can’t be ignored, and indicates that software developers are highly motivated to improve their practice of programming. Coding practices is a broad topic that encompasses a lot—software maintenance, test-driven development, maintaining legacy software, and pair programming are all subcategories. Two smaller categories that are closely related to coding practices also showed substantial increases: usage of content about Git (a distributed version control system and source code repository) was up 21%, and QA and testing was up 78%. Practices like the use of code repositories and continuous testing are still spreading to both new developers and older IT departments. These practices are rarely taught in computer science programs, and many companies are just beginning to put them to use. Developers, both new and experienced, are learning them on the job.

Going by units viewed, design patterns is the second-largest category, with a year-over-year increase of 13%. Object-oriented programming showed a healthy 24% increase. The two are closely related, of course; while the concept of design patterns is applicable to any programming paradigm, object-oriented programming (particularly Java, C#, and C++) is where they’ve taken hold.

It’s worth taking a closer look at design patterns. Design patterns are solutions to common problems—they help programmers work without “reinventing wheels.” Above all, design patterns are a way of sharing wisdom. They’ve been abused in the past by programmers who thought software was “good” if it used “design patterns,” and jammed as many into their code as possible, whether or not it was appropriate. Luckily, we’ve gotten beyond that now.
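
As a small illustration of what a design pattern looks like in code, here is a minimal sketch of one classic pattern, Strategy, in Python. The functions and prices are invented for the example.

```python
# A minimal sketch of the Strategy pattern: interchangeable behaviors
# behind a common interface. Names and prices are invented for illustration.
from typing import Callable

def standard_shipping(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg

def express_shipping(weight_kg: float) -> float:
    return 12.0 + 2.5 * weight_kg

def quote(weight_kg: float, pricing_strategy: Callable[[float], float]) -> float:
    # The caller picks the strategy; quote() doesn't care which one it gets.
    return pricing_strategy(weight_kg)

print(quote(3.0, standard_shipping))  # 8.6
print(quote(3.0, express_shipping))   # 19.5
```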

What about functional programming? The “object versus functional” debates of a few years ago are over for the most part. The major ideas behind functional programming can be implemented in any language, and functional programming features have been added to Java, C#, C++, and most other major programming languages. We’re now in an age of “multiparadigm” programming. It feels strange to conclude that object-oriented programming has established itself, because in many ways that was never in doubt; it has long been the paradigm of choice for building large software systems. As our systems are growing ever larger, object-oriented programming’s importance seems secure.
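
As a rough sketch of what “multiparadigm” means in practice, here’s the same small computation written imperatively and functionally in Python; equivalents exist in Java streams, C# LINQ, and modern C++.

```python
# The same task in imperative and functional style; modern mainstream
# languages offer both. The order data is invented for the example.
orders = [19.99, 5.00, 120.50, 42.00]

# Imperative style: explicit loop with mutable state
total = 0.0
for price in orders:
    if price >= 20:
        total += price * 1.08

# Functional style: filter and map expressed as a comprehension, no mutation
total_fp = sum(price * 1.08 for price in orders if price >= 20)

assert abs(total - total_fp) < 1e-9
```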

Leadership and management also showed very strong growth (38%). Software developers know that product development isn’t just about code; it relies heavily on communication, collaboration, and critical thinking. They also realize that management or team leadership may well be the next step in their career.

Finally, we’d be remiss not to mention quantum computing. It’s the smallest topic category in this group but showed a 24% year-over-year gain. The first quantum computers are now available through cloud providers like IBM and Amazon Web Services (AWS). While these computers aren’t yet powerful enough to do any real work, they make it possible to get a head start on quantum programming. Nobody knows when quantum computers will be substantial enough to solve real-world problems: maybe two years, maybe 20. But programmers are clearly interested in getting started.

Year-over-year growth for software development topics

Software architecture

Software architecture is a very broad category that encompasses everything from design patterns (which we also saw under software development) to relatively trendy topics like serverless and event-driven architecture. The largest topic in this group was, unsurprisingly, software architecture itself: a category that includes books on the fundamentals of software architecture, systems thinking, communication skills, and much more—almost anything to do with the design, implementation, and management of software. Not only was this a large category, but it also grew significantly: 26% from 2021 to 2022. Software architect has clearly become an important role, the next step for programming staff who want to level up their skills.

For several years, microservices has been one of the most popular topics in software architecture, and this year is no exception. It was the second-largest topic and showed 3.6% growth over 2021. Domain-driven design (DDD) was the third-most-commonly-used topic, although smaller; it also showed growth (19%). Although DDD has been around for a long time, it came into prominence with the rise of microservices as a way to think about partitioning an application into independent services.

Is the relatively low growth of microservices a sign of change? Have microservices reached a peak? We don’t think so, but it’s important to understand the complex relationship between microservices and monolithic architectures. Monoliths inevitably become more complex over time, as bug fixes, new business requirements, the need to scale, and other issues need to be addressed. Decomposing a complex monolith into a complex set of microservices is a challenging task and certainly one that shouldn’t be underestimated: developers are trading one kind of complexity for another in the hope of achieving increased flexibility and scalability long-term. Microservices are no longer a “cool new idea,” and developers have recognized that they’re not the solution to every problem. However, they are a good fit for cloud deployments, and they leave a company well-positioned to offer its services via APIs and become an “as a service” company. Microservices are unlikely to decline, though they may have reached a plateau. They’ve become part of the IT landscape. But companies need to digest the complexity trade-off.

Web APIs, which companies use to provide services to remote client software via the web’s HTTP protocol, showed a very healthy increase (76%). This increase shows that we’re moving even more strongly to an “API economy,” where the most successful companies are built not around products but around services accessed through web APIs. That, after all, is the basis for all “software as a service” companies; it’s the basis on which all the cloud providers are built; it’s what ties Amazon’s business empire together. RESTful APIs saw a smaller increase (6%); the momentum has clearly moved from the simplicity of REST to more complex APIs that use JSON, GraphQL, and other technologies to move information.
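
For readers who haven’t built against a web API, here is a minimal sketch of what “a service accessed over HTTP” looks like from the client side; the endpoint, fields, and token are hypothetical.

```python
# Minimal sketch of calling a (hypothetical) RESTful web API over HTTPS.
# Requires the third-party 'requests' library: pip install requests
import requests

response = requests.get(
    "https://api.example.com/v1/orders",      # hypothetical endpoint
    params={"status": "shipped", "limit": 10},
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()
for order in response.json():                  # assumes a JSON array response
    print(order["id"], order["total"])
```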

The 29% increase in the usage of content about distributed systems is important. Several factors drive the increase in distributed systems: the move to microservices, the need to serve astronomical numbers of online clients, the end of Moore’s law, and more. The time when a successful application could run on a single mainframe—or even on a small cluster of servers in a rack—is long gone. Modern applications run across hundreds or thousands of computers, virtual machines, and cloud instances, all connected by high-speed networks and data buses. That includes software running on single laptops equipped with multicore CPUs and GPUs. Distributed systems require designing software that can run effectively in these environments: software that’s reliable, that stays up even when some servers or networks go down, and where there are as few performance bottlenecks as possible. While this category is still relatively small, its growth shows that software developers have realized that all systems are distributed systems; there is no such thing as an application that runs on a single computer.

Year-over-year growth for software architecture and design topics

What about serverless? Serverless looks like an excellent technology for implementing microservices, but it’s been giving us mixed signals for several years now. Some years it’s up slightly; some years it’s down slightly. This year, it’s down 14%, and while that’s not a collapse, we have to see that drop as significant. Like microservices, serverless is no longer a “cool new thing” in software architecture, but the decrease in usage raises questions: Are software developers nervous about the degree of control serverless puts in the hands of cloud providers, spinning up and shutting down instances as needed? That could be a big issue. Cloud customers want to get their accounts payable down, cloud providers want to get their accounts receivable up, and if the provider tweaks a few parameters that the customer never sees, that balance could change a lot. Or has serverless just plunged into the “trough of disillusionment,” from which it will eventually emerge onto the “plateau of productivity”? Or maybe it’s just an idea whose time came and went? Whatever the reason, serverless has never established itself convincingly. Next year may give us a better idea…or just more ambiguity.

Programming languages

The stories we can tell about programming languages are little changed from last year. Java is the leader (with 1.7% year-over-year growth), followed by Python (3.4% growth). But as we look down the chart, we see some interesting challengers to the status quo. Go’s usage is only 20% of Java’s, but it’s seen 20% growth. That’s substantial. C++ is hardly a new language—and we typically expect older languages to be more stable—but it had 19% year-over-year growth. And Rust, with usage that’s only 9% of Java, had 22% growth from 2021 to 2022. Those numbers don’t foreshadow a revolution—as we said at the outset, very few companies are going to take infrastructure written in Java and rewrite it in Go or Rust just so they can be trend compliant. As we all know, a lot of infrastructure is written in COBOL, and that isn’t going anywhere. But both Rust and Go have established themselves in key areas of infrastructure: Docker and Kubernetes are both written in Go, and Rust is establishing itself in the security community (and possibly also the data and AI communities). Go and Rust are already pushing older languages like C++ and Java to evolve. With a few more years of 20% growth, Go and Rust will be challenging Java and Python directly, if they aren’t challenging them already for greenfield projects.

JavaScript is an anomaly on our charts: total usage is 19% of Java’s, with a 4.6% year-over-year decline. JavaScript shows up at, or near, the top on most programming language surveys, such as RedMonk’s rankings (usually in a virtual tie with Java and Python). However, the TIOBE Index shows more space between Python (first place), Java (fourth), and JavaScript (seventh)—more in line with our observations of platform usage. We attribute JavaScript’s decline partly to the increased influence of TypeScript, a statically typed variant of JavaScript that compiles to JavaScript (12% year-over-year increase). One thing we’ve noticed over the past few years: while programmers had a long dalliance with duck typing and dynamic languages, as applications (and teams) grew larger, developers realized the value of strong, statically typed languages (TypeScript certainly, but also Go and Rust, though these are less important for web development). This shift may be cyclical; a decade from now, we may see a revival of interest in dynamic languages. Another factor is the use of frameworks like React, Angular, and Node.js, which are undoubtedly JavaScript but have their own topics in our hierarchy. However, when you add all four together, you still see a 2% decline for JavaScript, without accounting for the shift from JavaScript to TypeScript. Whatever the reason, right now, the pendulum seems to be swinging away from JavaScript. (For more on frameworks, see the discussion of web development.)

The other two languages that saw a drop in usage are C# (6.3%) and Scala (16%). Is this just noise, or is it a more substantial decline? The change seems too large to be a random fluctuation. Scala has always been a language for backend programming, as has C# (though to a lesser extent). While neither language is particularly old, it seems their shine has worn off. They’re both competing poorly with Go and Rust for new users. Scala is also competing poorly with the newer versions of Java, which now have many of the functional features that initially drove interest in Scala.

Year-over-year growth for programming languages

Security

Computer security has been in the news frequently over the past few years. That unwelcome exposure has both revealed cracks in the security posture of many companies and obscured some important changes in the field. The cracks are all too obvious: most organizations do a bad job of the basics. According to one report, 91% of all attacks start with a phishing email that tricks a user into giving up their login credentials. Phishes are becoming more frequent and harder to detect. Basic security hygiene is as important as ever, but it’s getting more difficult. And cloud computing generates its own problems. Companies can no longer protect all of their IT systems behind a firewall; many of the servers are running in a data center somewhere, and IT staff has no idea where they are or even if they exist as physical entities.

Given this shift, it’s not surprising that zero trust, an important new paradigm for designing security into distributed systems, grew 146% between 2021 and 2022. Zero trust abandons the assumption that systems can be protected on some kind of secure network; all attempts to access any system, whether by a person or software, must present proper credentials. Hardening systems, while it received the least usage, grew 91% year over year. Other topics with significant growth were secure coding (40%), advanced persistent threats (55%), and application security (46%). All of these topics are about building applications that can withstand attacks, regardless of where they run.
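
The core idea of zero trust can be sketched in a few lines: no caller is trusted because of where the request comes from; every request must carry credentials that are verified. The snippet below is a deliberately simplified, hypothetical illustration, not a production design.

```python
# Sketch of the zero trust idea: requests are never trusted based on network
# location; every call must present verifiable credentials. The shared-secret
# HMAC check here is deliberately simplified and hypothetical.
import hmac, hashlib

SHARED_SECRET = b"rotate-me-often"  # placeholder secret

def sign(service_id: str) -> str:
    return hmac.new(SHARED_SECRET, service_id.encode(), hashlib.sha256).hexdigest()

def handle_request(service_id: str, signature: str, payload: dict) -> str:
    # Verify identity on every request, even from "internal" callers.
    if not hmac.compare_digest(sign(service_id), signature):
        return "403 Forbidden"
    return f"200 OK: processed {payload}"

print(handle_request("billing-service", sign("billing-service"), {"invoice": 42}))
print(handle_request("billing-service", "forged-signature", {"invoice": 42}))
```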

Governance (year-over-year increase of 72%) is a very broad topic that includes virtually every aspect of compliance and risk management. Issues like security hygiene increasingly fall under “governance,” as companies try to comply with the requirements of insurers and regulators, in addition to making their operations more secure. Because almost all attacks start with a phish or some other kind of social engineering, just telling employees not to give their passwords away won’t help. Companies are increasingly using training programs, password managers, multifactor authentication, and other approaches to maintaining basic hygiene.

Year-over-year growth for security topics

Network security, which was the most heavily used security topic in 2022, grew by a healthy 32%. What drove this increase? Not the use of content about firewalls, which only grew 7%. While firewalls are still useful for protecting the IT infrastructure in a physical office, they’re of limited help when a substantial part of any organization’s infrastructure is in the cloud. What happens when an employee brings their laptop into the office from home or takes it to a coffee shop where it’s more vulnerable to attack? How do you secure WiFi networks for people working from home as well as in the office? The broader problem of network security has only become more difficult, and these problems can’t be solved by corporate firewalls.

Use of content about penetration testing and ethical hacking actually decreased by 14%, although it was the second-most-heavily-used security topic in our taxonomy (and the most heavily used in 2021).

Security certifications

Security professionals love their certifications. Our platform data shows that the most important certifications were CISSP (Certified Information Systems Security Professional) and CompTIA Security+. CISSP has long been the most popular security certification. It’s a very comprehensive certification oriented toward senior security specialists: candidates must have at least five years’ experience in the field to take the exam. Usage of CISSP-related content dropped 0.23% year over year—in other words, it was essentially flat. A change this small is almost certainly noise, but the lack of change may indicate that CISSP has saturated its market.

Compared to CISSP, the CompTIA Security+ certification is aimed at entry- or mid-level security practitioners; it’s a good complement to the other CompTIA certifications, such as Network+. Right now, the demand for security professionals exceeds the supply, and that’s drawing new people into the field. This fits with the increase in the use of content to prepare for the CompTIA Security+ exam, which grew 16% in the past year. The CompTIA CSA+ exam (recently renamed CySA+) is a more advanced certification aimed specifically at security analysts; it showed 37% growth.

Year-over-year growth for security certifications

Use of content related to the Certified Ethical Hacker certification dropped 5.9%. The reasons for this decline aren’t clear, given that demand for penetration testing (one focus of ethical hacking) is high. However, there are many certifications specifically for penetration testers. It’s also worth noting that penetration testing is frequently a service provided by outside consultants. Most companies don’t have the budget to hire full-time penetration testers, and that may make the CEH certification less attractive to people planning their careers.

CBK isn’t an exam; it’s the framework of material around which the International Information System Security Certification Consortium, more commonly known as (ISC)², builds its exams. With a 31% year-over-year increase for CBK content, it’s another clear sign that interest in security as a profession is growing. And even though (ISC)²’s marquee certification, CISSP, has likely reached saturation, other (ISC)² certifications show clear growth: CCSP (Certified Cloud Security Professional) grew 52%, and SSCP (Systems Security Certified Practitioner) grew 67%. Although these certifications aren’t as popular, their growth is an important trend.

Data

Data is another very broad category, encompassing everything from traditional business analytics to artificial intelligence. Data engineering was the dominant topic by far, growing 35% year over year. Data engineering deals with the problem of storing data at scale and delivering that data to applications. It includes moving data to the cloud, building pipelines for acquiring data and getting data to application software (often in near real time), resolving the issues that are caused by data siloed in different organizations, and more.

Apache Spark, a platform for large-scale data processing, was the most widely used tool, even though the use of content about Spark declined slightly in the past year (2.7%). Hadoop, which would have led this category a decade ago, is still present, though usage of content about Hadoop dropped 8.3%; Hadoop has become a legacy data platform.

Microsoft Power BI has established itself as the leading business analytics platform; content about Power BI was the most heavily used, and achieved 31% year-over-year growth. NoSQL databases was second, with 7.6% growth—but keep in mind that NoSQL was a movement that spawned a large number of databases, with many different properties and designs. Our data shows that NoSQL certainly isn’t dead, despite some claims to the contrary; it has clearly established itself. However, the four top relational databases, if added together into a single “relational database” topic, would be the most heavily used topic by a large margin. Oracle grew 18.2% year over year; Microsoft SQL Server grew 9.4%; MySQL grew 4.7%; and PostgreSQL grew 19%.

Use of content about R, the widely used statistics platform, grew 15% from 2021. Similarly, usage of content about Pandas, the most widely used Python library for working with R-like data frames, grew 20%. It’s interesting that Pandas and R had roughly the same usage. Python and R have been competing (in a friendly way) for the data science market for nearly 20 years. Based on our usage data, right now it looks like a tie. R has slightly more market share, but Pandas has better growth. Both are staples in academic research: R is more of a “statistician’s workbench” with a comprehensive set of statistical tools, while Python and Pandas are built for programmers. The difference has more to do with users’ tastes than substance though: R is a fully capable programming language, and Python has excellent statistical and array-processing libraries.
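
For readers who haven’t used either tool, here’s a minimal Pandas sketch of the kind of data-frame work both camps do (the data is invented); the equivalent in R would be a few lines of dplyr or base aggregate().

```python
# A minimal Pandas data-frame example; the data is invented for illustration.
# Requires pandas: pip install pandas
import pandas as pd

df = pd.DataFrame({
    "language": ["Python", "R", "Python", "R"],
    "units":    [120, 95, 140, 98],
})

# Group and summarize -- roughly what an R user would do with dplyr or aggregate()
summary = df.groupby("language")["units"].agg(["mean", "sum"])
print(summary)
```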

Usage for content about data lakes and about data warehouses was also just about equal, but data lakes usage had much higher year-over-year growth (50% as opposed to 3.9%). Data lakes are a strategy for storing an organization’s data in an unstructured repository; they came into prominence a few years ago as an alternative to data warehouses. It would be useful to compare data lakes with data lakehouses and data meshes; those terms aren’t in our taxonomy yet.

Year-over-year growth for data analysis and database topics

Artificial intelligence

At the beginning of 2022, who would have thought that we would be asking an AI-driven chat service to explain source code (even if it occasionally makes up facts)? Or that we’d have AI systems that enable nonartists to create works that are on a par with professional designers (even if they can’t match Degas and Renoir)? Yet here we are, and we don’t have ChatGPT or generative AI in our taxonomy. The one thing that we can say is that 2023 will almost certainly take AI even further. How much further nobody knows.

For the past two years, natural language processing (NLP) has been at the forefront of AI research, with the release of OpenAI’s popular tools GPT-3 and ChatGPT along with similar projects from Google, Meta, and others that haven’t been released. NLP has many industrial applications, ranging from automated chat servers to code generation (e.g., GitHub Copilot) to writing tools. It’s not surprising that NLP content was the most viewed and saw significant year-over-year growth (42%). All of this progress is based on deep learning, which was the second-most-heavily-used topic, with 23% growth. Interest in reinforcement learning seems to be off (14% decline), though that may turn around as researchers try to develop AI systems that are more accurate and that can’t be tricked into hate speech. Reinforcement learning with human feedback (RLHF) is one new technique that might lead to better-behaved language models.

There was also relatively little interest in content about chatbots (a 5.8% year-over-year decline). This reversal seems counterintuitive, but it makes sense in retrospect. The release of GPT-3 was a watershed event, an “everything you’ve done so far is out-of-date” moment. We’re excited about what will happen in 2023, though the results will depend a lot on how ChatGPT and its relatives are commercialized, as ChatGPT becomes a fee-based service, and both Microsoft and Google take steps towards chat-based search.

Year-over-year growth for artificial intelligence topics

Our learning platform gives some insight into the tools developers and researchers are using to work with AI. Based on units viewed, scikit-learn was the most popular library. It’s a relatively old tool, but it’s still actively maintained and obviously appreciated by the community: usage increased 4.7% over the year. While usage of content about PyTorch and TensorFlow is roughly equivalent (PyTorch is slightly ahead), it’s clear that PyTorch now has momentum. PyTorch increased 20%, while TensorFlow decreased 4.8%. Keras, a frontend library that uses TensorFlow, dropped 40%.
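
As a sense of why scikit-learn remains so heavily used, here’s a minimal workflow on one of its bundled toy datasets: split the data, fit a model, and score it.

```python
# A minimal scikit-learn workflow on a bundled toy dataset: split, fit, score.
# Requires scikit-learn: pip install scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```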

It’s disappointing to see so little usage of content on MLOps this year, along with a slight drop (4.0%) from 2021 to 2022. One of the biggest problems facing machine learning and artificial intelligence is deploying applications into production and then maintaining them. ML and AI applications need to be integrated into the deployment processes used for other IT applications. This is the business of MLOps, which presents a set of problems that are only beginning to be solved, including versioning for large sets of training data and automated testing to determine when a model has become stale and needs retraining. Perhaps it’s still too early, but these problems must be addressed if ML and AI are to succeed in the enterprise.

No-code and low-code tools for AI don’t appear in our taxonomy, unfortunately. Our report AI Adoption in the Enterprise 2022 argues that AutoML in its various incarnations is gradually gaining traction. This is a trend worth watching. While there’s very little training available on Google AutoML, Amazon AutoML, IBM AutoAI, Amazon SageMaker, and other low-code tools, they’ll almost certainly be an important force multiplier for experienced AI developers.

Infrastructure and Operations

Containers, Linux, and Kubernetes are the top topics within infrastructure and operations. Containers sits at the top of the list (with 2.5% year-over-year growth), with Docker, the most popular container, in fifth place (with a 4.4% decline). Linux, the second most used topic, grew 4.4% year over year. There’s no surprise here; as we’ve been saying for some time, Linux is “table stakes” for operations. Kubernetes is third, with 4.4% growth.

The containers topic is extremely broad: it includes a lot of content that’s primarily about Docker but also content about containers in general, alternatives to Docker (most notably Podman), container deployment, and many other subtopics. It’s clear that containers have changed the way we deploy software, particularly in the cloud. It’s also clear that containers are here to stay. Docker’s small drop is worth noting but isn’t a harbinger of change. Kubernetes deprecated direct Docker support at the end of 2020 in favor of the Container Runtime Interface (CRI). That change eliminated a direct tie between Kubernetes and Docker but doesn’t mean that containers built by Docker won’t run on Kubernetes, since Docker supports the CRI standard. A more convincing reason for the drop in usage is that Docker is no longer new and developers and other IT staff are comfortable with it. Docker itself may be a smaller piece of the operations ecosystem, and it may have plateaued, but it’s still very much there.

Content about Kubernetes was the third most widely viewed in this group, and usage grew 4.4% year over year. That relatively slow growth may mean that Kubernetes is close to a plateau. We increasingly see complaints that Kubernetes is overly complex, and we expect that, sooner or later, someone will build a container orchestration platform that’s simpler, or that developers will move toward “managed” solutions where a third party (probably a cloud provider) manages Kubernetes for them. One important part of the Kubernetes ecosystem, the service mesh, is declining; content about service mesh showed a 28% decline, while content about Istio (the service mesh implementation most closely tied to Kubernetes) declined 42%. Again, service meshes (and specifically Istio) are widely decried as too complex. It’s indicative (and perhaps alarming) that IT departments are resorting to “roll your own” for a complex piece of infrastructure that manages communications between services and microservices (including services for security). Alternatives are emerging. HashiCorp’s Consul and the open source Linkerd project are promising service meshes. UC Berkeley’s RISELab, which developed both Ray and Spark, recently announced SkyPilot, a tool with goals similar to Kubernetes but that’s specialized for data. Whatever the outcome, we don’t believe that Kubernetes is the last word in container orchestration.

Year-over-year growth for infrastructure and operations topics

If there’s any tool that defines “infrastructure as code,” it’s Terraform, which saw 74% year-over-year growth. Terraform’s model is straightforward: you write a declarative description of the infrastructure you want and how it should be configured, and Terraform gathers the resources and configures them for you. Terraform can be used with all of the major cloud providers as well as with private clouds (via OpenStack), and it’s proven to be an essential tool for organizations that are migrating to the cloud.

We took a separate look at the “continuous” methodologies (also known as CI/CD): continuous integration, continuous delivery, and continuous deployment. Overall, this group showed an 18% year-over-year increase in units viewed. This growth comes largely from a huge (40%) increase in the use of content about continuous delivery. Continuous integration showed a 22% decline, while continuous deployment had a 7.1% increase.

What does this tell us? The term continuous integration was first used by Grady Booch in 1991 and popularized by the Extreme Programming movement in the late 1990s. It refers to the practice of merging code changes into a single repository frequently, testing at each iteration to ensure that the project is always in a coherent state. Continuous integration is tightly coupled to continuous delivery; you almost always see CI/CD together. Continuous delivery is a practice that was developed at the second-generation web companies, including Flickr, Facebook, and Amazon, which radically changed IT practice by staging software updates for deployment several times daily. With continuous delivery, deployment pipelines are fully automated, requiring only a final approval to put a release into production. Continuous deployment is the newest (and smallest) of the three, emphasizing completely automated deployment to production: updates go directly from the developer into production, without any intervention. These methodologies are closely tied to each other. CI/CD/CD as a whole (and yes, nobody ever uses CD twice) is up 18% for the year. That’s a significant gain, and even though these topics have been around for a while, it’s evidence that growth is still possible.

Year-over-year growth for continuous methodologies

IT and operations certifications

The leading IT certification is clearly CompTIA, which showed a 41% year-over-year increase. The CompTIA family (Network+, A+, Linux+, and Security+) dominates the certification market. (The CompTIA Network+ showed a very slight decline (0.32%), which is probably just random fluctuation.) The Linux+ certification experienced tremendous year-over-year growth (47%). That growth is easy to understand. Linux has long been the dominant server operating system. In the cloud, Linux instances are much more widely used than the alternatives, though Windows is offered on Azure (of course) along with macOS. In the past few years, Linux’s market penetration has gone even deeper. We’ve already seen the role that containers are playing, and containers almost always run Linux as their operating system. In 1995, Linux might have been a quirky choice for people devoted to free and open source software. In 2023, Linux is mandatory for anyone in IT or software development. It’s hard to imagine getting a job or advancing in a career without demonstrating competence.

Year-over-year growth for IT certifications

It’s surprising to see the Cisco Certified Network Associate (CCNA) certification drop 18% and the Cisco Certified Network Professional (CCNP) certification drop 12%, as the Cisco certifications have been among the most meaningful and prestigious in IT for many years. (The Cisco Certified Internet Expert (CCIE) certification, while relatively small compared to the others, did show 70% growth.) There are several causes for this shift. First, as companies move workloads to the cloud or to colocation providers, maintaining a fleet of routers and switches becomes less important. Network certifications are less valuable than they used to be. But why then the increase in CCIE? While CCNA is an entry-level certification and CCNP is middle tier, CCIE is Cisco’s top-tier certification. The exam is very detailed and rigorous and includes hands-on work with network hardware. Hence the relatively small number of people who attempt it and study for it. However, even as companies offload much of their day-to-day network administration to the cloud, they still need people who understand networks in depth. They still have to deal with office networks, and with extending office networks to remote employees. While they don’t need staff to wrangle racks of data center routers, they do need network experts who understand what their cloud and colocation providers are doing. The need for network staff might be shrinking, but it isn’t going away. In a shrinking market, attaining the highest level of certification will have the most long-term value.

Cloud

We haven’t seen any significant shifts among the major cloud providers. Amazon Web Services (AWS) still leads, followed by Microsoft Azure, then Google Cloud. Together, this group represents 97% of cloud platform content usage. The bigger story is that we saw decreases in year-over-year usage for all three. The decreases are small and might not be significant: AWS is down 3.8%, Azure 7.5%, and Google Cloud 2.1%. We don’t know what’s responsible for this decline. We looked industry by industry; some were up, some were down, but there were no smoking guns. AWS showed a sharp drop in computers and electronics (about 27%), which is a relatively large category, and a smaller drop in finance and banking (15%), balanced by substantial growth in higher education (35%). There was a lot of volatility among industries that aren’t big cloud users—for example, AWS was up about 250% in agriculture—but usage among industries that aren’t major cloud users isn’t high enough to account for that change. (Agriculture accounts for well under 1% of total AWS content usage.) The bottom line is, as they say in the nightly financial news, “Declines outnumbered gains”: 16 out of 28 business categories showed a decline. Azure was similar, with 20 industries showing declines, although Azure saw a slight increase for finance and banking. The same was true for Google Cloud, though it benefited from an influx of individual (B2C) users (up 9%).

Over the past year, there’s been some discussion of “cloud repatriation”: bringing applications that have moved to the cloud back in-house. Cost is the greatest motivation for repatriation; companies moving to the cloud have often underestimated the costs, partly because they haven’t succeeded in using the cloud effectively. While repatriation is no doubt responsible for some of the decline, it’s at most a small part of the story. Cloud providers make it difficult to leave, which ironically might drive more content usage as IT staff try to figure out how to get their data back. A bigger issue might be companies that are putting cloud plans on hold because they hear of repatriation or that are postponing large IT projects because they fear a recession.

Of the smaller cloud providers, IBM showed a huge year-over-year increase (135%). Almost all of the change came from a significant increase in consulting and professional services (200% growth year over year). Oracle showed a 36% decrease, almost entirely due to a drop in content usage from the software industry (down 49%). However, the fact that Oracle is showing up at all demonstrates that it’s grown significantly over the past few years. Oracle’s high-profile deal to host all of TikTok’s data on US residents could easily solidify the company’s position as a significant cloud provider. (Or it could backfire if TikTok is banned.)

We didn’t include two smaller providers in the graph: Heroku (now owned by Salesforce) and Cloud Foundry (originally VMware, handed off to the company’s Pivotal subsidiary and then to the Cloud Foundry Foundation; now, multiple providers run Cloud Foundry software). Both saw fairly sharp year-over-year declines: 10% for Heroku, 26% for Cloud Foundry. As far as units viewed, Cloud Foundry is almost on a par with IBM. But Heroku isn’t even on the charts; it appears to be a service whose time has passed. We also omitted Tencent and Alibaba Cloud; they’re not in our subject taxonomy, and relatively little content is available.

Year-over-year growth for cloud providers

Cloud certifications followed a similar pattern. AWS certifications led, followed by Azure, followed by Google Cloud. We saw the same puzzling year-over-year decline here: 13% for AWS certification, 10% for Azure, and 6% for Google Cloud. And again, the drop was smallest for Google Cloud.

While usage of content about specific cloud providers dropped from 2021 to 2022, usage for content about other cloud computing topics grew. Cloud migration, a fairly general category for content about building cloud applications, grew 45%. Cloud service models also grew 41%. These increases may help us to understand why usage of content about the “big three” clouds decreased. As cloud usage moves beyond early adopters and becomes mainstream, the conversation naturally focuses less on individual cloud providers and more on high-level issues. After a few pilot projects and proofs of concept, learning about AWS, Azure, and Google Cloud is less important than planning a full-scale migration. How do you deploy to the cloud? How do you build services in the cloud? How do you integrate applications you have moved to the cloud with legacy applications that are staying in-house? At this point, companies know the basics and have to go the rest of the way.

Year-over-year growth for cloud certifications

With this in mind, it’s not at all surprising that our customers are very interested in hybrid clouds, for which content usage grew 28% year over year. Our users realize that every company will inevitably evolve toward a hybrid cloud. Either there’ll be a wildcat skunkworks project on some cloud that hasn’t been “blessed” by IT, or there’ll be an acquisition of a company that’s using a different provider, or they’ll need to integrate with a business partner using a different provider, or they don’t have the budget to move their legacy applications and data, or… The reasons are endless, but the conclusion is the same: hybrid is inevitable, and in many companies it’s already the reality.

The increase in use of content about private clouds (37%) is part of the same story. Many companies have applications and data that have to remain in-house (whether that’s physically on-premises or hosted at a data center offering colocation). It still makes sense for those applications to use APIs and deployment toolchains equivalent to those used in the cloud. “The cloud” isn’t the exception; it has become the rule.

Year-over-year growth for cloud architecture topics

Professional Skills

In the past year, O’Reilly users have been very interested in upgrading their professional and management skills. Every category in this relatively small group is up, and most of them are up significantly. Project management saw 47% year-over-year growth; professional development grew 37%. Use of content about the Project Management Professional (PMP) certification grew 36%, and interest in product management grew similarly (39%). Interest in communication skills increased 26% and interest in leadership grew by 28%. The two remaining categories that we tracked, IT management and critical thinking, weren’t as large and grew by somewhat smaller amounts (21% and 20%, respectively).

Several factors drive these increases. For a long time, software development and IT operations were seen as solo pursuits dominated by “neckbeards” and antisocial nerds, with some “rock stars” and “10x programmers” thrown in. This stereotype is wrong and harmful—not just to individuals but to teams and companies. In the past few years, we’ve heard a lot less about 10x developers and more about the importance of good communication, leadership, and mentoring. Our customers have realized that the key to productivity is good teamwork, not some mythical 10x developer. And there are certainly many employees who see positions in management, as a “tech lead,” as a product manager, or as a software architect, as the obvious next step in their careers. All of these positions stress the so-called “soft skills.” Finally, talk about a recession has been on the rise for the past year, and we continue to see large layoffs from big companies. While software developers and IT operations staff are still in high demand, and there’s no shortage of jobs, many are certainly trying to acquire new skills to improve their job security or to give themselves better options in the event that they’re laid off.

Year-over-year growth for professional skills topics

Web Development

The React and Angular frameworks continue to dominate web development. The balance is continuing to shift toward React (10% year-over-year growth) and away from Angular (a 17% decline). Many frontend developers feel that React offers better performance and is more flexible and easier to learn. Many new frameworks (and frameworks built on frameworks) are in play (Vue, Next.js, Svelte, and so on), but none are close to becoming competitors. Vue showed a significant year-over-year decline (17%), and the others didn’t make it onto the chart.

PHP is still a contender, of course, with almost no change (a decline of 1%). PHP advocates claim that 80% of the web is built on it: Facebook is built on PHP, for instance, along with millions of WordPress sites. Still, it’s hard to look at PHP and say that it’s not a legacy technology. Ruby on Rails grew 6.6%. Content usage for Ruby on Rails is similar to PHP, but Rails usage has been declining for some years. Is it poised for a comeback?

The use of content about JavaScript showed a slight decline (4.6%), but we don’t believe this is significant. In our taxonomy, content can only be tagged with one topic, and everything that covers React or Angular is implicitly about JavaScript. In addition, it’s interesting to see usage of TypeScript increasing (12%); TypeScript is a strongly typed variant of JavaScript that compiles (the right word is actually “transpiles”) to JavaScript, and it’s proving to be a better tool for large complex applications.

One important trend shows up at the bottom of the graph. WebAssembly is still a small topic, but it saw 74% growth from 2021 to 2022. And Blazor, Microsoft’s implementation of C# and .NET for WebAssembly, is up 59%. That’s a powerful signal. These topics are still small, but if they can maintain that kind of growth, they won’t be small for long. WebAssembly is poised to become an important part of web development.

Year-over-year growth for web development topics

Design

The heaviest usage in the design category went to user experience and related topics. User experience grew 18%, user research grew 5%, interface design grew 92%, and interaction design grew 36%. For years, we expected software to be difficult and uncomfortable to use. That’s changed. Apple made user interface design a priority in the early 2000s, forcing other companies to follow if they wanted to remain competitive. The design thinking movement may no longer be in the news, but it’s had an effect: software teams think about design from the beginning. Even software developers who don’t have the word “design” in their job title need to think about and understand design well enough to build decent user interfaces and pleasant user experiences.

Usability, the only user-centric topic to show a decline, was only down 2.6%. It’s also worth noting that use of content about accessibility has grown 96%. Accessibility is still a relatively small category, but that kind of growth shows that accessibility is an aspect of user experience that can no longer be ignored. (The use of alt text for images is only one example: it’s become common on Twitter and is almost universal on Mastodon.)

Information architecture was down significantly (a 17% drop). Does that mean that interest has shifted from designing information flow to designing experiences, and is that a good thing?

Use of content about virtual and augmented reality is relatively small but grew 83%. The past year saw a lot of excitement around VR, Web3, the metaverse, and related topics. Toward the end of the year, that seemed to cool off. However, an 83% increase is noteworthy. Will that continue? It may depend on a new generation of VR products, both hardware and software. If Apple can make VR glasses that are comfortable and that people can wear without looking like aliens, 83% growth might seem small.

Year-over-year growth for design topics

The Future

We started out by saying that this industry doesn’t change as much from year to year as most people think. That’s true, but that doesn’t mean there’s no change. There are signals of important new trends—some completely new, some continuations of trends that started years ago. So what small changes are harbingers of bigger changes in the years to come?

The Go and Rust programming languages have shown significant growth both in the past year and for the last few years. There’s no sign that this growth will stop. It will take a few more years, but before long they’ll be on a par with Java and Python.

It’s no surprise that we saw huge gains for natural language processing and deep learning. GPT-3 and its successor ChatGPT are the current stars of the show. While there’s been a lot of talk about another “AI winter,” that isn’t going to happen. The success of ChatGPT (not to mention Stable Diffusion, Midjourney, and many projects going on at Meta and Google) will keep winter away, at least for another year. What will people build on top of ChatGPT and its successors? What new programming tools will we see? How will the meaning of “computer programming” change if AI assistants take over the task of writing code? What new research tools will become available, and will our new AI assistants persist in “making stuff up”? For several years now, AI has been the most exciting area in software. There’s lots to imagine, lots to build, and infinite space for innovation. As long as the AI community provides exciting new results, no one will be complaining and no one need fear the cold.

We’ve also seen a strong increase in interest in leadership, management, communication, and other “soft skills.” This interest isn’t new, but it’s certainly growing. Whether the current generation of programmers is getting tired of coding or whether they perceive soft skills as giving them better job security during a recession isn’t for us to say. It’s certainly true that better communication skills are an asset for any project.

Our audience is slightly less interested in content about the “big three” cloud providers (AWS, Azure, and Google Cloud), but they’re still tremendously interested in migrating to the cloud and taking advantage of cloud offerings. Despite many reports claiming that cloud adoption is almost universal (and I confess to writing some of them), I’ve long believed that we’re only in the early stages of cloud adoption. We’re now past the initial stage, during which a company might claim that it was “in the cloud” on the basis of a few trial projects. Cloud migration is serious business. We expect to see a new wave of cloud adoption. Companies in that wave won’t make naive assumptions about the costs of using the cloud, and they’ll have the tools to optimize their cloud usage. This new wave may not break until fears of a recession end, but it will come.

While the top-level security category grew 20%, we’d hoped to see more. For a long time, security was an afterthought, not a priority. That’s changing, but slowly. However, we saw huge gains for zero trust and governance. It’s unfortunate that these gains are driven by necessity (and the news cycle), but perhaps the message is getting through after all.

What about augmented and virtual reality (AR/VR), the metaverse, and other trendy topics that dominated much of the trade press? Interest in VR/AR content grew significantly, though what that means for 2023 is anyone’s guess. Long-term, the category probably depends on whether or not anyone can make AR glasses a fashion accessory that everyone needs to have. A bigger question is whether anyone can build a next-generation web that’s decentralized, and that fosters immediacy and collaboration without requiring exotic goggles. That’s clearly something that can be done: look no further than Figma (for collaboration), Mastodon (for decentralization), or Petals (for a cloud-less cloud).

Will these be the big stories for 2023? February is only just beginning; we have 11 months to find out.

Footnotes

1. Box said “models”; a metric is a kind of model, isn’t it?

Categories: Technology

What’s the Killer App for Web3?

O'Reilly Radar - Tue, 2023/02/21 - 04:36

(Dear readers: this is a scaled-down excerpt from a larger project I’m working on. I’ll let you know when that effort is ready for broad distribution.)

Every technology is good for something. But there are use cases, and then there are Use Cases. The extremely compelling applications of the technology. Those that lead to widespread adoption and increased legitimacy, almost becoming synonymous with the technology itself.

Do people still use the term “killer app?” It’s not my favorite—I (unfairly?) associate it with Dot-Com business-bro culture—but I have to admit that it captures the spirit of that dominant use case. So I’ll hold my nose and use it here.

If you reflect on the emerging-tech landscape, you see the following killer apps:

  • Early-day internet: E-commerce. Hands-down.
  • Cloud: The legion of SaaS tool startups, on its first go-round; then AI for its victory lap.
  • Data science/ML/AI: Advertising. Advertising. Advertising.

And then there’s the new kid, web3. I’ve noticed that people are more inclined to ask me “what’s it good for?” rather than “what is it?” Which is fair. Every technology has to pull its weight, and sometimes What It Enables People To Do counts more than What It Actually Is Under The Hood. (Hence, my usual crack that machine learning is just linear algebra with better marketing. But I’ll save that for a different article.)

While I can walk those people through a few use cases, I still haven’t figured out what web3’s killer app is. That’s not for a lack of trying. I’ve been exploring the topic for a couple of years now, which is what led me to launch the Block & Mortar newsletter so I could share more of my research in public.

Why It’s Tough

Sorting out web3’s killer app(s) has proven difficult for a number of reasons, including:

  • Mixed bag/layer cake: The term “web3” is as slippery as “AI,” which has already changed names a few times. Both are umbrella terms for several different concepts. Today we have the three-layer cake that is blockchain-cryptocurrency-NFTs, plus this “metaverse” term that is itself very fuzzy. We may add more to that list as the field grows.

    So when we talk about a use case for “web3,” we first need to decide which of those concepts we mean. (It’s sort of like how  “internet” sometimes means “the underlying network connectivity layer,” and other times, “the web.”)
  • Rearview mirror: We usually notice killer apps after the fact. The technology is built to do X (and it may do a middling job of that) but someone else realizes that it would revolutionize Y.

    Bitcoin—the most recognized name in this space—has been around since 2009, but the wider web3 ecosystem is maybe half that age. As it’s still developing, we’re still in that phase of throwing it at everything to see what sticks. That’s probably what will uncover the killer app, but we won’t know until something really takes off.
  • Deja vu, all over again: A common reaction to web3 use cases is, “we already have that.” Or even, “crypto is a terrible version of that.” Both of which are usually true. Blockchain is an absolutely terrible replacement for a relational database. But so was MongoDB. And Hadoop. And every other non-relational data store that’s come along. The point is to notice where a relational database doesn’t work so well, when it’s creaking at the edges, and then see how another tool would do in its place.

    (Do you have one entity in charge of managing all the data? You’re pretty safe to default to a relational database. Do you have several peers, all of whom need to see and validate the data, and none of whom want to trust one member with all the keys? Blockchain is your friend.)

    We had search engines before Google, social networks before Twitter, and physical stores before e-commerce. “Why would I need to boot up my computer to go shopping? I can just hop in my car and browse in-person.” How long did it take merchants to see the value in a web-based storefront, backed by a warehouse-and-shipping infrastructure? And why’d it take consumers so long to realize that it’s nicer to click around a website at 3AM from the comfort of their couch?

    The new way of doing things is often convenience masked as discomfort with the unfamiliar. It takes time for us to learn that it’s not so uncomfortable after all.
  • Guilt by association: Most people use “web3” and “crypto” interchangeably, which is not exactly fair. They also associate “crypto” with “crime,” which is much harder for me to refute. Most  mainstream cryptocurrency news stories involve phishing scams, a token’s meltdown, or a fund collapsing. Mix that with the environmental impact of crypto mining and I can see why people would assume it’s good for nothing.

    (One could argue that web3 has proven very good for criminals, and that the killer app is separating people from their money. I won’t dispute that. But for now, let’s focus on legitimate use cases that will have mass appeal.)
What It Won’t Be

My gut feeling is that targeted, invasive advertising will not be web3’s killer app.

It will certainly get some traction as companies try to make it happen. Adtech drove a lot of web2 and I already see attempts to ride that wave into web3. To advertisers, a metaverse property is a surface on which to show ads, in a (semi-)walled garden, where they can collect contact details.

And, frankly, that’s the problem. Web2’s “collect personal info to try to identify specific individuals who may be interested and then pummel them with messaging” is incompatible with web3’s ethos of “honor pseudonymity and give people the opportunity to tell you when they’re interested.”

Web3 shifts the power of outreach to the buyer. That sounds like a better system to me, because of the strength of self-selection. But to get there, marketers will have to unlearn old habits and embrace this world in which they derive greater benefit yet have less control. Understandably, they will have trouble letting go.

So if not advertising, then what?

Based on my research, I suspect web3’s killer apps will come out of two unlikely fields: fashion and loyalty programs.

Fashion-forward

The fashion industry was an early adopter of web3: accepting cryptocurrency as a form of payment, token-gating events (including special NFTs for VIP passes), even using virtual models. Well-known fashion houses have created wearables and perfumes for metaverse avatars, some of which are digital twins of real-world items. They’ve even flipped that around, road-testing digital products before releasing them in physical form. Much of this work has led to an understanding of how NFTs can be used to build community.

That’s admittedly more of a sampler platter than a single use case. There’s no clear leader in there. Yet. But if the best way to find something is by looking, then the fashion industry is poised to find that killer app precisely because they are running so many experiments. They’re testing web3 tools in public, in real-world situations, and they are learning at each step.

Even if you know zilch about fashion, you can still keep an eye on this field’s web3 work and adapt it to your own. I highly recommend Vogue Business as a start. That’s right, the eponymous fashion magazine has a dedicated publication for behind-the-scenes industry issues such as technology, sustainability, and economic trends. Stumbling onto that website jump-started my understanding of web3. I saw real business use cases outside of DeFi, and got my first taste of what I would later refer to as NFTs With Benefits: using the tokens as access passes and for VIP status.

Rewarding Loyalty

Loyalty programs are an interesting bunch. They’re the other side of the marketing department, with a very different approach compared to their siblings in the advertising arena.

The idea behind a loyalty program is that someone is already a customer, and they have expressly signed up to join your fan club. (That sounds a lot like the web3 ideal of letting people self-select, does it not?) Membership in a loyalty program gives rise to a virtuous cycle: people like what you do, so they patronize your business more; you then find new ways to keep them happy, so they continue to like you.

The value in this positive feedback loop becomes clear when you consider that the cost of acquiring a new customer is typically much higher than keeping an existing customer engaged. And that repeat business adds up.  Major airlines’ frequent-flier programs rake in billions of dollars each year. Businesses have a strong incentive to keep those loyalty programs humming.

How does web3 fit in here? Loyalty programs are often built on a gamified structure, such as “fly X miles within Y months to get Z status.” Companies create web3 games that let people show how engaged they are with the brand. Chipotle customers rolled virtual burritos inside a Roblox eatery as a way for the chain to introduce its Garlic Guajillo Steak dish. Universal Studios gave out NFTs for participation in its in-person scavenger hunt.  And Starbucks recently unveiled blockchain-based updates to its Rewards program, challenging people to earn “Journey Stamps”—NFTs in everything but name—for trying different drinks.

This is when you’d ask why companies can’t build these games on existing technologies. That would be a fair question, since nothing I’ve described thus far really needs a blockchain. But it does offer two perks:

First, a loyalty program operates on a sequence of transactions such as “spend points,” “acquire points,” “use service.” Blockchain technology is purpose-built to record transactions to a tamper-resistant ledger. And a blockchain’s decentralized nature makes it easier for members in a shared venture—think airlines with codeshare agreements, or airlines partnering with hotels—to get instant updates on member activity. They can even build all of this behind the scenes, shielding customers from the underlying crypto wallet management.
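
To make the tamper-resistance point concrete, here is a minimal, hypothetical sketch of a hash-chained ledger of loyalty transactions in Python. It is not a real blockchain (no peers, no consensus), just an illustration of why an append-only, hash-linked record is hard to alter quietly; the class and field names are invented for this example.

```python
import hashlib
import json
import time

def _hash_block(block: dict) -> str:
    # Hash the canonical JSON form of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class LoyaltyLedger:
    """Toy append-only ledger: each block commits to the previous block's hash."""

    def __init__(self):
        genesis = {"index": 0, "prev_hash": "0" * 64,
                   "timestamp": time.time(), "tx": None}
        self.chain = [{**genesis, "hash": _hash_block(genesis)}]

    def record(self, member_id: str, action: str, points: int) -> dict:
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "prev_hash": prev["hash"],
                 "timestamp": time.time(),
                 "tx": {"member": member_id, "action": action, "points": points}}
        block["hash"] = _hash_block(block)
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Any edit to an earlier block breaks every hash link after it.
        for prev, curr in zip(self.chain, self.chain[1:]):
            unhashed = {k: v for k, v in curr.items() if k != "hash"}
            if curr["prev_hash"] != prev["hash"] or curr["hash"] != _hash_block(unhashed):
                return False
        return True

ledger = LoyaltyLedger()
ledger.record("member-42", "acquire_points", 150)
ledger.record("member-42", "spend_points", -50)
print(ledger.verify())  # True, until someone tampers with an earlier block
```

Because each block commits to the hash of the one before it, rewriting an old transaction invalidates every later block, which is exactly the property partners in a shared loyalty venture care about.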

Second, for those loyalty programs that expose the blockchain functionality to members, those crypto wallets serve as digital identities. True fans won’t just achieve status in a program; they’ll be able to broadcast that status by showing off the associated NFTs in a public-facing wallet. And that is a strong form of organic marketing.

Time Will Tell

Fashion and loyalty programs are poised to uncover web3’s killer apps, whatever those may be. At least, that’s how it’s adding up right now. I look forward to reviewing this article over the next few years to see whether this turns out to be true.

Whatever those killer apps turn out to be, I think back to something Mike Loukides has told me: “I think the winner will be whoever can build a blockchain that you don’t even know you’re using.” He’s right. Consumers rarely care what technology runs their favorite apps; they just want them to work. Additionally, web3 still has a reputation problem. If companies are to reap blockchain’s technical benefits, they’d do well to keep them behind the scenes. Or at least follow the Starbucks example and give the tools new, brand-specific names.

We should also consider what happens when those killer apps finally surface. That will be the end of one race and the start of another. The outsized interest in building on and monetizing those killer apps will drive improvements in the underlying technology. And those improvements can be applied elsewhere.

Consider how much adtech has poured back into the AI ecosystem. Google and Facebook drove advances in neural networks, contributing code (TensorFlow, Torch, Prophet), hardware (custom TPU chips), and tooling (autoML and model hosting infrastructure through Vertex AI). That’s not to speak of the educational material that’s sprung up around these tools and services. Combined, these have lowered the barrier to entry for individuals to learn about neural networks and for businesses to put those powerful models to use.

So I look forward to the continued quest for the web3 killer app(s), in part for what that will do for the space as a whole.

Categories: Technology

Sydney and the Bard

O'Reilly Radar - Thu, 2023/02/16 - 11:59

It’s been well publicized that Google’s Bard made some factual errors when it was demoed, and Google paid for these mistakes with a significant drop in their stock price. What didn’t receive as much news coverage (though in the last few days, it’s been well discussed online) are the many mistakes that Microsoft’s new search engine, Sydney, made. The fact that we know its name is Sydney is one of those mistakes, since it’s never supposed to reveal its name. Sydney-enhanced Bing has threatened and insulted its users, in addition to being just plain wrong (insisting that it was 2022, and insisting that the first Avatar movie hadn’t been released yet). There are excellent summaries of these failures in Ben Thompson’s newsletter Stratechery and Simon Willison’s blog. It might be easy to dismiss these stories as anecdotal at best, fraudulent at worst, but I’ve seen many reports from beta testers who managed to duplicate them.

Of course, Bard and Sydney are beta releases that aren’t open to the wider public yet. So it’s not surprising that things are wrong. That’s what beta tests are for. The important question is where we go from here. What are the next steps?

Large language models like ChatGPT and Google’s LaMDA aren’t designed to give correct results. They’re designed to simulate human language—and they’re incredibly good at that. Because they’re so good at simulating human language, we’re predisposed to find them convincing, particularly if they word the answer so that it sounds authoritative. But does 2+2 really equal 5? Remember that these tools aren’t doing math, they’re just doing statistics on a huge body of text. So if people have written 2+2=5 (and they have in many places, probably never intending that to be taken as correct arithmetic), there’s a non-zero probability that the model will tell you that 2+2=5.
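
To illustrate the “statistics, not math” point, here is a toy sketch: a language model assigns a probability to every candidate next token and samples from that distribution, so if “5” carries any probability mass after “2+2=”, it will occasionally be produced. The distribution below is entirely made up for the example.

```python
import random

# Hypothetical next-token distribution after the prompt "2 + 2 =",
# with invented probabilities, purely for illustration.
next_token_probs = {"4": 0.93, "5": 0.04, "four": 0.02, "22": 0.01}

def sample_next_token(probs: dict, temperature: float = 1.0) -> str:
    # Higher temperature flattens the distribution, making unlikely tokens likelier.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

samples = [sample_next_token(next_token_probs, temperature=1.2) for _ in range(1000)]
print(samples.count("5"))  # a non-zero count: the model sometimes gets the "math" wrong
```

Raise the temperature and the unlikely completions show up more often; no amount of sampling turns the model into a calculator.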

The ability of these models to “make up” stuff is interesting, and as I’ve suggested elsewhere, might give us a glimpse of artificial imagination. (Ben Thompson ends his article by saying that Sydney doesn’t feel like a search engine; it feels like something completely different, something that we might not be ready for—perhaps what David Bowie meant in 1999 when he called the Internet an “alien lifeform”). But if we want a search engine, we will need something that’s better behaved. Again, it’s important to realize that ChatGPT and LaMDA aren’t trained to be correct. You can train models that are optimized to be correct—but that’s a different kind of model. Models like that are being built now; they tend to be smaller and trained on specialized data sets (O’Reilly Media has a search engine that has been trained on the 70,000+ items in our learning platform). And you could integrate those models with GPT-style language models, so that one group of models supplies the facts and the other supplies the language.
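
One plausible shape for that integration is to let a correctness-optimized retrieval model supply the facts and a GPT-style model supply only the wording. A minimal sketch, with both components passed in as placeholders rather than tied to any real vendor API:

```python
from typing import Callable, List

def build_answerer(retrieve_facts: Callable[[str, int], List[str]],
                   generate_text: Callable[[str], str]) -> Callable[[str], str]:
    """Wire a fact-retrieval model and a language model together.

    Both callables are placeholders: retrieve_facts stands in for a
    domain-specific, correctness-optimized search model; generate_text
    stands in for a GPT-style completion endpoint.
    """
    def answer(question: str) -> str:
        facts = retrieve_facts(question, 5)
        prompt = ("Answer the question using ONLY the facts below. "
                  "If they are insufficient, say so.\n\n"
                  "Facts:\n" + "\n".join(f"- {f}" for f in facts) +
                  f"\n\nQuestion: {question}\nAnswer:")
        return generate_text(prompt)
    return answer
```

The key design choice is that the language model is never asked to recall facts, only to phrase the ones handed to it.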

That’s the most likely way forward. Given the number of startups that are building specialized fact-based models, it’s inconceivable that Google and Microsoft aren’t doing similar research. If they aren’t, they’ve seriously misunderstood the problem. It’s okay for a search engine to give you irrelevant or incorrect results. We see that with Amazon recommendations all the time, and it’s probably a good thing, at least for our bank accounts. It’s not okay for a search engine to try to convince you that incorrect results are correct, or to abuse you for challenging it. Will it take weeks, months, or years to iron out the problems with Microsoft’s and Google’s beta tests? The answer is: we don’t know. As Simon Willison suggests, the field is moving very fast, and can make surprising leaps forward. But the path ahead isn’t short.

Categories: Technology

AI Hallucinations: A Provocation

O'Reilly Radar - Tue, 2023/02/14 - 04:23

Everybody knows about ChatGPT. And everybody knows about ChatGPT’s propensity to “make up” facts and details when it needs to, a phenomenon that’s come to be called “hallucination.” And everyone has seen arguments that this will bring about the end of civilization as we know it.

I’m not going to argue with any of that. None of us want to drown in masses of “fake news,” generated at scale by AI bots that are funded by organizations whose intentions are most likely malign. ChatGPT could easily outproduce all the world’s legitimate (and, for that matter, illegitimate) news agencies. But that’s not the issue I want to address.

I want to look at “hallucination” from another direction. I’ve written several times about AI and art of various kinds. My criticism of AI-generated art is that it’s all, well, derivative. It can create pictures that look like they were painted by Da Vinci–but we don’t really need more paintings by Da Vinci. It can create music that sounds like Bach–but we don’t need more Bach. What it really can’t do is make something completely new and different, and that’s ultimately what drives the arts forward. We don’t need more Beethoven. We need someone (or something) who can do what Beethoven did: horrify the music industry by breaking music as we know it and putting it back together differently. I haven’t seen that happening with AI. I haven’t yet seen anything that would make me think it might be possible.  Not with Stable Diffusion, DALL-E, Midjourney, or any of their kindred.

Until ChatGPT. I haven’t seen this kind of creativity yet, but I can get a sense of the possibilities. I recently heard about someone who was having trouble understanding some software someone else had written. They asked ChatGPT for an explanation. ChatGPT gave an excellent explanation (it is very good at explaining source code), but there was something funny: it referred to a language feature that the user had never heard of. It turns out that the feature didn’t exist. It made sense; it was something that could certainly have been implemented. Maybe it was discussed as a possibility in some mailing list that found its way into ChatGPT’s training data, but was never implemented? No, not that, either. The feature was “hallucinated,” or imagined. This is creativity–maybe not human creativity, but creativity nonetheless.

What if we viewed an AI’s “hallucinations” as the precursor of creativity? After all, when ChatGPT hallucinates, it is making up something that doesn’t exist. (And if you ask it, it is very likely to admit, politely, that it doesn’t exist.) But things that don’t exist are the substance of art. Did David Copperfield exist before Charles Dickens imagined him? It’s almost silly to ask that question (though there are certain religious traditions that view fiction as “lies”). Bach’s works didn’t exist before he imagined them, nor did Thelonious Monk’s, nor did Da Vinci’s.

We have to be careful here. These human creators didn’t do great work by vomiting out a lot of randomly generated “new” stuff. They were all closely tied to the histories of their various arts. They took one or two knobs on the control panel and turned them all the way up, but they didn’t disrupt everything. If they had, the result would have been incomprehensible, to themselves as well as their contemporaries, and would have led to a dead end. That sense of history, that sense of extending art in one or two dimensions while leaving others untouched, is something that humans have, and that generative AI models don’t. But could they?

What would happen if we trained an AI like ChatGPT and, rather than viewing hallucination as error and trying to stamp it out, we optimized for better hallucinations? You can ask ChatGPT to write stories, and it will comply. The stories aren’t all that good, but they will be stories, and nobody claims that ChatGPT has been optimized as a story generator. What would it be like if a model were trained to have imagination plus a sense of literary history and style? And if it optimized the stories to be great stories, rather than lame ones? With ChatGPT, the bottom line is that it’s a language model. It’s just a language model: it generates texts in English. (I don’t really know about other languages, but I tried to get it to do Italian once, and it wouldn’t.) It’s not a truth teller; it’s not an essayist; it’s not a fiction writer; it’s not a programmer. Everything else that we perceive in ChatGPT is something we as humans bring to it. I’m not saying that to caution users about ChatGPT’s limitations; I’m saying it because, even with those limitations, there are hints of so much more that might be possible. It hasn’t been trained to be creative. It has been trained to mimic human language, most of which is rather dull to begin with.

Is it possible to build a language model that, without human interference, can experiment with “that isn’t great, but it’s imaginative. Let’s explore it more”? Is it possible to build a model that understands literary style, knows when it’s pushing the boundaries of that style, and can break through into something new? And can the same thing be done for music or art?

A few months ago, I would have said “no.” A human might be able to prompt an AI to create something new, but an AI would never be able to do this on its own. Now, I’m not so sure. Making stuff up might be a bug in an application that writes news stories, but it is central to human creativity. Are ChatGPT’s hallucinations a down payment on “artificial creativity”? Maybe so.

Categories: Technology

Meeting topics for Feb 9th

PLUG - Wed, 2023/02/08 - 14:12
We have 2 presentations lined up for this month's meeting:
Fatima Taj will present: How to Navigate the Early Days at Your First Tech Job and Bob Murphy will present: A brief introduction to Mastodon and the Fediverse

This is a remote meeting. Please join by going to https://lufthans.bigbluemeeting.com/plu-yuk-7xx at 7pm on Thursday Feb 9th

Fatima Taj: How to Navigate the Early Days at Your First Tech Job

Description:
Reflect on existing processes/documentation: Every team has its own processes/style of documentation and this offers you a great opportunity to make a meaningful contribution! For example, if you feel that your onboarding was particularly difficult because of the lack of documentation/processes, this could be something that you could work on, especially since you have a fresh perspective as a new hire. Alternatively, you could also look into potential ways of improving existing processes/documentation.

The Art of Asking Questions: As overwhelming as it can be to ask a question, especially when starting a job as a new graduate, there is an art to asking questions which can allow you to feel both empowered and unintimidated. This includes doing your research before, using the rubber duck technique before asking the question, and being respectful of people’s time.

Deep Dive into your Projects: Even on teams with excellent documentation, getting started on your technical projects can be a very daunting task, since there are multiple components involved. Over time, you will understand how these things are done on your team, but it can save you a lot of time and potential headaches to figure this out early on by keeping a handy list to refer to, including basic project setup, testing requirements, review and deployment processes, task tracking, and documentation.

Gaining Technical Context: Gaining a high level overview of your project at the very beginning is invaluable because it not only allows you to gain a deeper understanding of the work that you’ll be doing, but also allows you to feel more connected to your team and company in general.

Establish Expectations: In order to set yourself up for long term success, it’s imperative that you establish a clear benchmark/criteria from the get-go. If there isn’t a clear one available, work with your manager to establish one. This allows you to be strategic with your career growth in general and also removes any vagueness surrounding what is expected of you and how your performance will be evaluated moving forward. Additionally, it offers you a greater sense of clarity in terms of what you’re doing well and what could be improved upon.

Working in Distributed Teams and Maintaining a Work-Life Balance: When starting out your career, it’s very easy to get caught up in your work, often leading to new employees working overtime. Initially, one might not be cognizant of this, but over time, this can lead to extreme burnout and a myriad of other issues. It’s crucial to maintain a work-life balance, even more so now when a vast majority of people work from home and across a variety of time zones, and the separation between work and a life outside of work gets blurred. This includes turning off notifications, being diligent about not working on weekends, and using your paid time off.

About Fatima:
Fatima is a graduate of the University of Waterloo, Canada. Post graduation, she's worked full-time as a Software Developer at DRW, a trading firm, and currently works at Yelp as a Software Engineer. Fatima is passionate about supporting fellow tech enthusiasts and has spoken at over 70 hackathons across North America in 2022. In addition, she's also been a panelist at Harvard WeCode, presented at the Women Who Code Connect Event, Black is Tech Conference, Tapia Conference, and Women of Silicon Roundabout, London's biggest event for women in technology.


Bob Murphy: A brief introduction to Mastodon and the Fediverse.

Description:
The Fediverse is a collection of communities that is a bit of a throwback to a smaller, more personal time on the internet. There are services for short messaging, audio and video sharing, and event organizing, among other things. Mastodon is a fully open source social media platform, with no advertising, monetizing, or venture capital. It is a part of the Fediverse, a social network that is truly a network, by incorporating ideas and protocols that allow users and information to freely spread throughout a wide diaspora of servers and services. Explore how you might wish to join into the rich, new world that has more of a resemblance of the internet as it was envisioned to be.

About Bob:
Bob is a Linux Systems Administrator who has used GNU/Linux for his own personal use since the late nineties. Bob has used many distributions over the years, starting with Slackware, up to the latest Red Hat and Ubuntu releases.
murph.info

Radar Trends to Watch: February 2023

O'Reilly Radar - Tue, 2023/02/07 - 04:18

This month’s news seems to have been derailed by the three-ring circus: Musk and Twitter, Musk and Tesla, and SBF and FTX. That said, there are a lot of important things happening. We usually don’t say much about computing hardware, but RISC-V is gathering steam. I’m excited by Ion Stoica’s vision of “sky computing,” which is cloud-independent. A similar but even more radical project is Petals, which is a system for running the BLOOM large language model across a large number of volunteer hosts: cloud-free cloud computing, which the authors liken to Bittorrent. There’s been a lot of talk about decentralization; this is the real thing. That model for large-scale computation is more interesting, at least to me, than the ability to run one specific language model.

Artificial Intelligence
  • Adversarial learning tries to confuse machine learning systems by giving them altered input data, tricking them into giving incorrect answers. It is an important technique for improving AI security and accuracy.
  • We all know about AI-generated text, voices, and art; what about handwriting? Calligrapher.ai is a handwriting generator. It’s nowhere near as flexible as tools like Stable Diffusion, but it means that ChatGPT can not only write letters, it can sign them.
  • ChatGPT has been shown to be good at explaining code. It’s also good at re-writing code that has been intentionally obfuscated in a clear, human-readable version. There are clear applications (not all of them ethical) for this ability.
  • Who needs a database for an app’s backend? For that matter, who needs a backend at all? Just use GPT-3.
  • Reinforcement learning from human feedback (RLHF) is a machine learning training technique that integrates humans into the training loop. Humans provide additional rewards, in addition to automated rewards. RLHF, which was used in ChatGPT, could be a good way to build AI systems that are less prone to hate speech and similar problems.
  • Demis Hassabis, founder of DeepMind, advises that humans be careful in adopting AI. Don’t move fast and break things.
  • A group of researchers from Google has published a Deep Learning Tuning Playbook on Github. It recommends a procedure for hyperparameter tuning to optimize the performance of Deep Learning models.
  • Anthropic, a startup founded by former OpenAI researchers, has created a chatbot named Claude with capabilities similar to ChatGPT.  Claude appears to be somewhat less prone to “hallucination” and hate speech, though they are still issues.
  • Satya Nadella has tweeted that Microsoft will offer ChatGPT as part of Azure’s OpenAI service. It isn’t clear how this (paid) service relates to other talk about monetizing ChatGPT.
  • One application for ChatGPT is writing documentation for developers, and providing a conversational search engine for the documentation and code. Writing internal documentation is an often omitted part of any software project.
  • AllenAI (aka AI2) has developed a language model called ACCoRD for generating descriptions of scientific concepts. It is unique in that it rejects the idea of a “best” description, and instead creates several descriptions of a concept, intended for different audiences.
  • A researcher trained a very small neural network to do binary addition, and had some fascinating observations about how the network works.
  • OpenAI is considering a paid, “pro” version of ChatGPT. It’s not clear what additional features the Pro version might have, what it would cost, or whether a free public version with lower performance will remain. The answers no doubt depend on Microsoft’s plans for further integrating ChatGPT into its products.
  • ChatGPT can create a text adventure game, including a multi-user dungeon (MUD) in which the other players are simulated. That’s not surprising in itself. The important question is whether these games have finite boundaries or extend for as long as you keep playing.
  • A startup has built a truth checker for ChatGPT. It filters ChatGPT’s output to detect “hallucinations,” using its own AI that has been trained for a specific domain. They claim to detect 90% of ChatGPT’s errors in a given domain. Users can add their own corrections.
  • Andrej Karpathy has written nanoGPT, a very small version of the GPT language models that can run on small systems–possibly even on a laptop.
  • Petals is a system for running large language models (specifically, BLOOM-176B, roughly the size of GPT-3) collaboratively. Parts of the computation run on different hosts, using compute time donated by volunteers who receive higher priority for their jobs.
  • Having argued that we would eventually see formal languages for prompting natural language text generators, I’m proud to say that someone has done it.
  • DoNotPay has developed an AI “lawyer” that is helping a defendant make arguments in court. The lawyer runs on a cell phone, through which it hears the proceedings. It tells the defendant what to say through Bluetooth earbuds. DoNotPay’s CEO notes that this is illegal in almost all courtrooms. (After receiving threats from bar associations, DoNotPay has abandoned this trial.)
  • Perhaps prompted by claims that Google’s AI efforts have fallen behind OpenAI and others, Google has announced Muse, which generates images from text prompts. They claim that Muse is significantly faster and more accurate than DALL-E 2 and Stable Diffusion.
  • Microsoft has developed an impressive speech synthesis (text-to-speech) model named VALL-E. It is a zero-shot model that can imitate anyone’s voice using only a three-second sample.
  • Amazon has introduced Service Cards for several of their pre-built models (Rekognition, Textract, and Transcribe). Service cards describe the properties of models: how the model was trained, where the training data came from, the model’s biases and weaknesses. They are an implementation of Model Cards, proposed in Model Cards for Model Reporting.
  • The free and open source BLOOM language model can be run on AWS. Getting it running isn’t trivial, but there are instructions that describe how to get the resources you need.
Data
  • How do you use the third dimension in visualization? Jeffrey Heer (one of the creators of D3) and colleagues are writing about “cinematic visualization.”
  • SkyPilot is an open source platform for running data science jobs on any cloud: it is cloud-independent, and a key part of Ion Stoica’s vision of “sky computing” (provider-independent cloud computing).
Security
  • An annotated field guide to detecting phishing attacks might help users to detect phishes before they do damage. According to one study from 2020, most cyber attacks begin with a phish.
  • Docker security scanning tools inspect Docker images for vulnerabilities and other issues. They could become an important part of software supply chain security.
  • Browser-in-browser phishing attacks are becoming more common, and are difficult to detect. In these attacks, a web site pops up a replica of a single sign-on window from Google, Facebook, or some other SSO provider to capture the user’s login credentials.
  • We’re again seeing an increase in advertisements delivering malware or attracting unwary users to web sites that install malware. Ad blockers provide some protection.
  • Amazon has announced that AWS automatically encrypts all new objects stored in S3. Encrypted by default is a big step forward in cloud data security.
  • The Python Package Index (PyPI) continues to suffer from attacks that cause users to install packages infected with malware. Most notably, the PyTorch nightly build was linked to a version that would steal system information. Software supply chain problems continue to plague us.
  • Messaging provider Slack and continuous integration provider CircleCI were both victims of attacks and thefts of software and data. The companies haven’t been forthcoming with details, but it seems likely that CircleCI has lost all customer secrets.
Programming
  • GPU.js is a JavaScript library that transpiles and compiles simple JavaScript functions to run on a GPU.
  • Libsodium is being used to benchmark WebAssembly, which is gradually becoming a mainstream technology.
  • Julia Evans (@b0rk, @b0rk@mastodon.social) has an excellent discussion of the problems that arise from using floating point arithmetic carelessly.
  • Platform engineering may be the latest buzzword, but building reliable pipelines and tools for self-service development and deployment delivers important benefits for programmers and their companies.
  • Codeium is an open source code completion engine, like Copilot, that plugs into Vim. It isn’t clear what kind of language model Codeium uses.
  • YouPlot is a terminal-based plotting tool: no fancy graphics, just your standard terminal window.  Quick and easy.
  • Tetris can be used to implement a general purpose digital computer that, among other things, is capable of running Tetris.
Chips and Chip Design
  • A new generation of processors could use vibration to generate a flow of air through the chip, providing cooling without the need for fans. The developers are collaborating with Intel and targeting high-end laptops.
  • Google wants RISC-V to become a “tier-1” chip architecture for Android phones, giving it the same status as ARM. There is already a riscv64 branch in the source repository, though it’s far from a finished product.
  • Ripes is a visual computer architecture simulator for the RISC-V. You can watch your code execute (slowly). It’s primarily a tool for teaching, but it’s fun to play with.
Things
  • Boston Dynamics’ humanoid robot Atlas now has the ability to grab and toss things (including awkward and heavy objects).  This is a big step towards a robot that can do industrial or construction work.
  • Matter, a standard for smart home connectivity, appears to be gaining momentum. Among other things, it allows devices to interact with a common controller, rather than an app (and possibly a hub) for each device.
  • Science fiction alert: Researchers have created a tractor beam! While it’s very limited, it is capable of pulling specially constructed macroscopic objects.
  • A new catalyst has enabled a specialized solar cell to achieve 9% efficiency in generating hydrogen from water. This is a factor of 10 better than other methods, and approaches the efficiency needed to make “green hydrogen” commercially viable.
Web
  • A not-so private metaverse: Someone has built a “private metaverse” (hosted on a server somewhere for about $12/month) to display his art and to demonstrate that a metaverse can be open, and doesn’t have to be subject to land-grabs and rent-taking by large corporations.
  • Twitter has cut off API access for third party apps. This was a big mistake the first time (a decade ago); it’s an even bigger mistake now.
  • GoatCounter is an alternative to Google Analytics. It provides “privacy-friendly” web analytics. It can be self-hosted, or used as a service (free to non-commercial users).
  • Google is developing a free tool that websites can use to detect and remove material associated with terrorism, as an aid to help moderators.
Biology
  • Where do we go next with mRNA vaccines? Flu, Zika, HIV, cancer treatments? The vaccines are relatively easy to design and to manufacture.
Categories: Technology

Automating the Automators: Shift Change in the Robot Factory

O'Reilly Radar - Tue, 2023/01/17 - 04:33

What would you say is the job of a software developer? A layperson, an entry-level developer, or even someone who hires developers will tell you that job is to … well … write software. Pretty simple.

An experienced practitioner will tell you something very different. They’d say that the job involves writing some software, sure. But deep down it’s about the purpose of software. Figuring out what kinds of problems are amenable to automation through code. Knowing what to build, and sometimes what not to build because it won’t provide value.

They may even summarize it as: “my job is to spot for() loops and if/then statements in the wild.”

I, thankfully, learned this early in my career, at a time when I could still refer to myself as a software developer. Companies build or buy software to automate human labor, allowing them to eliminate existing jobs or help teams to accomplish more. So it behooves a software developer to spot what portions of human activity can be properly automated away through code, and then build that.

This mindset has followed me into my work in ML/AI. Because if companies use code to automate business rules, they use ML/AI to automate decisions.

Given that, what would you say is the job of a data scientist (or ML engineer, or any other such title)?

I’ll share my answer in a bit. But first, let’s talk about the typical ML workflow.

Building Models

A common task for a data scientist is to build a predictive model. You know the drill: pull some data, carve it up into features, feed it into one of scikit-learn’s various algorithms. The first go-round never produces a great result, though. (If it does, you suspect that the variable you’re trying to predict has mixed in with the variables used to predict it. This is what’s known as a “feature leak.”) So now you tweak the classifier’s parameters and try again, in search of improved performance. You’ll try this with a few other algorithms, and their respective tuning parameters–maybe even break out TensorFlow to build a custom neural net along the way–and the winning model will be the one that heads to production.
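
For concreteness, that first go-round might look something like the following scikit-learn sketch, with synthetic data standing in for “pull some data”:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for "pull some data and carve it up into features."
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# First go-round: default parameters, probably not a great result.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# ...then tweak parameters (n_estimators, max_depth, ...) and try again.
```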

You might say that the outcome of this exercise is a performant predictive model. That’s sort of true. But like the question about the role of the software developer, there’s more to see here.

Collectively, your attempts teach you about your data and its relation to the problem you’re trying to solve. Think about what the model results tell you: “Maybe a random forest isn’t the best tool to split this data, but XLNet is.” If none of your models performed well, that tells you that your dataset–your choice of raw data, feature selection, and feature engineering–is not amenable to machine learning. Perhaps you need a different raw dataset from which to start. Or the necessary features simply aren’t available in any data you’ve collected, because this problem requires the kind of nuance that comes with a long career history in this problem domain. I’ve found this learning to be a valuable, though often understated and underappreciated, aspect of developing ML models.

Second, this exercise in model-building was … rather tedious? I’d file it under “dull, repetitive, and predictable,” which are my three cues that it’s time to automate a task.

  • Dull: You’re not here for the model itself; you’re after the results. How well did it perform? What does that teach me about my data?
  • Repetitive: You’re trying several algorithms, but doing roughly the same thing each time.
  • Predictable: The scikit-learn classifiers share a similar interface, so you can invoke the same fit() call on each one while passing in the same training dataset.

Yes, this calls for a for() loop. And data scientists who came from a software development background have written similar loops over the years. Eventually they stumble across GridSearchCV, which takes an estimator and a grid of parameter combinations to try (and, with a little plumbing, can sweep several algorithms). The path is the same either way: setup, start job, walk away. Get your results in a few hours.
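
That hand-rolled loop, and the GridSearchCV version of it, might look roughly like this (reusing the X_train and y_train from the earlier sketch):

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# The hand-rolled version: same fit() call, different algorithm on each pass.
candidates = [LogisticRegression(max_iter=1000),
              RandomForestClassifier(),
              GradientBoostingClassifier()]
for clf in candidates:
    score = cross_val_score(clf, X_train, y_train, cv=5).mean()
    print(type(clf).__name__, round(score, 3))

# The built-in for() loop: GridSearchCV sweeps a parameter grid for one estimator.
search = GridSearchCV(RandomForestClassifier(),
                      param_grid={"n_estimators": [100, 300],
                                  "max_depth": [None, 10, 30]},
                      cv=5)
search.fit(X_train, y_train)  # setup, start job, walk away
print(search.best_params_, search.best_score_)
```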

Building a Better for() loop for ML

All of this leads us to automated machine learning, or autoML. There are various implementations–from the industrial-grade AWS SageMaker Autopilot and Google Cloud Vertex AI, to offerings from smaller players–but, in a nutshell, some developers spotted that same for() loop and built a slick UI on top. Upload your data, click through a workflow, walk away. Get your results in a few hours.

If you’re a professional data scientist, you already have the knowledge and skills to test these models. Why would you want autoML to build models for you?

  • It buys time and breathing room. An autoML solution may produce a “good enough” solution in just a few hours. At best, you’ll get a model you can put in production right now (short time-to-market), buying your team the time to custom-tune something else (to get better performance). At worst, the model’s performance is terrible, but it only took a few mouse clicks to determine that this problem is hairier than you’d anticipated. Or that, just maybe, your training data is no good for the challenge at hand.
  • It’s convenient. Damn convenient. Especially when you consider how Certain Big Cloud Providers treat autoML as an on-ramp to model hosting. It takes a few clicks to build the model, then another few clicks to expose it as an endpoint for use in production. (Is autoML the bait for long-term model hosting? Could be. But that’s a story for another day.) Related to the previous point, a company could go from “raw data” to “it’s serving predictions on live data” in a single work day.
  • You have other work to do. You’re not just building those models for the sake of building them. You need to coordinate with stakeholders and product managers to suss out what kinds of models you need and how to embed them into the company’s processes. And hopefully they’re not specifically asking you for a model, but asking you to use the company’s data to address a challenge. You need to spend some quality time understanding all of that data through the lens of the company’s business model. That will lead to additional data cleaning, feature selection, and feature engineering. Those require the kind of context and nuance that the autoML tools don’t (and can’t) have.
Software Is Hungry, May as Well Feed It

Remember the old Marc Andreessen line that software is eating the world?

More and more major businesses and industries are being run on software and delivered as online services — from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

This was the early days of developers spotting those for() loops and if/then constructs in the wild. If your business relied on a hard-and-fast rule, or a predictable sequence of events, someone was bound to write code to do the work and throw that on a few dozen servers to scale it out.

And it made sense. People didn’t like performing the drudge work. Handing the not-so-fun parts to software divided the duties according to ability: tireless repetition went to the computers; context and special attention to detail stayed with the humans.

Andreessen wrote that piece more than a decade ago, but it still holds. Software continues to eat the world’s dull, repetitive, predictable tasks. Which is why software is eating AI.

(Don’t feel bad. AI is also eating software, as with GitHub’s Copilot. Not to mention, some forms of creative expression. Stable Diffusion, anyone?  The larger lesson here is that automation is a hungry beast. As we develop new tools for automation, we will bring more tasks within automation’s reach.)

Given that, let’s say that you’re a data scientist in a company that’s adopted an autoML tool. Fast-forward a few months. What’s changed?

Your Team Looks Different

Introducing autoML into your workflows has highlighted three roles on your data team. The first is the data scientist who came from a software development background, someone who’d probably be called a “machine learning engineer” in many companies. This person is comfortable talking to databases to pull data, then calling Pandas to transform it. In the past they understood the APIs of TensorFlow and Torch to build models by hand; today they are fluent in the autoML vendor’s APIs to train models, and they understand how to review the metrics.

The second is the experienced ML professional who really knows how to build and tune models. That model from the autoML service is usually good, but not great, so the company still needs someone who can roll up their sleeves and squeeze out the last few percentage points of performance. Tool vendors make their money by scaling a solution across the most common challenges, right? That leaves plenty of niches the popular autoML solutions can’t or won’t handle. If a problem calls for a shiny new technique, or a large, branching neural network, someone on your team needs to handle that.

Closely related is the third role, someone with a strong research background. When the well-known, well-supported algorithms no longer cut the mustard, you’ll need to either invent something out of whole cloth or translate ideas out of a research paper. Your autoML vendor won’t offer that solution for another couple of years, so it’s your problem to solve if you need it today.

Notice that a sufficiently experienced person may fulfill multiple roles here. It’s also worth mentioning that a large shop probably needed people in all three roles even before autoML was a thing.

(If we twist that around: aside from the FAANGs and hedge funds, few companies have both the need and the capital to fund an ongoing ML research function. This kind of department provides very lumpy returns–the occasional big win that punctuates long stretches of “we’re looking into it.”)

That takes us to a conspicuous omission from that list of roles: the data scientists who focused on building basic models. AutoML tools are doing most of that work now, in the same way that the basic dashboards or visualizations are now the domain of self-service tools like AWS QuickSight, Google Data Studio, or Tableau. Companies will still need advanced ML modeling and data viz, sure. But that work goes to the advanced practitioners.

In fact, just about all of the data work is best suited for the advanced folks.  AutoML really took a bite out of your entry-level hires. There’s just not much for them to do. Only the larger shops have the bandwidth to really bring someone up to speed.

That said, even though the team structure has changed, you still have a data team when using an autoML solution. A company that is serious about doing ML/AI needs data scientists, machine learning engineers, and the like.

You Have Refined Your Notion of “IP”

The code written to create most ML models was already a commodity.   We’re all calling into the same Pandas, scikit-learn, TensorFlow, and Torch libraries, and we’re doing the same “convert data into tabular format, then feed to the algorithm” dance. The code we write looks very similar across companies and even industries, since so much of it is based on those open-source tools’ call semantics.

If you see your ML models as the sum total of algorithms, glue code, and training data, then the harsh reality is that your data was the only unique intellectual property in the mix anyway. (And that’s only if you were building on proprietary data.) In machine learning, your competitive edge lies in business know-how and ability to execute. It does not exist in the code.

AutoML drives this point home. Instead of invoking the open-source scikit-learn or Keras calls to build models, your team now goes from Pandas data transforms straight to … the API calls for AWS SageMaker Autopilot or GCP Vertex AI.  The for() loop that actually builds and evaluates the models now lives on someone else’s systems. And it’s available to everyone.

Your Job Has Changed

Building models is still part of the job, in the same way that developers still write a lot of code. While you called it “training an ML model,” developers saw “a for() loop that you’re executing by hand.” It’s time to let code handle that first pass at building models and let your role shift accordingly.

What does that mean, then? I’ll finally deliver on the promise I made in the introduction. As far as I’m concerned, the role of the data scientist (and ML engineer, and so on) is built on three pillars:

  • Translating to numbers and back: ML models only see numbers, so machine learning is a numbers-in, numbers-out game. Companies need people who can translate real-world concepts into numbers (to properly train the models) and then translate the models’ numeric outputs back into a real-world context (to make business decisions). Your model says “the price of this house should be $542,424.86”? Great. Now it’s time to explain to stakeholders how the model came to that conclusion, and how much faith they should put in the model’s answer.
  • Understanding where and why the models break down: Closely related to the previous point is that models are, by definition, imperfect representations of real-world phenomena. When looking through the lens of your company’s business model, what is the impact of this model being incorrect? (That is: what model risk does the company face?)

    My friend Roger Magoulas reminded me of the old George Box quote that “all models are wrong, but some are useful.” Roger emphasized that we must consider the full quote, which is:

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.

  • Spotting ML opportunities in the wild: Machine learning does four things well: prediction (continuous outputs), classification (discrete outputs), grouping things (“what’s similar?”), and catching outliers (“where’s the weird stuff?”). In the same way that a developer can spot for() loops in the wild, experienced data scientists are adept at spotting those four use cases. They can tell when a predictive model is a suitable fit to augment or replace human activity, and more importantly, when it’s not.

Sometimes this is as straightforward as seeing where a model could guide people. Say you overhear the sales team describing how they lose so much time chasing down leads that don’t work. The wasted time means they miss leads that probably would have panned out. “You know … Do you have a list of past leads and how they went? And are you able to describe them based on a handful of attributes? I could build a model to label a deal as a go/no-go. You could use the probabilities emitted alongside those labels to prioritize your calls to prospects.”
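
To make that pitch concrete, here is a minimal, hypothetical version of the go/no-go lead scorer; the attribute names and values are invented, and a real dataset would of course be larger and messier.

    # Hypothetical go/no-go lead scorer; attribute names and data are made up.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    past_leads = pd.DataFrame({
        "employees": [12, 800, 45, 3, 2300, 150, 9, 60],
        "industry":  ["retail", "finance", "retail", "media", "finance", "media", "retail", "finance"],
        "inbound":   [1, 0, 1, 1, 0, 1, 0, 1],   # did the prospect contact us first?
        "closed":    [0, 1, 1, 0, 1, 1, 0, 1],   # the go/no-go label from history
    })

    X = pd.get_dummies(past_leads.drop(columns=["closed"]))  # categories become numbers
    y = past_leads["closed"]
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score new leads; the probabilities, not just the labels, drive call priority.
    new_leads = pd.get_dummies(pd.DataFrame({
        "employees": [500, 7],
        "industry":  ["finance", "retail"],
        "inbound":   [0, 1],
    })).reindex(columns=X.columns, fill_value=0)
    print(model.predict_proba(new_leads)[:, 1])  # higher probability = call sooner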

Other times it’s about freeing people from mind-numbing work, like watching security cameras. “What if we build a model to detect motion in the video feed? If we wire that into an alerts system, our staff could focus on other work while the model kept a watchful eye on the factory perimeter.”

And then, in rare cases, you sort out new ways to express ML’s functionality. “So … when we invoke a model to classify a document, we’re really asking for a single label based on how it’s broken down the words and sequences in that block of text. What if we go the other way? Could we feed a model tons of text, and get it to produce text on demand? And what if that could apply to, say, code?”

It Always Has Been 

From a high level, then, the role of the data scientist is to understand data analysis and predictive modeling, in the context of the company’s use cases and needs. It always has been. Building models was just on your plate because you were the only one around who knew how to do it. By offloading some of the model-building work to machines, autoML tools remove some of that distraction, allowing you to focus more on the data itself.

The data is certainly the most important part of all this. You can consider the off-the-shelf ML algorithms (available as robust, open-source implementations) and unlimited compute power (provided by cloud services) as constants. The only variable in your machine learning work–the only thing you can influence in your path to success–is the data itself.  Andrew Ng emphasizes this point in his drive for data-centric AI, and I wholeheartedly agree.

Making the most of that data will require that you understand where it came from, assess its quality, and engineer it into features that the algorithms can use. This is the hard part. And it’s the part we can’t yet hand off to a machine. But once you’re ready, you can hand those features off to an autoML tool–your trusty assistant that handles the grunt work–to diligently use them to train and compare various models.
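
As a rough sketch of that division of labor, the feature engineering stays in your hands while an autoML tool runs the train-and-compare loop; here the open-source FLAML library stands in for whichever managed service you actually use, and the data and features are invented for illustration.

    # You own the feature engineering; the autoML tool owns train-and-compare.
    # FLAML is used only as a stand-in for a managed autoML service.
    import numpy as np
    import pandas as pd
    from sklearn.datasets import make_classification
    from flaml import AutoML

    # Pretend this frame came out of your pipelines after you checked its
    # provenance and quality.
    X_raw, y = make_classification(n_samples=500, n_features=8, random_state=1)
    df = pd.DataFrame(X_raw, columns=[f"f{i}" for i in range(8)])

    # The part that stays with you: features the algorithms can actually use.
    df["f0_x_f1"] = df["f0"] * df["f1"]                 # an interaction term
    df["f2_log"] = np.log1p(df["f2"] - df["f2"].min())  # tame a skewed column

    # The part you hand off: training and comparing models under a time budget.
    automl = AutoML()
    automl.fit(X_train=df, y_train=y, task="classification", time_budget=30)
    print(automl.best_estimator)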

Software has once again eaten dull, repetitive, predictable tasks. And it has drawn a dividing line, separating work based on ability.

Where to Next?

Some data scientists might claim that autoML is taking their job away. (We will, for the moment, skip past the irony of someone in tech complaining that a robot is taking their job.) Is that true, though? If you feel that building models is your job, then, yes.

For more experienced readers, autoML tools are a slick replacement for their trusty-but-rusty homegrown for() loops: a more polished way to do a first pass at building models. They see autoML tools not as a threat but as a force multiplier that will test a variety of algorithms and tuning parameters while they tackle the important work that actually requires human nuance and experience. Pay close attention to this group, because they have the right idea.

The data practitioners who embrace autoML tools will use their newfound free time to forge stronger connections to the company’s business model. They’ll look for novel ways to apply data analysis and ML models to products and business challenges, and try to find those pockets of opportunity that autoML tools can’t handle.

If you have entrepreneurship in your blood, you can build on that last point and create an upstart autoML company. You may hit on something the big autoML vendors don’t currently support, and they’ll acquire you. (I currently see an opening for clustering-as-a-service, in case you’re looking for ideas.) Or if you focus on a niche that the big players deem too narrow, you may get acquired by a company in that industry vertical.

Software is hungry.  Find ways to feed it.

Categories: Technology

Digesting 2022

O'Reilly Radar - Tue, 2023/01/10 - 06:37

Although I don’t subscribe to the idea that history or technology moves in jerky one-year increments, it’s still valuable to take stock at the start of a new year, look at what happened last year, and decide what was important and what wasn’t.

We started the year with many people talking about an “AI winter.” A quick Google search shows that anxiety about an end to AI funding has continued through the year. Funding comes and goes, of course, and with the prospect of a media-driven recession, there’s always the possibility of a funding collapse. Funding aside, 2022 has been a fantastic year for AI. GPT-3 wasn’t new, of course, but ChatGPT made GPT-3 usable in ways people hadn’t imagined. How will we use ChatGPT and its descendants? I don’t believe they’ll put an end to search. When I search, I’m (usually) more interested in the source than I am in an “answer.” But I have a question. Much has been made of ChatGPT’s ability to “hallucinate” facts. I wonder whether that kind of hallucination could be a prelude to “artificial creativity”? I’ll try to have something more to say about that in the coming year.

GitHub Copilot also wasn’t new in 2022, but in the last year we’ve heard of more and more programmers who are using ChatGPT to write production code. It isn’t just people “kicking the tires”; AI-generated code will inevitably be part of the future. The important questions are: who will it help, and how? Right now, it seems like Copilot will be less likely to help beginners, and more likely to be a force multiplier for experienced programmers, allowing them to focus more on what they are trying to do than on remembering details about syntax and libraries. In the longer term, it might bring about a complete change in what “computer programming” means.

DALL-E 2, Stable Diffusion, and Midjourney made it possible for people without artistic skills to generate pictures based on verbal descriptions, with results that are often fantastic. Google and Facebook haven’t released anything to the public, but they have demoed similar applications. All of these tools are raising important questions about intellectual property and copyright. They are already inspiring new startups with new applications, and those companies will inevitably attract investment.

Those tools aren’t without their problems, and if we really want to avoid another AI Winter, we’d do well to think about what those problems are. Intellectual property is one issue: GitHub is already being sued because Copilot’s output can reproduce code that it was trained on, without regard for the code’s initial license. The art generation programs will inevitably face similar challenges: what happens when you tell an AI system to produce a drawing “in the style of” some artist? What happens when you ask the AI to create an avatar for a woman, and it creates something that’s highly sexualized? ChatGPT’s ability to produce plausible text output is spectacular, but its ability to discriminate fact from non-fact is limited. Will we see a Web that’s flooded with “fake news” and spam? We arguably have that already, but tools like ChatGPT can generate content at a scale that we can’t yet imagine.

At its heart, ChatGPT is really a user interface hack: a chat front end bolted onto an updated version of the GPT-3 language model. “User interface hack” sounds pejorative, but I don’t mean it that way. We now need to start building new applications around these models. UI design is important–and UI design for AI applications is a topic that hasn’t been adequately explored. What can we build with large language and generative art models? How will these models interact with their human users?  Exploring those questions will drive a lot of creativity.

After ChatGPT, perhaps the biggest surprise of 2022 was the rise of Mastodon. Mastodon isn’t new, of course; I’ve been looking in from the outside for some time. I never thought it had achieved critical mass, or that it was capable of achieving it. I was proven wrong when Elon Musk’s antics drove thousands of Twitter users to Mastodon (including me). Mastodon is a federated network of communities that are (mostly) pleasant, friendly, and populated by smart people. The sudden influx of Twitter users proved that Mastodon could scale. There were some growing pains, but not as many as I would have expected. I haven’t seen a single “fail whale.”

The growth of Mastodon proved that the federated model worked. It’s important to think about this. Mastodon is a decentralized service based on the ActivityPub protocol. Nobody owns it; nobody controls it, though individuals control specific servers. And there isn’t a blockchain or a token in sight. In the past year, we’ve been treated to a steady diet of noise about Web3, most of which insists that the next step in online interaction must be built on a blockchain, that everything must be owned, everything must be paid for, and that rent collectors (aka “miners”) will have their hands out taking their cut on each transaction. I won’t go so far as to claim that Mastodon is Web3; but I do think that the next generation of the Web, however it evolves, will look much more like Mastodon than like OpenSea, and that it will be based on protocols like ActivityPub.

Which leads us to blockchains and crypto. I’m not going to engage in Schadenfreude here, but I’ve long wondered what can be built with blockchains. At one time, I thought that supply chain management would be the poster child for the Enterprise Blockchain. Unfortunately, IBM and Maersk have abandoned their TradeLens project. NFTs? I have always been skeptical of the connection between NFTs and the art world. NFTs seemed an awful lot like buying a painting and framing the receipt. They existed purely to show that you could spend cryptocurrency at scale, and the people who spent their coins that way have gotten what they deserved. But I’m not willing to say that there’s no value here. NFTs may help us to solve the problem of online identity, a problem that we haven’t yet solved on the Web (though I’m not convinced that NFT advocates have really understood how complex identity is). Are there other applications? A number of companies, including Starbucks and Universal Studios, are using NFTs to build customer loyalty programs and theme park experiences. At this point, NFTs still look like a technology in search of a problem to solve, but I suspect that the appropriate problem isn’t out there.

There was more in 2022, of course. Will we see a Metaverse, or was that just Facebook’s attempt to change the narrative about its actions? Will Europe continue to take the lead in regulating the tech sector, and will other nations follow? Will our daily lives be improved by a flood of interoperable smart devices? In 2023, we shall see.

Categories: Technology

Radar Trends to Watch: January 2023

O'Reilly Radar - Wed, 2023/01/04 - 04:53

Perhaps unsurprisingly, December was a slow month. Blog posts and articles dropped off over the holidays; the antics of Sam Bankman-Fried and Elon Musk created a lot of distractions. While we won’t engage in Schadenfreude over the Twitter exodus, or SBF’s fall from the financial firmament, the most interesting news of the month is the rise of Mastodon. Mastodon isn’t new, and it doesn’t yet challenge the major social media players. But it’s real, it’s scaling, and its federated model presents a different way of thinking about social media, services, and (indeed) Web3. And ChatGPT? Yes, everyone was talking about it. It’s been known to impersonate Linux, help developers learn new programming languages, and even improve traditional college courses (where its ability to make mistakes can be turned into an asset).

AI
  • One developer has integrated ChatGPT into an IDE, where it can answer questions about the codebase he’s working on. This application promises to be incredibly useful to programmers who are working on large software projects.
  • While most of the discussion around ChatGPT swirls around errors and hallucinations, one college professor has started to use ChatGPT as a teaching tool. His ideas focus on ChatGPT’s flaws: for example, having it write an essay for students to analyze and correct.
  • Geoff Hinton proposes the forward-forward algorithm for training neural networks, which may be as effective as backpropagation while requiring much less power for training. He also proposes new hardware architectures for artificial intelligence.
  • Riffusion is a generative model based on Stable Diffusion that creates sound by generating spectrograms. Riffusion doesn’t work with sound itself; it only produces the spectrogram, which can be converted to sound downstream.
  • A deluge of content generated by AI has the potential to “poison” public sources of training data. What does it mean to train an AI on data that comes from another AI, rather than a human?
  • DeepMind’s AlphaCode has scored better than 45% of human programmers in a coding competition. Their most important innovation appears to be generating many solutions to a problem and running some simple test cases to select which solutions to submit.
  • Stability AI has announced that artists may remove their work from the training set used to build Stable Diffusion 3. Opting out requires creating an account on Have I Been Trained and uploading images to be excluded.
  • The World Cup used an AI “referee” to assist officials in detecting when players are offside. The system incorporates input from (among other things) a “connected ball” that provided position updates 500 times per second.
  • XetHub is “a collaborative storage platform for managing data at scale.” Essentially, it’s GitHub for data. It appears to be built on top of Git, but with a different approach to minimizing duplication, managing large objects, and supporting different file types. It supports repos up to 1TB, with plans to go to 100TB.
  • Large language models can be used to understand physicians’ notes. While these notes are recorded in electronic health records, they are full of abbreviations, many of which are idiosyncratic and difficult for anyone other than the author to understand.
  • ChatGPT’s training set included a lot of information about Linux, so you can tell it to act like a Linux terminal. You’ll get a shell prompt, along with a simulated filesystem. Most system commands work, and even some programming–though the output is predicted from the training set, not the result of actually running a program. Is this the future of operating systems?
  • Simon Willison is using ChatGPT and Copilot to learn Rust by solving problems from Advent of Code. Although ChatGPT occasionally hallucinates answers, it is surprisingly accurate, and capable of explaining what the code it generates is doing.
  • While ChatGPT’s ability to hold a conversation is impressive, its accuracy is not. StackOverflow has prohibited posts generated by ChatGPT because of incorrect answers.
  • Diffusion models, the AI models on which generative art tools like DALL-E are based, are being used to design new proteins that have specific properties. It is then possible to synthesize these proteins in a lab. These new proteins could lead to new kinds of drugs.
  • Adrian Holovaty’s experiments in music generation using ChatGPT are interesting. Adrian isn’t (yet) trying to get ChatGPT to compose new music; it’s more like “Give me Twinkle Twinkle in MusicML.” Still, within limits, the chat server can do it.
  • OpenAI is continuing to improve GPT-3. A variant of GPT-3 has been trained to admit when it doesn’t know something, and is less prone to generating inappropriate responses. However, there are still many shortcomings.
  • AI was used to edit swear words out of a movie in production without reshooting any scenes, bringing its MPAA rating down from R to PG-13.
  • Scott Aaronson’s lecture summarizing his work (to date) on AI safety is worth reading.
Programming
  • Dioxus is a library for write-once-run-anywhere Web and Mobile programming in Rust.
  • Fission is a web-native (as distinct from cloud native) computing stack that is truly local-first. It was designed to build distributed systems like Mastodon (though Mastodon doesn’t use it at this point) that don’t have central servers, and that can scale.
  • GitHub requires all users to enable two-factor authentication by the end of 2023. They have also enabled secret scanning for free on all public repositories. Secret scanning inspects code for authentication credentials and other secrets that may have been inadvertently left in code.
  • Is no code test automation the next trend in software testing? And looking to the future, is it a stepping stone to fully automated testing using artificial intelligence?
  • JavaScript on the edge? Will JavaScript become the common language for edge computing? That depends in part on what edge computing really means, and that continues to be vague. Is “edge computing” just caching on CDNs?
  • Stephen O’Grady suggests some heuristics to evaluate an organization’s commitment to developer experience.
  • Automated reasoning about programs is a useful adjunct to testing. The Halting Problem doesn’t mean that reasoning about errors in code is impossible; it just means that we (occasionally) have to accept “don’t know” as an answer.
  • Julia Evans (@b0rk) has an excellent set of tips for analyzing logs.  Julia has also offered a Debugging Manifesto.
  • AWS Clean Rooms is a new service that allows organizations to cooperate on data analysis without revealing the underlying data to each other.
  • WasmEdge is a lightweight Web Assembly runtime that’s built for cloud native applications, edge computing applications, and embedded systems.
Security
  • A security breach at LastPass, first reported last August, is worse than the company admitted. Customer information was stolen, including customer vaults containing sensitive information. The vaults are (probably) still protected by customers’ master passwords, though it’s possible the attackers have found a back door.
  • A new wiper malware, called Azov, is spreading rapidly in the wild.  Azov is a sophisticated piece of software that is purely destructive: it overwrites files with random data. Recovery is impossible, aside from restoring from backup.
  • Any new technology has security risks. Here’s a summary of security risks that developers working with WebAssembly should be aware of.
  • Bettercap is a next-generation tool for exploring networks: scanning and probing WiFi and Bluetooth, in addition to Ethernet, spoofing common network protocols, and many other features. It’s an all-in-one tool for network reconnaissance and attacks.
Biology
  • In Greenland, scientists have found and sequenced 2-million-year-old DNA. The DNA comes from a number of different plants and animals (including mastodons), and gives a picture of what Greenland was like when it had a warmer climate.
Metaverse
  • Nokia argues that the Industrial Metaverse will be centered on digital twins: computer simulations that run in parallel to real-world systems.
  • Webspaces are a new kind of website that can create 3D worlds, using nothing but static HTML. Webspaces preserve (or reclaim) much of the vision of the early Web: learning by copying and pasting from others’ sites, using the browser as an editor, and self-hosting.
  • Fashion may be the Metaverse’s first killer app. Though it’s fashion that only exists in the Metaverse–a constraint that’s both freeing and limiting.
  • Build your own Decentralized Twitter is a good introduction (first of three parts) to building federated services. Mastodon is the most prominent example of a federated service, but there are many more applications.
Web
  • Although compatibility issues remain, the latest release of the Chrome browser supports passkeys, a replacement for passwords and password managers that is much more secure.
  • danah boyd has published a must-read essay on social media, failure, and Twitter. danah doesn’t draw any conclusions, but gives an excellent analysis of what failure means.
  • The Brave browser is now showing “privacy preserving” ads in its search results. These ads are currently in a limited beta. Ads will be based only on search query, country, and device type. Brave also plans to release a for-pay ad-free browser.
Web3
  • The venerable WinAmp MP3 player now supports music NFTs. It can be linked to a Metamask wallet, and can download and play files that have been purchased via NFT.
Regulation
  • Europe has become the de facto leader in regulating technology; it’s safe to predict that Europe will implement regulations about cybersecurity, algorithmic accountability, and cryptocurrency in the coming year–and that technology companies will have to comply. It’s less clear whether these changes will have any effect outside of Europe.
  • Privacy regulators in Europe have ruled that it is illegal for Facebook to track users’ activity without explicit consent. This ruling seriously limits Facebook’s ability to use targeted ads.
Categories: Technology

What Does Copyright Say about Generative Models?

O'Reilly Radar - Tue, 2022/12/13 - 05:22

The current generation of flashy AI applications, ranging from GitHub Copilot to Stable Diffusion, raises fundamental issues with copyright law. I am not an attorney, but these issues need to be addressed–at least within the culture that surrounds the use of these models, if not the legal system itself.

Copyright protects outputs of creative processes, not inputs. You can copyright a work you produced, whether that’s a computer program, a literary work, music, or an image. There is a concept of “fair use” that’s most applicable to text, but still applicable in other domains. The problem with fair use is that it is never precisely defined. The US Copyright Office’s statement about fair use is a model of vagueness:

Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports. There are no legal rules permitting the use of a specific number of words, a certain number of musical notes, or percentage of a work. Whether a particular use qualifies as fair use depends on all the circumstances.

We are left with a web of conventions and traditions. You can’t quote another work in its entirety without permission. For a long time, it was considered acceptable to quote up to 400 words without permission, though that “rule” was no more than an urban legend, and never part of copyright law. Counting words never shielded you from infringement claims–and in any case, it applies poorly to software as well as works that aren’t written text. Elsewhere the US Copyright Office states that fair use includes “transformative” use, though “transformative” has never been defined precisely. It also states that copyright does not extend to ideas or facts, only to particular expressions of those facts–but we have to ask where the “idea” ends and where the “expression” begins. Interpretation of these principles will have to come from the courts, and the body of US case law on software copyright is surprisingly small–only 13 cases, according to the Copyright Office’s search engine. Although the body of case law for music and other art forms is larger, it’s even less clear how these ideas apply. Just as quoting a poem in its entirety is a copyright violation, you can’t reproduce images in their entirety without permission. But how much of a song or a painting can you reproduce? Counting words isn’t just ill-defined, it is useless for works that aren’t made of words.

These rules of thumb are clearly about outputs, rather than inputs: again, the ideas that go into an article aren’t protected, just the words. That’s where generative models present problems. Under some circumstances, output from Copilot may contain, verbatim, lines from copyrighted code. The legal system has tools to handle this case, even if those tools are imprecise. Microsoft is currently being sued for “software piracy” because of GitHub. The case is based on outputs: code generated by Copilot that reproduces code in its training set, but that doesn’t carry license notices or attribution. It’s about Copilot’s compliance with the license attached to the original software. However, that lawsuit doesn’t address the more important question. Copilot itself is a commercial product that is built from a body of training data, even though it is completely different from that data. It’s clearly “transformative.” In any AI application, the training data is at least as important to the final product as the algorithms, if not more important. Should the rights of the authors of the training data be taken into account when a model is built from their work, even if the model never reproduces their work verbatim? Copyright does not adequately address the inputs to the algorithm at all.

We can ask similar questions about works of art. Andy Baio has a great discussion of an artist, Hollie Mengert, whose work was used to train a specialized version of Stable Diffusion. This model enables anyone to produce Mengert-like artworks from a textual prompt. They’re not actual reproductions; and they’re not as good as her genuine artworks–but arguably “good enough” for most purposes. (If you ask Stable Diffusion to generate “Mona Lisa in the style of DaVinci,” you get something that clearly looks like Mona Lisa, but that would embarrass poor Leonardo.) However, users of a model can produce dozens, or hundreds, of works in the time Mengert takes to make one. We certainly have to ask what it does to the value of Mengert’s art. Does copyright law protect “in the style of”? I don’t think anyone knows. Legal arguments over whether works generated by the model are “transformative” would be expensive, possibly endless, and likely pointless. (One hallmark of law in the US is that cases are almost always decided by people who aren’t experts. The Grotesque Legacy of Music as Property shows how this applies to music.) And copyright law doesn’t protect the inputs to a creative process, whether that creative process is human or cybernetic. Should it? As humans, we are always learning from the work of others; “standing on the shoulders of giants” is a quote with a history that goes well before Isaac Newton used it. Are machines also allowed to stand on the shoulders of giants?


Mona Lisa in the style of DaVinci. DaVinci isn’t worried. (Courtesy Hugo Bowne-Anderson)

To think about this, we need an understanding of what copyright does culturally. It’s a double-edged sword. I’ve written several times about how Beethoven and Bach made use of popular tunes in their music, in ways that certainly wouldn’t be legal under current copyright law. Jazz is full of artists quoting, copying, and expanding on each other. So is classical music–we’ve just learned to ignore that part of the tradition. Beethoven, Bach, and Mozart could easily have been sued for their appropriation of popular music (for that matter, they could have sued each other, and been sued by many of their “legitimate” contemporaries)–but that process of appropriating and moving beyond is a crucial part of how art works.


J. S. Bach’s 371 Choral Copyright Violations. He would have been in trouble if copyright as we now understand it had existed.

We also have to recognize the protection that copyright gives to artists. We lost most of Elizabethan theater because there was no copyright. Plays were the property of the theater companies (and playwrights were often members of those companies), but that property wasn’t protected; there was nothing to prevent another company from performing your play.  Consequently, playwrights had no interest in publishing their plays. The scripts were, literally, trade secrets. We’ve probably lost at least one play by Shakespeare (there’s evidence he wrote a play called Love’s Labors Won); we’ve lost all but one of the plays of Thomas Kyd; and there are other playwrights known through playbills, reviews, and other references for whom there are no surviving works. Christopher Marlowe’s Doctor Faustus, the most important pre-Shakespearian play, is known to us through two editions, both published after Marlowe’s death, and one of those editions is roughly a third longer than the other. What did Marlowe actually write? We’ll never know. Without some kind of protection, authors had no interest in publishing at all, let alone publishing accurate texts.

So there’s a finely tuned balance to copyright, which we almost certainly haven’t achieved in practice. It needs to protect creativity without destroying the ability to learn from and modify earlier works. Free and open source software couldn’t exist without the protection of copyright–though without that protection, open source might not be needed. Patents were intended to play a similar role: to encourage the spread of information by guaranteeing that inventors could profit from their invention, limiting the need for “trade secrets.”

Copying works of art has always been (and still is) a part of an artist’s education. Authors write and rewrite each other’s works constantly; whole careers have been made tracing the interactions between John Milton and William Blake. Whether we’re talking about prose or painting, generative AI devalues traditional artistic technique (as I’ve argued), though possibly giving rise to a different kind of technique: the technique of writing prompts that tell the machine what to create. That’s a task that is neither simple nor uncreative. To take Mona Lisa and go a step further than Da Vinci–or to go beyond facile imitations of Hollie Mengert–requires an understanding of what this new medium can do, and how to control it. Part of Google’s AI strategy appears to be building tools that help artists to collaborate with AI systems; their goal is  to enable authors to create works that are transformative, that do more than simply reproducing a style or piecing together sentences. This kind of work certainly raises questions of reproducibility: given the output of an AI system, can that output be recreated or modified in predictable ways? And it might cause us to realize that the old cliche “A picture is worth a thousand words” significantly underestimates the number of words it takes to describe a picture.

How do we best protect creative freedom? Is a work of art something that can be “owned,” and what does that mean in an age when digital works can be reproduced perfectly, at will? We need to protect both the original artists, like Hollie Mengert, and those who use their original work as a springboard to go beyond. Our current copyright system does that poorly, if at all. (And the existence of patent trolls demonstrates that patent law hasn’t done much better.)  What was originally intended to protect artists has turned into a rent-seeking game in which artists who can afford lawyers monetize the creativity of artists who can’t. Copyright needs to protect the input side of any generative system: it needs to govern the use of intellectual property as training data for machines. But copyright also needs to protect the people who are being genuinely creative with those machines: not just making more works “in the style of,” but treating AI as a new artistic medium. The finely tuned balance that copyright needs to maintain has just become more difficult.

There may be solutions outside of the copyright system. Shutterstock, which previously announced that they were removing all AI-generated images from their catalog, has announced a collaboration with OpenAI that allows the creation of images using a model that has only been trained on images licensed to Shutterstock. Creators of the images used for training will receive a royalty based on images created by the model. Shutterstock hasn’t released any details about the compensation plan, and it’s easy to suspect that the actual payments will be similar to the royalties musicians get from streaming services: microcents per use. But their approach could work with the right compensation plan. DeviantArt has released DreamUp, a model based on Stable Diffusion that allows artists to specify whether models can be trained on their content, along with identifying all of its outputs as computer generated. Adobe has just announced their own set of guidelines for submitting generative art to their Adobe Stock collection, which require that AI-generated art be labeled as such, and that the (human) creators have obtained all the licenses that might be required for the work.

These solutions could be taken a step further. What if the models were trained on licenses, in addition to the original works themselves? It is easy to imagine an AI system that has been trained on the (many) Open Source and Creative Commons licenses. A user could specify what license terms were acceptable, and the system would generate appropriate output–including licenses and attributions, and taking care of compensation where necessary. We need to remember that few of the current generative AI tools that now exist can be used “for free.” They generate income, and that income can be used to compensate creators.

Ultimately we need both solutions: fixing copyright law to accommodate works used to train AI systems, and developing AI systems that respect the rights of the people who made the works on which their models were trained. One can’t happen without the other.

Categories: Technology

Radar Trends to Watch: December 2022

O'Reilly Radar - Tue, 2022/12/06 - 05:21

This month’s news has been overshadowed by the implosion of SBF’s FTX and the possible implosion of Elon Musk’s Twitter. All the noise doesn’t mean that important things aren’t happening. Many companies, organizations, and individuals are wrestling with the copyright implications of generative AI. Google is playing a long game: they believe that the goal isn’t to imitate artworks, but to build better user interfaces for humans to collaborate with AI so they can create something new. Facebook’s AI for playing Diplomacy is an exciting new development. Diplomacy requires players to negotiate with other players, assess their mental state, and decide whether or not to honor their commitments. None of these are easy tasks for an AI. And IBM now has a 433-qubit quantum chip–an important step towards making a useful quantum processor.

Artificial Intelligence
  • Facebook has developed an AI system that plays Diplomacy. Diplomacy is a board game that includes periods for non-binding negotiations between players, leading to collaborations and betrayals. It requires extensive use of natural language, in addition to the ability to understand and maintain relationships with other players.
  • Shutterstock will be collaborating with OpenAI to build a model based on DALL-E that has been trained only on art that Shutterstock has licensed. They will also put in place a plan for compensating artists whose work was used to train the model.
  • Facebook’s large language model for scientific research, Galactica, only survived online for three days. It produced scientific papers that sounded reasonable, but the content was often factually incorrect, including “fake research” attributed to real scientists. It was also prone to generating hateful “research” directed against almost any minority.
  • Google has put a Switch Transformers model on HuggingFace. This is a very large Mixture of Experts model (1.6 trillion parameters) that uses many sub-models, routing different tokens to different models. Despite the size, Switch Transformers are relatively fast and efficient.
  • OneAI has launched a Natural Language Processing-as-a-Service offering, based on OpenAI’s Whisper model. Whisper is relatively small, impressively accurate, and supports multiple languages.
  • AI governance–including the ability to explain and audit results–is a necessity if AI is going to thrive in an era of declining public trust and increasing regulation.
  • Researchers have developed an AI system that learns to identify objects by using a natural language interface to ask humans what they’re seeing. This could be a route towards AI that learns more effectively.
  • Google is developing a human-in-the-loop tool for their large language model LaMDA, designed to help writers interact with AI to create a story. The Wordcraft Writers Workshop is another project about collaborating with LaMDA. “Using LaMDA to write full stories is a dead end.”
  • You didn’t really want a never-ending AI-generated discussion between Werner Herzog and Slavoj Žižek, did you? Welcome to the Infinite Conversation.
  • Code as Policies extends AI code generation to robotics: it uses a large language model to generate Python code for robotic tasks from verbal descriptions. The result is a robot that can perform tasks that it has not been explicitly trained to do. Code is available on GitHub.
  • AskEdith is a natural language interface for databases that converts English into SQL. Copilot for DBAs.
  • Facebook has used AI to build an audio CODEC that is 10 times more efficient than MP3.
  • SetFit is a much smaller language model (1/1600th the size of GPT-3) that allows smaller organizations to build specialized natural language systems with minimal training data.
  • Wide transformer models with fewer attention layers may be able to reduce the size (and power requirements) of large language models while increasing their performance and interpretability.
  • Semi-supervised learning is a partially automated process for labeling large datasets. Starting with a small amount of hand-labeled data, you train a model to label data; use that model; check results for accuracy; and retrain.
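    A minimal sketch of the automated “self-training” variant of that loop, using scikit-learn on synthetic data (the human checking step described above is left out, and the confidence threshold is an illustrative choice):

        # Self-training sketch: unlabeled rows carry the label -1, and the
        # classifier iteratively labels the ones it is most confident about.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.semi_supervised import SelfTrainingClassifier

        X, y = make_classification(n_samples=1000, random_state=0)
        y_partial = y.copy()
        unlabeled = np.random.default_rng(0).random(len(y)) < 0.9   # hide 90% of labels
        y_partial[unlabeled] = -1

        model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
        model.fit(X, y_partial)
        print((model.predict(X[unlabeled]) == y[unlabeled]).mean())  # accuracy on the hidden labels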
Programming
  • DuckDB is a very fast database designed for online analytic processing (OLAP) of small to medium datasets. It runs easily on a laptop and integrates very well with Python.
  • How do you manage SBOM drift? Building a software bill of materials is one thing; keeping it accurate as a project goes through development and deployment is another.
  • Who is using Rust? Time for a study. Nearly 200 companies, including Microsoft and Amazon; Azure’s CTO strongly suggests that developers avoid C or C++ in favor of Rust.
  • What comes after Copilot? Github is looking at voice-to-code: programming without a keyboard.
  • genv is a tool for managing GPU use, an often neglected part of MLOps. Unlike CPUs, GPUs are usually allocated statically, and can’t be reallocated if they’re underused or unused.
  • Multidomain service orchestration could be the next step beyond Kubernetes: orchestration between software components that are running in completely different environments.
  • Rewind, an unreleased product for Macs, claims to record everything you do, see, or hear, so you can look it up later. There are obvious ramifications for privacy and security, though users can start and stop recording. The key technology seems to be extremely effective compression.
  • Progressive delivery for databases? As James Governor points out, database schemas have been left behind by CI/CD. That may be changing.
  • Turbopack, a new Rust-based bundler for Next.js, promises greatly improved performance. Unlike Webpack, Turbopack does incremental builds, and is designed for use in both development and production.
  • Shell scripting never goes out of date. Here are some best practices, starting with “always use bash.”
Quantum Computing
  • Scott Aaronson has posted an “extremely compressed” (3-hour) version of his undergraduate course in Quantum Computing on YouTube. It’s an excellent way to get started.
  • Horizon Quantum Computing is launching a development platform that will let programmers write code in a language like C or C++, and then compile and optimize it for a quantum computer.
  • IBM has created a 433-qubit quantum chip, and updated the Qiskit runtime with improved error correction. This represents a big step forward, though we are still far from usable quantum computing.
Cryptocurrency and Blockchains
  • The Australian Stock Exchange canceled its 6-year-old blockchain experiment, which would have put most of its work onto a blockchain-like shared distributed ledger.
  • Vitalik Buterin responds to the FTX failure by hypothesizing about a “proof of solvency” that would be independent of audits and other “fiat” methods. The theme is familiar: can cryptocurrency move closer to trustlessness?
  • One “selling point” of NFTs has been that royalties can be passed to creators on resale of the NFT. However, many marketplaces do not enforce royalty payments, and building royalties into the smart contracts underlying NFTs is close to impossible. Some marketplaces, including Magic Eden and OpenSea, have developed tools for enforcing royalty payments.
  • Infrastructure for renewable energy is bound to be less centralized. Is it an application for a blockchain? Or is a blockchain just a tool for recentralization? Is it creepy when Shell is arguing for decentralization?
Metaverse
  • Can a nation upload itself to the metaverse? At the COP27 climate summit, Tuvalu’s foreign minister proposed, bitterly, that this may be their only solution to global warming, which will put their entire nation underwater. Their geography, culture, and national sovereignty could be preserved in a virtual world.
  • The Dark Forest is a massive multiplayer online game that is based on a blockchain. It is almost certainly the most complex game based on blockchain technology. There is no central server; it may show a way into building a Metaverse that is truly decentralized.
  • When is VR too connected to the real world? Palmer Luckey, founder of Oculus, has built a VR headset that will kill you if you die in the game. While he says this is just “office art,” he seems to believe that devices like this will eventually become real products.
  • The internet developed organically, in ways nobody could have predicted. Ben Evans argues that if the Metaverse happens, it will also develop organically. That isn’t an excuse not to experiment. But it is a reason not to invest too much in conflicting definitions.
Web
  • The flow of users from Twitter to Mastodon means that the ActivityPub protocol (the protocol behind Mastodon’s federated design) is worth understanding. Mastodon won’t (can’t) make the mistake of disenfranchising developers of new clients and other applications.
  • Google is imposing a penalty on AI-generated content in its rankings. While a reduction of 20% seems small, that penalty causes a significant reduction in traffic.
Categories: Technology

AI’s ‘SolarWinds Moment’ Will Occur; It’s Just a Matter of When

O'Reilly Radar - Tue, 2022/11/29 - 05:36

Major catastrophes can transform industries and cultures. The Johnstown Flood, the sinking of the Titanic, the explosion of the Hindenburg, the flawed response to Hurricane Katrina–each had a lasting impact.

Even when catastrophes don’t kill large numbers of people, they often change how we think and behave. The financial collapse of 2008 led to tighter regulation of banks and financial institutions. The Three Mile Island accident led to safety improvements across the nuclear power industry.

Sometimes a series of negative headlines can shift opinion and amplify our awareness of lurking vulnerabilities. For years, malicious computer worms and viruses were the stuff of science fiction. Then we experienced Melissa, Mydoom, and WannaCry. Cybersecurity itself was considered an esoteric backroom technology problem until we learned of the Equifax breach, the Colonial Pipeline ransomware attack, Log4j vulnerability, and the massive SolarWinds hack. We didn’t really care about cybersecurity until events forced us to pay attention.

AI’s “SolarWinds moment” would make it a boardroom issue at many companies. If an AI solution caused widespread harm, regulatory bodies with investigative resources and powers of subpoena would jump in. Board members, directors, and corporate officers could be held liable and might face prosecution. The idea of corporations paying huge fines and technology executives going to jail for misusing AI isn’t far-fetched–the European Commission’s proposed AI Act includes three levels of sanctions for non-compliance, with fines up to €30 million or 6% of total worldwide annual income, depending on the severity of the violation.

A couple of years ago, U.S. Sen. Ron Wyden (D-Oregon) introduced a bill requiring “companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security.” The bill also included stiff criminal penalties “for senior executives who knowingly lie” to the Federal Trade Commission about their use of data. While it’s unlikely that the bill will become law, merely raising the possibility of criminal prosecution and jail time has upped the ante for “commercial entities that operate high-risk information systems or automated-decision systems, such as those that use artificial intelligence or machine learning.”

AI + Neuroscience + Quantum Computing: The Nightmare Scenario

Compared to cybersecurity risks, the scale of AI’s destructive power is potentially far greater. When AI has its “SolarWinds moment,” the impact may be significantly more catastrophic than a series of cybersecurity breaches. Ask AI experts to share their worst fears about AI and they’re likely to mention scenarios in which AI is combined with neuroscience and quantum computing. You think AI is scary now? Just wait until it’s running on a quantum coprocessor and connected to your brain.

Here’s a more likely nightmare scenario that doesn’t even require any novel technologies: State or local governments using AI, facial recognition, and license plate readers to identify, shame, or prosecute families or individuals who engage in behaviors that are deemed immoral or anti-social. Those behaviors could range from promoting a banned book to seeking an abortion in a state where abortion has been severely restricted.

AI is in its infancy, but the clock is ticking. The good news is that plenty of people in the AI community have been thinking, talking, and writing about AI ethics. Examples of organizations providing insight and resources on ethical uses of AI and machine learning include The Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business, LA Tech4Good, The AI Hub at McSilver, AI4ALL, and the Algorithmic Justice League.

There’s no shortage of suggested remedies in the hopper. Government agencies, non-governmental organizations, corporations, non-profits, think tanks, and universities have generated a prolific flow of proposals for rules, regulations, guidelines, frameworks, principles, and policies that would limit abuse of AI and ensure that it’s used in ways that are beneficial rather than harmful. The White House’s Office of Science and Technology Policy recently published the Blueprint for an AI Bill of Rights. The blueprint is an unenforceable document. But it includes five refreshingly blunt principles that, if implemented, would greatly reduce the dangers posed by unregulated AI solutions. Here are the blueprint’s five basic principles:

  1. You should be protected from unsafe or ineffective systems.
  2. You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  3. You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

It’s important to note that each of the five principles addresses outcomes, rather than processes. Cathy O’Neil, the author of Weapons of Math Destruction, has suggested a similar outcomes-based approach for reducing specific harms caused by algorithmic bias. An outcomes-based strategy would look at the impact of an AI or ML solution on specific categories and subgroups of stakeholders. That kind of granular approach would make it easier to develop statistical tests that could determine if the solution is harming any of the groups. Once the impact has been determined, it should be easier to modify the AI solution and mitigate its harmful effects.

Gamifying or crowdsourcing bias detection are also effective tactics. Before it was disbanded, Twitter’s AI ethics team successfully ran a “bias bounty” contest that allowed researchers from outside the company to examine an automatic photo-cropping algorithm that favored white people over Black people.

Shifting the Responsibility Back to People

Focusing on outcomes instead of processes is critical since it fundamentally shifts the burden of responsibility from the AI solution to the people operating it.

Ana Chubinidze, founder of AdalanAI, a Berlin-based software platform for AI governance, says that using terms like “ethical AI” and “responsible AI” blurs the issue by suggesting that an AI solution–rather than the people who are using it–should be held responsible when it does something bad. She raises an excellent point: AI is just another tool we’ve invented. The onus is on us to behave ethically when we’re using it. If we don’t, then we are unethical, not the AI.

Why does it matter who–or what–is responsible? It matters because we already have methods, techniques, and strategies for encouraging and enforcing responsibility in human beings. Teaching responsibility and passing it from one generation to the next is a standard feature of civilization. We don’t know how to do that for machines. At least not yet.

An era of fully autonomous AI is on the horizon. Would granting AIs full autonomy make them responsible for their decisions? If so, whose ethics will guide their decision-making processes? Who will watch the watchmen?

Blaise Aguera y Arcas, a vice president and fellow at Google Research, has written a long, eloquent, and well-documented article about the possibilities for teaching AIs to genuinely understand human values. His article, titled “Can machines learn how to behave?”, is worth reading. It makes a strong case for the eventuality of machines acquiring a sense of fairness and moral responsibility. But it’s fair to ask whether we–as a society and as a species–are prepared to deal with the consequences of handing basic human responsibilities to autonomous AIs.

Preparing for What Happens Next

Today, most people aren’t interested in the sticky details of AI and its long-term impact on society. Within the software community, it often feels as though we’re inundated with articles, papers, and conferences on AI ethics. “But we’re in a bubble and there is very little awareness outside of the bubble,” says Chubinidze. “Awareness is always the first step. Then we can agree that we have a problem and that we need to solve it. Progress is slow because most people aren’t aware of the problem.”

But rest assured: AI will have its “SolarWinds moment.” And when that moment of crisis arrives, AI will become truly controversial, similar to the way that social media has become a flashpoint for contentious arguments over personal freedom, corporate responsibility, free markets, and government regulation.

Despite hand-wringing, article-writing, and congressional panels, social media remains largely unregulated. Based on our track record with social media, is it reasonable to expect that we can summon the gumption to effectively regulate AI?

The answer is yes. Public perception of AI is very different from public perception of social media. In its early days, social media was regarded as “harmless” entertainment; it took several years for it to evolve into a widely loathed platform for spreading hatred and disseminating misinformation. Fear and mistrust of AI, on the other hand, has been a staple of popular culture for decades.

Gut-level fear of AI may indeed make it easier to enact and enforce strong regulations when the tipping point occurs and people begin clamoring for their elected officials to “do something” about AI.

In the meantime, we can learn from the experiences of the EC. The draft version of the AI Act, which includes the views of various stakeholders, has generated demands from civil rights organizations for “wider prohibition and regulation of AI systems.” Stakeholders have called for “a ban on indiscriminate or arbitrarily-targeted use of biometrics in public or publicly-accessible spaces and for restrictions on the uses of AI systems, including for border control and predictive policing.” Commenters on the draft have encouraged “a wider ban on the use of AI to categorize people based on physiological, behavioral or biometric data, for emotion recognition, as well as dangerous uses in the context of policing, migration, asylum, and border management.”

All of these ideas, suggestions, and proposals are slowly forming a foundational level of consensus that’s likely to come in handy when people begin taking the risks of unregulated AI more seriously than they are today.

Minerva Tantoco, CEO of City Strategies LLC and New York City’s first chief technology officer, describes herself as “an optimist and also a pragmatist” when considering the future of AI. “Good outcomes do not happen on their own. For tools like artificial intelligence, ethical, positive outcomes will require an active approach to developing guidelines, toolkits, testing and transparency. I am optimistic but we need to actively engage and question the use of AI and its impact,” she says.

Tantoco notes that, “We as a society are still at the beginning of understanding the impact of AI on our daily lives, whether it is our health, finances, employment, or the messages we see.” Yet she sees “cause for hope in the growing awareness that AI must be used intentionally to be accurate, and equitable … There is also an awareness among policymakers that AI can be used for positive impact, and that regulations and guidelines will be necessary to help assure positive outcomes.”

Categories: Technology

Technical Health Isn’t Optional

O'Reilly Radar - Tue, 2022/11/22 - 05:25

If every company is a technology company, then every healthy company must have a healthy relationship to technology. However, we haven’t seen any discussions of “technical health,” which suggests that industry at large doesn’t know what differentiates a company that’s been through a successful digital transformation from one that’s struggling. To help us understand technological health, we asked several CTOs in the Asia-Pacific (APAC) region what their companies are doing to prevent security incidents, how they use open source software, how they use technology strategically, and how they retain employees in a challenging job market. We hope that their answers will help companies to build their own strategies for digital transformation.

Being Proactive About Security

We asked the CTOs how their companies prepare for both old and new vulnerabilities. The key is being proactive, as Shashank Kaul, CTO of Webjet, noted. It's important to use tools to scan for vulnerabilities—particularly tools provided by cloud vendors, such as Microsoft Azure's Container Registry, which integrates with Microsoft Defender for Cloud (formerly known as Azure Security Center) to scan containers for vulnerabilities continuously. Webjet also makes use of GitHub's Dependabot alerts, which are warnings generated when code in a GitHub repository uses a dependency with known vulnerabilities or malware. This proactive approach reflects an important shift from older reactive approaches to security, in which you deploy software and hope nothing bad happens. Tools like Container Registry and Dependabot alerts constantly inspect your code so they can warn about potential problems before they become actual problems.
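
To make this concrete, here is a minimal sketch (our own illustration, not something the CTOs described) of how a team might pull its open Dependabot alerts from GitHub's REST API and surface them in a dashboard or nightly job. The repository name and token variable are placeholders, and the field names follow GitHub's Dependabot alerts API as we understand it; check the current API documentation before relying on them.

```python
# A minimal sketch: list open Dependabot alerts for a repository so they can
# be surfaced in a dashboard or CI job. The repo name and token variable are
# placeholders; consult GitHub's REST API docs for the full alert schema.
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]   # a token with access to security alerts
REPO = "example-org/example-repo"           # hypothetical repository

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/dependabot/alerts",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {GITHUB_TOKEN}",
    },
    params={"state": "open", "per_page": 100},
    timeout=30,
)
resp.raise_for_status()

for alert in resp.json():
    advisory = alert["security_advisory"]
    package = alert["dependency"]["package"]["name"]
    print(f'{advisory["severity"].upper():8} {package}: {advisory["summary"]}')
```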

Tim Hope, CTO of Versent, pointed to the importance of identity and role management in the cloud. Jyotiswarup Raiturkar, CTO of Angel One, said something very similar, highlighting the key role played by a zero trust policy that requires continuous validation (i.e., an identity is validated every time a resource is accessed). Raiturkar also emphasized the importance of “least privilege,” in which access to resources is limited on a “need to know” basis. Least privilege and zero trust go together: if you constantly verify identities, and only allow those entities access to the minimum information they need to do their job, you’ve made life much harder for an attacker. Unfortunately, almost all cloud services grant privileges that are overly permissive. It’s been widely reported that many—perhaps most—cloud vulnerabilities stem from misconfigured identity and access management; we’d bet that the same applies to applications that run on-premises.

It’s also important to recognize that “identities” aren’t limited to humans. In a modern software architecture, a lot of the work will be performed by services that access other services, and each service needs its own identity and its own set of privileges. Again, the key is being proactive: thinking in advance about what identities are needed in the system and determining the appropriate privileges that should be granted to each identity. Giving every user and service broad access just because it’s easier to make the system work is a recipe for failure. If you think rigorously about exactly what access every service and user needs, and implement that carefully, you’ve blocked the most important path through which an attacker can breach your infrastructure.
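
As a rough illustration of what a least-privilege check can look like in practice, here is a toy sketch (again our own, not a tool the CTOs named) that scans an AWS-style IAM policy document for wildcard grants, the kind of overly broad access warned about above. The sample policy is hypothetical, and a real audit would also weigh conditions, specific resource ARNs, and service-specific quirks.

```python
# A toy sketch of a least-privilege check: flag policy statements that grant
# wildcard actions or resources in an AWS-style IAM policy document.
# The sample policy below is hypothetical.
import json

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-bucket/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

def overly_permissive(statement):
    """Return True if an Allow statement uses wildcards for actions or resources."""
    if statement.get("Effect") != "Allow":
        return False
    actions = statement.get("Action", [])
    resources = statement.get("Resource", [])
    actions = [actions] if isinstance(actions, str) else actions
    resources = [resources] if isinstance(resources, str) else resources
    return "*" in actions or "*" in resources

for i, stmt in enumerate(policy["Statement"]):
    if overly_permissive(stmt):
        print(f"Statement {i} grants wildcard access; tighten it to specific actions and resources.")
```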

Threat modeling and penetration testing are also key components of a good security strategy, as Raiturkar pointed out. Threat modeling can help you assess the threats that you actually face, how likely they are, and the damage a successful attack can cause. It’s impossible to defend against every possible attack; a company needs to understand its assets and how they’re protected, then assess where it’s most vulnerable. Penetration testing is an important tool for determining how vulnerable you really are rather than how vulnerable you think you are. The insights you derive from hiring a professional to attack your own resources are almost always humbling—but being humbled is always preferable to being surprised. Although penetration testing is largely a manual process, don’t neglect the automated tools that are appearing on the scene. Your attackers will certainly be using automated tools to break down your defenses. Remember: an attacker only needs to find one vulnerability that escaped your attention. Better that someone on your team discovers that vulnerability first.
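
One common way to make threat modeling actionable is a simple likelihood-times-impact score. The sketch below is a toy illustration, not a specific framework used by these companies; the threats and scores are invented, but it shows how a ranked list focuses limited defensive effort on the largest risks first.

```python
# A minimal sketch of risk scoring during threat modeling: rank hypothetical
# threats by likelihood x impact (1-5 scales, higher is worse) so defensive
# effort goes to the biggest risks first. The entries below are illustrative.
threats = [
    {"name": "credential stuffing on login API",   "likelihood": 4, "impact": 3},
    {"name": "misconfigured storage bucket",       "likelihood": 3, "impact": 5},
    {"name": "vulnerable third-party dependency",  "likelihood": 4, "impact": 4},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["name"]}')
```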

Open Source and a Culture of Sharing

The rise of open source software in the 1990s has undoubtedly transformed IT. While vendor lock-in is still a very real issue, the availability of open source software has done a lot to liberate IT. You're no longer tied to Digital Equipment hardware because you bought a DEC compiler and have a few million lines of code using proprietary extensions (a very real problem for technologists in the 1980s and 1990s). More importantly, open source has unleashed tremendous creativity. The internet wouldn't exist without open source. Nor would many popular programming languages, including Go, Rust, Python, JavaScript, and Ruby. And although C and C++ aren't open source, they'd be much less important without the free GCC compiler. At the same time, it's possible to see the cloud as a retreat from open source: you neither know nor care where the software that implements Azure or AWS came from, and many of the services your cloud provider offers are likely to be rebranded versions of open source platforms. This practice is controversial but probably unavoidable given the nature of open source licenses.

So we asked CTOs what role open source played in their organizations. All of the CTOs said that their organizations make use of open source software and frameworks. Chander Damodaran of Brillio noted that, “the culture of sharing solutions, frameworks, and industry-leading practices” has been a crucial part of Brillio’s journey. Similarly, Tim Hope said that open source is critical in building an engineering culture and developing systems. That’s an important statement. Too many articles about engineering culture have focused on foosball and beer in the company fridge. Engineering culture must focus on getting the job done effectively, whether that’s building, maintaining, or running software. These responses suggest that sharing knowledge and solutions is the true heart of engineering culture, and that’s visibly demonstrated by open source. It’s an effective way to get software tools and components that you wouldn’t be able to develop on your own. Furthermore, those tools aren’t tied to a single vendor that might be acquired or go out of business. In the best case, they’re maintained by large communities that also have a stake in ensuring the software’s quality. Bill Joy, one of Sun Microsystems’ founders, famously stated, “No matter who you are, most of the smartest people work for someone else.” Open source allows you to use the contributions of those many smart people who will never be on your staff.

Unfortunately, only two of the CTOs we asked indicated that their staff were able to contribute to open source projects. One of the CTOs said that they were working toward policies that would allow their developers to release projects with company support. It’s almost impossible to imagine a technical company that doesn’t use open source somewhere. The use of open source is so widespread that the health of open source is directly tied to the health of the entire technology sector. That’s why a critical bug in an important project—for example, the recent Log4j vulnerability—has serious worldwide ramifications. It’s important for companies to contribute back to open source projects: fixing bugs, plugging vulnerabilities, adding features, and funding the many developers who maintain projects on a volunteer basis.

Thinking Strategically About Software

The CTOs we questioned had similar views of the strategic function of IT, though they differed in the details. Everyone stressed the importance of delivering value to the customer; value to the customer translates directly into business value. The best approach to delivering this value depends on the application—as Shashank Kaul pointed out, that might require building custom software; outsourcing parts of a project but keeping core, unique aspects of the project internal; or even buying commercial off-the-shelf software. The “build versus buy” decision has plagued CTOs for years. There are many frameworks for making these decisions (just google “build vs buy”), but the key concept is understanding your company’s core value proposition. What makes your company unique? That’s where you should focus your software development effort. Almost everything else can be acquired through open source or commercial software.

According to Tim Hope, the IT group at Versent is small. Most of their work involves integrating software-as-a-service solutions. They don’t build much custom software; they provide data governance and guidelines for other business units, which are responsible for building their own software. While the development of internal tools can take place as needed in different business units, it’s important to realize that data governance is, by nature, centralized. A company needs a standard set of policies about how to handle data, and those policies need to be enforced across the whole organization. Those standards will only become more crucial as regulations about data usage become more prevalent. Companies that haven’t adopted some form of data governance will be playing a high-stakes game of catch-up.

Likewise, Jyotiswarup Raiturkar at Angel One focuses on long-term value. Angel One distinguishes between IT, which supports internal tools (such as email), and the “tech team,” which is focused on product development. The tech team is investing heavily in building low-latency, high-throughput systems that are the lifeblood of a financial services company. Like Versent, Angel One is investing in platforms that support data discovery, data lineage, and data exploration. It should be noted that tracking data lineage is a key part of data governance. It’s extremely important to know where data comes from and how it’s gathered—and that’s particularly true for a firm in financial services, a sector that’s heavily regulated. These aren’t questions that can be left to ad hoc last-minute solutions; data governance has to be consistent throughout the organization.

Although software for internal users (sometimes called “internal customers”) was mentioned, it wasn’t a focus for any of the IT leaders we contacted. We hear increasingly about “self-service” data, democratization, “low code,” and other movements that allow business units to create their own applications. Whatever you call it, it seems that one role for a company’s technology organization is to enable the other business units to serve themselves. IT groups are also responsible for internal tools that are created to make the existing staff more efficient while avoiding the trap of turning internal projects into large IT commitments that are difficult to maintain and never really satisfy the users’ needs.

One solution is for the IT group to encourage employees in other divisions to build their own tools as they need them. This approach puts the IT group in the role of consultants and helpers rather than developers. It requires building a technology stack that’s appropriate for nontechnical employees. For instance, the IT group may need to build a data mesh that allows different units to manage their own data while using the data from other parts of the organization as needed, all subject to good policies for data governance, access control, and security. They’ll also need to learn about appropriate low-code and no-code tools that allow employees to build what they need even if they don’t have software development skills. This investment will give the rest of the company better tools to work with. Users will be able to build exactly what they need, without passing requests up and down an error-prone chain of command to reach the software developers. And the IT burden of maintaining these in-house tools will (we hope) be reduced.
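
The sketch below is a deliberately simplified illustration of that idea: each dataset in the "mesh" carries an owner and an allowed-roles policy, and a single access function enforces the policy before handing data back. The dataset names, roles, and in-memory store are hypothetical; a real implementation would sit on top of a query engine and a proper policy service.

```python
# A toy sketch of governance-aware self-service access: datasets carry an
# owner and an allowed-roles policy, and one access function enforces the
# policy before returning data. Everything here is a hypothetical simplification.
DATASETS = {
    "sales.orders":    {"owner": "sales",     "allowed_roles": {"analyst", "finance"}},
    "hr.salaries":     {"owner": "hr",        "allowed_roles": {"hr"}},
    "web.clickstream": {"owner": "marketing", "allowed_roles": {"analyst", "marketing"}},
}

class AccessDenied(Exception):
    pass

def read_dataset(name: str, user_role: str):
    """Return a handle to the dataset if the caller's role is permitted."""
    meta = DATASETS.get(name)
    if meta is None:
        raise KeyError(f"unknown dataset: {name}")
    if user_role not in meta["allowed_roles"]:
        raise AccessDenied(f"role {user_role!r} may not read {name}")
    return f"<handle to {name}, owned by {meta['owner']}>"

print(read_dataset("sales.orders", "analyst"))   # allowed
# read_dataset("hr.salaries", "analyst")         # would raise AccessDenied
```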

Keeping Employees Happy and Challenged

It goes without saying, but we’ll say it anyway: even with news of tech sector layoffs, today’s job market is very good for employees trying to find new jobs, and very tough for employers trying to hire to support company growth. In many organizations, even maintaining the status quo is a challenge. What are APAC CTOs doing to keep their staff from jumping ship?

Every company had training and development programs, and most had multiple programs, adapted to different learning styles and needs. Some offered online training experiences only; Angel One provides both online and in-person training to its employees. Offering programs for employee training and development is clearly “table stakes,” a necessity for technological health.

It's more important to look at what goes beyond the basics. Webjet recognizes that training can't just take place outside of business hours. Managers are charged with carving out work time (roughly 10%) for employees to participate in training—and while 10% sounds like a small number, it's a significant investment: 10% of a roughly 2,000-hour work year is on the order of 200 hours devoted to training. It's worth noting that our 2021 Data & AI Salary Survey report showed that the largest salary increases went to employees who spent over 100 hours in training programs. While it's a crude metric, those salary increases clearly say something about the value of training to an employer.

Shashank Kaul also observed that Webjet keeps its IT developers as close as possible to the problems being solved and in conversation with their counterparts at customers' firms, avoiding the trap of becoming a "feature factory." This description reminds us of extreme programming, with its regular demos and contact with customers that allowed software projects to stay on target through many midcourse corrections. It's important that Kaul also sees contact with customers and peers as an aid in retaining engineers: no one likes to spend time implementing features that are never used, particularly when they result from inadequate communication about the actual problems being solved. Webjet and Versent also run regular employee hackathons, where anyone in the organization can participate in solving problems.

Jyotiswarup Raiturkar offered some additional ideas to keep employees happy and productive. Angel One has a “permanent work from anywhere” policy that makes it much easier for employees to balance work with their personal life and goals. The ability to work from home, and the time that you get back by avoiding a lengthy commute, is worth a lot: in congested cities, an 8-hour day can easily become a 10- to 12-hour commitment. It’s important that this policy is permanent: employees at many companies got used to working at home during the pandemic and are now unhappy at being asked to return to offices.

Raiturkar also noted that Angel One’s employees can roll out features in their first few days at the company, something we’ve seen at companies that practice DevOps. An important part of Facebook’s “bootcamp” for new employees has been requiring them to deploy code to the site on their first day. Continuous deployment may have more to do with software engineering than human resources, but nothing makes employees feel more like they’re part of a team than the ability to see changes go into production.

What Is Technical Health?

In this brief look at the experience of CTOs in the APAC region, we see a proactive approach to security that includes the software supply chain. We see widespread use of open source, even if employees are limited in their freedom to contribute back to open source projects. We see Agile and DevOps practices that put software developers in touch with their users so that they’re always headed in the right direction. And we see training, hackathons, and work-from-anywhere policies that let employees know that they, their careers, and their home lives are valued.

We hope all companies will consider technical health periodically, ideally when they’re forming plans and setting goals for the coming year. As the business world moves further and further into a radical technical transformation, every company needs to put in place practices that contribute to a healthy technical environment. If all companies are software companies, technical health is not optional.

Categories: Technology

Healthy Data

O'Reilly Radar - Tue, 2022/11/15 - 08:18

This summer, we started asking about “technical health.” We don’t see a lot of people asking what it means to use technology in healthy ways, at least not in so many words. That’s understandable because “technical health” is so broad that it’s difficult to think about.  It’s easy to ask a question like “Are you using agile methodologies?” and assume that means “technical health.”  Agile is good, right?  But agile is not the whole picture. Neither is being “data driven.” Or Lean. Or using the latest, coolest programming languages and frameworks. Nor are any of these trends, present or past, irrelevant.

To investigate what's meant by "technical health," we have begun a series of short surveys to help us understand what technical health means, and to help our readers think about the technical health of their organizations. The first survey looked at the use of data. It ran from August 30, 2022 to September 30, 2022. We received 693 responses, of which 337 were complete (i.e., the respondent answered all the questions). We didn't include the incomplete responses in our results, a practice that's consistent with our other, lengthier surveys.

No single question and answer stood out; we can't say "everybody does X" or "nobody does Y." Whether or not that's healthy in and of itself, it suggests that there isn't yet any consensus about the role data plays. For example, the first question was "What percentage of enterprise-wide decisions are driven primarily by data?" 19% of the respondents answered "25% or less"; 31% said "76% or more." We were surprised to see that the percentage of respondents who said that most decisions aren't data driven was so similar to the percentage who thought they are. The difference between 19% and 31% looks much larger on paper than it is in practice. Yes, the ratio is roughly 1.6:1, but the more important point is that a lot of respondents work for companies that aren't using data in their decision making. Even more significantly, fully half of the respondents put their companies in the "sort of data driven" middle ground (26-50% and 51-75% received 25% and 26% of responses, respectively.) Does this mean that most companies are somewhere along the path towards being data-driven, with the "25% or less" cohort representing companies that are "catching up"? It's hard to say.

We saw similar answers when we asked what percentage of business processes are informed by real-time data: 33% of respondents said 25% or less, while 21% said 76% or more. (26-50% and 51-75% received 22% and 24% of responses, respectively.) Incorporating real-time data into business processes is a heavier lift than running a few reports before a management meeting, so it isn’t surprising that fewer people are making widespread use of real-time data. These responses also suggest that the industry is in the process of transformation, of deciding how to use real-time data. There are many possibilities: managing inventory, supply chains, and manufacturing processes; automating customer service; and reducing time spent on routine paperwork, to name a few. But we don’t yet see a clear direction.

The bane of data science has been the HIPPO: the "highest paid person's opinion." When the HIPPO is in the room, data is used primarily to justify decisions that have already been made. The questions we asked don't tell us much about the presence of the HIPPO, but we have to wonder: Is that why 20% of the respondents say that data doesn't have a big influence in corporate decision-making? Are the 31% who said that over 75% of management decisions are based on data being ironic or naive? We don't know, and need to keep that ambiguity in mind. Data can't be the final word in any decision; we shouldn't underestimate the importance of instinct and a gut understanding of business dynamics. Data is always historical and, as such, is often better at maintaining a status quo than at helping to build a future–though when used well, data can shine light on the status quo, and help you question it. Data that's used solely to justify the HIPPO isn't healthy. Our survey doesn't say much about the influence of the HIPPO. That's something you'll need to ponder when considering your company's technical health.

We've been tracking the democratization of data: the ability of staff members who aren't data scientists, analysts, or something else with "data" in their title to access and use data in their jobs. Staff members need the ability to access and use data on their own, without going through intermediaries like database administrators (DBAs) and other custodians to generate reports and give them the data they need to work effectively. Self-service data is at the heart of the democratization process–and being data-driven isn't meaningful if only a select priesthood has access to the data. Companies are slowly waking up to this reality. 26% of the respondents to our survey said that less than 20% of their company's information workers had access to self-service query and analytics. That's arguably a high percentage (and it was the most popular single answer), but we choose to see the glass as half (or three quarters) full: 74% said that more than 20% had access. (23% of the respondents said that 41% to 60% of their company's data workers had self-service; 15% chose 61% to 80%; and 16% chose 81% to 100%.) No answer jumps out–but remember that, not so long ago, data was the property of actuaries, analysts, and DBAs. The walls between staff members and the data they need to do their job started to break down with the "data science" movement, but data scientists were still specialists and professionals. We're still making the transition, but our survey shows that data is becoming more accessible, to more people, and we believe that's healthy.

Roughly one third (35%) of the respondents said that their organization used a data catalog. That seems low, but it isn’t surprising. While we like to tell each other how quickly technology changes, the fact is that real adoption is almost always slow. Data catalogs are a relatively new technology; their age is measured in years, not decades. They’re gradually being accepted.

We got a similar result when we asked about data governance tools. 58% of the respondents said they weren't using anything ("None of the above," where "the above" included a write-in option). SAP, IBM, SAS, and Informatica were the leading choices (21%, 14%, 12%, and 11% respectively; respondents could select multiple answers). Again, we expect adoption of data governance tools to be slow. Data has been the "wild west" of the technology world for years, with few restrictions on what any organization could do with the data it collected. That party is coming to an end, but nobody's pretending that the hangover is pleasant. Like data catalogs (to which they're closely related), governance tools are relatively new and being adopted gradually.

Looking at the bigger picture, we see that companies are grappling with the demands of self-service data. They are also facing increasing regulation governing the use of data. Both of these trends require tooling to support them. Catalogs help users find and maintain metadata that shows what data exists and how it should be used; governance tools track data provenance and ensure that data is used in accordance with company policies and regulations. Fifteen years ago, we frequently heard “save everything, and wring every bit of value you can out of your data.” In the 2020s, it’s hard to see that as a good, healthy attitude. An important part of technological health is a commitment to use data ethically and legally. We believe we see movement in that direction.
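
As a rough illustration of what a catalog record tracks, here is a minimal sketch of one hypothetical entry: enough metadata for a colleague to find a dataset, see where it came from, and know how it may be used. The field names and example values are ours, not those of any particular catalog product.

```python
# A minimal sketch of what a data catalog record might track. Field names and
# the example entry are hypothetical simplifications.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    owner: str
    description: str
    source_systems: list = field(default_factory=list)   # lineage: where the data comes from
    allowed_uses: list = field(default_factory=list)      # governance: approved purposes
    contains_pii: bool = False

orders = CatalogEntry(
    name="sales.orders",
    owner="sales-engineering@example.com",
    description="One row per completed order, refreshed hourly.",
    source_systems=["ecommerce checkout service", "payment gateway export"],
    allowed_uses=["revenue reporting", "demand forecasting"],
    contains_pii=True,
)
print(orders)
```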

Over the coming months, we’ll investigate technical health in other areas (next up is Security). For data health, we can close with some observations:

  • Data can’t be the only factor in decision making; human judgment plays an important role. But using data simply to justify a human decision that’s already been made is also a mistake. Technical health means knowing when and how to use data effectively; it’s a continuum, not a choice. We believe that companies are on the path to understanding that.
  • Empowering staff to make their own data queries and perform their own analyses can help them become more productive and engaged. But this doesn’t happen on its own. People need to know what data is available to them, and what that data means. That’s the purpose of a data catalog. And the use of data has to comply with regulations and company policies; that’s the purpose of governance. Data catalogs and governance tools are making inroads, but they’ve only started. Technical health means empowering users with the tools they need to make effective, ethical, and legal use of data.

Healthy data improves processes, questions preconceived opinions, and shines a light on practices that are unfair or discriminatory. We don’t expect anyone to look at their company and say “our data practices deserve a gold star”; that misses the point. Maintaining a healthy relationship to data is an ongoing practice, and that practice is still developing. We are learning to make better decisions with data; we are learning to implement governance to use data ethically (to say nothing of legally). Data health means that you and your company are on the path, not that you’ve arrived. We’re all making the same journey.

Categories: Technology