Feed aggregator

Four short links: 16 July 2019

O'Reilly Radar - Tue, 2019/07/16 - 08:10

Quantum TiqTaqToe, Social Media and Depression, Incidents, and Unity ML

  1. Introducing a new game: Quantum TiqTaqToe -- This experience was essential to the birth of Quantum TiqTaqToe. In my quest to understand Unity and Quantum Games, I set out to implement a “simple” game to get a handle on how all the different game components worked together. Having a game based on quantum mechanics is one thing; making sure it is fun to play requires an entirely different skill set.
  2. Association of Screen Time and Depression in Adolescence (JAMA) -- Time-varying associations between social media, television, and depression were found, which appeared to be more explained by upward social comparison and reinforcing spirals hypotheses than by the displacement hypothesis. (via Slashdot)
  3. CAST Handbook -- How to learn more from incidents and accidents.
  4. ML-Agents -- Unity Machine Learning Agents Toolkit, open source.

Continue reading Four short links: 16 July 2019.

Categories: Technology

Managing machine learning in the enterprise: Lessons from banking and health care

O'Reilly Radar - Mon, 2019/07/15 - 04:00

A look at how guidelines from regulated industries can help shape your ML strategy.

As companies use machine learning (ML) and AI technologies across a broader suite of products and services, it’s clear that new tools, best practices, and new organizational structures will be needed. In recent posts, we described requisite foundational technologies needed to sustain machine learning practices within organizations, and specialized tools for model development, model governance, and model operations/testing/monitoring.

What cultural and organizational changes will be needed to accommodate the rise of machine learning and AI? In this post, we'll address this question through the lens of one highly regulated industry: financial services. Financial services firms have a rich tradition of being early adopters of many new technologies, and AI is no exception:

Figure 1. Stage of adoption of AI technologies (by industry). Image by Ben Lorica.

Alongside health care, another heavily regulated sector, financial services companies have historically had to build explainability and transparency into some of their algorithms (e.g., credit scores). In our experience, many of the most popular conference talks on model explainability and interpretability are those given by speakers from finance.

Figure 2. AI projects in financial services and health care. Image by Ben Lorica.

After the 2008 financial crisis, the Federal Reserve issued a new set of guidelines governing models—SR 11-7: Guidance on Model Risk Management. The goal of SR 11-7 was to broaden a set of earlier guidelines which focused mainly on model validation. While there aren’t any surprising things in SR 11-7, it pulls together important considerations that arise once an organization starts using models to power important products and services. In the remainder of this post, we'll list the key areas and recommendations covered in SR 11-7, and explain how they are relevant to recent developments in machine learning. (Note that the emphasis of SR 11-7 is on risk management.)

Sources of model risk

We should clarify that SR 11-7 also covers models that aren’t necessarily based on machine learning: "quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates." With this in mind, there are many potential sources of model risk; SR 11-7 highlights incorrect or inappropriate use of models and fundamental errors. Machine learning developers are beginning to look at an even broader set of risk factors. In earlier posts, we listed things ML engineers and data scientists may have to manage, such as bias, privacy, security (including attacks aimed against models), explainability, and safety and reliability.

Figure 3. Model risk management. Image by Ben Lorica and Harish Doddi.

Model development and implementation

The authors of SR 11-7 emphasize the importance of having a clear statement of purpose so models are aligned with their intended use. This is consistent with something ML developers have long known: models built and trained for a specific application are seldom usable off the shelf in other settings. Regulators behind SR 11-7 also emphasize the importance of data—specifically data quality, relevance, and documentation. While models garner the most press coverage, the reality is that data remains the main bottleneck in most ML projects. With these important considerations in mind, research organizations and startups are building tools focused on data quality, governance, and lineage. Developers are also building tools that enable model reproducibility, collaboration, and partial automation.
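
As one small illustration of what reproducibility and lineage tooling does under the hood, a training run can record a fingerprint of its input data alongside its parameters and seed, so the exact run can later be reproduced or audited. The sketch below is only that, a sketch in Python; the file names and manifest fields are assumptions made for illustration, not the format of any particular tool.

    # Illustrative sketch: record enough about a training run (data fingerprint,
    # parameters, random seed) that the run can later be reproduced or audited.
    # File names and manifest fields are invented for illustration.
    import hashlib
    import json
    import time

    def fingerprint(path):
        """SHA-256 of a training data file, used as a stable identity for lineage."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_run(data_path, params, seed, out="run_manifest.json"):
        manifest = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "data_sha256": fingerprint(data_path),
            "params": params,
            "seed": seed,
        }
        with open(out, "w") as f:
            json.dump(manifest, f, indent=2)
        return manifest

    # Example usage (assumes a local training file exists at this path):
    # record_run("train.csv", {"model": "logistic_regression", "C": 1.0}, seed=42)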

Model validation

SR 11-7 has some specific organizational suggestions for how to approach model validation. The fundamental principle it advances is that organizations need to enable critical analysis by competent teams that are able to identify the limitations of proposed models. First, model validation teams should be composed of people who weren’t responsible for the development of a model. This is similar to recommendations made in a recent report released by The Future of Privacy Forum and Immuta (their report is specifically focused on ML). Second, given the tendency to showcase and reward the work of model builders over that of model validators, appropriate authority, incentives, and compensation policies should be in place to reward teams that perform model validation. In particular, SR 11-7 introduces the notion of "effective challenge":

Staff conducting validation work should have explicit authority to challenge developers and users, and to elevate their findings, including issues and deficiencies. ... Effective challenge depends on a combination of incentives, competence, and influence.

Finally, SR 11-7 recommends that there be processes in place to select and validate models developed by third parties. Given the rise of SaaS and the proliferation of open source research prototypes, this is an issue that is very relevant to organizations that use machine learning.

Model monitoring

Once a model is deployed to production, SR 11-7 authors emphasize the importance of having monitoring tools and targeted reports aimed at decision-makers. This is in line with our recent recommendation that ML operations teams provide dashboards with custom views for all principals (operations, ML engineers, data scientists, and business owners). They also cite another important reason to set up independent risk monitoring teams: the authors point out that in some instances, the incentive to challenge specific models might be asymmetric. Depending on the reward structure within an organization, some parties might be less likely to challenge models that help elevate their own specific key performance indicators (KPIs).
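
To make the monitoring idea concrete, one common check compares the distribution of a deployed model's recent scores against a reference window and flags drift for the relevant dashboard view. The sketch below is a minimal, hedged example in Python, assuming scores are already being logged; the population stability index (PSI), the bucket count, and the alert threshold are illustrative choices, not anything SR 11-7 prescribes.

    # Minimal sketch of an independent monitoring check: compare recent model
    # scores against a reference window using a population stability index (PSI).
    # Thresholds and bucket counts are illustrative, not regulatory guidance.
    import math
    from typing import List

    def psi(reference: List[float], recent: List[float], buckets: int = 10) -> float:
        """Population stability index between two score samples."""
        lo, hi = min(reference), max(reference)
        width = (hi - lo) / buckets or 1.0

        def fractions(sample):
            counts = [0] * buckets
            for x in sample:
                idx = min(int((x - lo) / width), buckets - 1)
                counts[max(idx, 0)] += 1
            return [(c + 1e-6) / len(sample) for c in counts]  # smooth empty buckets

        ref, cur = fractions(reference), fractions(recent)
        return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

    if __name__ == "__main__":
        reference_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
        recent_scores = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]
        drift = psi(reference_scores, recent_scores)
        # A PSI above roughly 0.25 is often treated as a signal worth escalating;
        # the exact threshold and escalation path belong to the risk-control team.
        print(f"PSI = {drift:.3f}", "ALERT" if drift > 0.25 else "ok")

The specific statistic matters less than the arrangement around it: an independent team owns the threshold, the dashboard view, and the escalation path.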

Governance, policies, controls

SR 11-7 highlights the importance of maintaining a model catalog that contains complete information for all models, including those currently deployed, recently retired, and under development. The authors also emphasize that documentation should be detailed enough so that “parties unfamiliar with a model can understand how the model operates, its limitations, and its key assumptions.” These are relevant to ML, and the early tools and open source projects for ML lifecycle development and model governance will need to be supplemented with tools that facilitate the creation of adequate documentation.
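
As an illustration of what a minimal catalog record might capture, the sketch below shows one possible shape for an entry in Python; the field names, statuses, and the example URL are assumptions, not a schema prescribed by SR 11-7 or by any particular governance tool.

    # Minimal sketch of a model catalog entry. Field names and statuses are
    # illustrative; a real catalog would add versioning, approvals, and links
    # to training data lineage and validation reports.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ModelCatalogEntry:
        name: str
        owner: str                    # team accountable for the model
        status: str                   # e.g., "in_development", "deployed", "retired"
        intended_use: str             # the clear statement of purpose
        key_assumptions: List[str] = field(default_factory=list)
        known_limitations: List[str] = field(default_factory=list)
        documentation_url: str = ""   # detailed enough for parties unfamiliar with the model

    catalog = [
        ModelCatalogEntry(
            name="credit-default-scorer",
            owner="retail-risk-ml",
            status="deployed",
            intended_use="Score consumer loan applications for default risk.",
            key_assumptions=["Applicant features resemble the historical training data."],
            known_limitations=["Not validated for small-business lending."],
            documentation_url="https://example.internal/models/credit-default-scorer",
        ),
    ]

    # Reviewers can then filter the catalog, e.g., everything still deployed:
    print([m.name for m in catalog if m.status == "deployed"])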

This section of SR 11-7 also has specific recommendations on roles that might be useful for organizations that are beginning to use more ML in products and services:

  • Model owners make sure that models are properly developed, implemented, and used. In the ML world, these are data scientists, machine learning engineers, or other specialists.
  • Risk-control staff take care of risk measurement, limits, monitoring, and independent validation. In the ML context, this would be a separate team of domain experts, data scientists, and ML engineers.
  • Compliance staff ensure there are specific processes in place for model owners and risk-control staff.
  • External regulators are responsible for making sure these measures are being properly followed across all the business units.

Aggregate exposure

There have been many examples of seemingly well-prepared financial institutions caught off-guard by rogue units or rogue traders who weren’t properly accounted for in risk models. To that end, SR 11-7 recommends that financial institutions consider risk from individual models as well as aggregate risks that stem from model interactions and dependencies. Many ML teams have not started to think of tools and processes for managing risks stemming from the simultaneous deployment of multiple models, but it’s clear that many applications will require this sort of planning and thinking. Creators of emerging applications that depend on many different data sources, pipelines, and models (e.g., autonomous vehicles, smart buildings, and smart cities) will need to manage risks in the aggregate. New digital-native companies (in media, e-commerce, finance, etc.) that rely very heavily on data and machine learning also need systems to monitor many machine learning models individually and in aggregate.
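
One way to make aggregate exposure concrete: if each deployed model reports its own risk indicators, a roll-up across shared dependencies (a data source, a feature pipeline, an upstream model) shows where a single problem would surface in several places at once. The Python sketch below is a hedged illustration, assuming per-model risk scores already exist; the scoring scale and the dependency names are invented.

    # Hedged sketch: roll per-model risk scores up to the data sources they share,
    # so aggregate exposure from a single bad dependency becomes visible.
    from collections import defaultdict

    # Hypothetical per-model risk scores (0 = low risk, 1 = high risk) and the
    # upstream data sources each model depends on.
    models = {
        "pricing": {"risk": 0.2, "depends_on": ["market-feed", "customer-db"]},
        "fraud":   {"risk": 0.6, "depends_on": ["customer-db", "txn-stream"]},
        "churn":   {"risk": 0.4, "depends_on": ["customer-db"]},
    }

    exposure = defaultdict(list)
    for name, info in models.items():
        for source in info["depends_on"]:
            exposure[source].append((name, info["risk"]))

    # Report each shared dependency with the combined risk of the models on it.
    for source, dependents in sorted(exposure.items()):
        total = sum(risk for _, risk in dependents)
        print(f"{source}: {len(dependents)} models, summed risk {total:.1f}")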

Health care and other industries

While we focused this post on guidelines written specifically for financial institutions, companies in every industry will need to develop tools and processes for model risk management. Many companies are already affected by existing (GDPR) and forthcoming (CCPA) privacy regulations. And, as mentioned, ML teams are beginning to build tools to help detect bias, protect privacy, protect against attacks aimed at models, and ensure model safety and reliability.

Health care is another highly regulated industry that AI is rapidly changing. Earlier this year, the U.S. FDA took a big step forward by publishing a Proposed Regulatory Framework for Modifications to AI/ML Based Software as a Medical Device. The document starts by stating that “the traditional paradigm of medical device regulation was not designed for adaptive AI/ML technologies, which have the potential to adapt and optimize device performance in real time to continuously improve health care for patients.”

The document goes on to propose a framework for risk management and best practices for evolving such ML/AI-based systems. As a first step, the authors list modifications that impact users and thus need to be managed:

  • modifications to analytical performance (i.e., model re-training)
  • changes to the software’s inputs
  • changes to its intended use.

The FDA proposes a total product lifecycle approach that requires different regulatory approvals. For the initial system, a premarket assurance of safety and effectiveness is required. For real-time performance, monitoring is required—along with logging, tracking, and other processes supporting a culture of quality—but not regulatory approval of every change.

This regulatory framework is new and was published in order to receive comments from the public before a full implementation. It still lacks requirements for localized measurement of safety and effectiveness, as well as for the evaluation and elimination of bias. However, it’s an important first step for developing a fast-growing AI industry for health care and biotech with a clear regulatory framework, and we recommend that practitioners stay educated on it as it evolves.

Summary

Every important new wave of technologies brings benefits and challenges. Managing risks in machine learning is something organizations will increasingly need to grapple with. SR 11-7 from the Federal Reserve contains many recommendations and guidelines that map well to the needs of companies that are integrating machine learning into products and services.

Related content:

Continue reading Managing machine learning in the enterprise: Lessons from banking and health care.

Categories: Technology

Four short links: 15 July 2019

O'Reilly Radar - Mon, 2019/07/15 - 01:00

Climbing Robot, Programming and Programming Languages, Media Player, and Burnout Shops

  1. NASA Climbing Robot -- a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence to find its way around obstacles.
  2. Programming and Programming Languages — a new edition of a book that introduces programming and programming languages at the same time.
  3. IINA -- The modern media player for macOS. Open source, and very good.
  4. Job Burnout in Professional and Economic Contexts (PDF) — In recent times, we are seeing the development of new 'burnout shops' that are not short-term projects, but are long-term models for doing business. A new word in my lexicon, on a subject of interest to me.

Continue reading Four short links: 15 July 2019.

Categories: Technology

Four short links: 12 July 2019

O'Reilly Radar - Fri, 2019/07/12 - 03:50

Hosting Hate, Releasing, Government Innovation, and Voice Cloning

  1. The Dirty Business of Hosting Hate Online (Gizmodo) -- an interesting rundown of who is hosting some of the noxious sites on the web.
  2. Releasing Fast and Slow -- Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts; however, rapid releases have shorter delays; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, a higher test coverage, and a lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects—e.g., design debt.
  3. Embracing Innovation in Government (OECD) -- a global review that explores how governments are innovating and taking steps to make innovation a routine and integrated practice across the globe.
  4. Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning -- We present a multispeaker, multilingual text-to-speech (TTS) synthesis model based on Tacotron that is able to produce high-quality speech in multiple languages. Moreover, the model is able to transfer voices across languages—e.g., synthesize fluent Spanish speech using an English speaker's voice, without training on any bilingual or parallel examples. Such transfer works across distantly related languages—e.g. English and Mandarin.

Continue reading Four short links: 12 July 2019.

Categories: Technology

PLUG Security meeting on 7/18

PLUG - Thu, 2019/07/11 - 20:00
At this month's PLUG Security meeting:
Donald McCarthy: passiveDNS for Fun and Profit (Part 1)

For more information:
http://phxlinux.org/index.php/meetings/20-plug-security.html

Description:
If your DNS infrastructure has a bad day, your network has a bad day. If your DNS infrastructure has a good day, something else is bound to go wrong. PassiveDNS generally won't help you fix either.

PassiveDNS is a historical look at observed DNS queries over time. It is akin to the Internet Archive's Wayback Machine, but for DNS zones. As an operations and security tool, it is valuable and not easily replaced by another type of data.
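
For readers unfamiliar with the data model, a passiveDNS store is essentially a log of (name, record type, answer) observations with first-seen and last-seen timestamps and a hit count. The Python sketch below is a toy in-memory version meant only to show the shape of the data; real deployments ingest sensor traffic at scale and look nothing like a few dictionaries.

    # Toy illustration of a passiveDNS store: remember every (name, rrtype, rdata)
    # observation with first-seen/last-seen timestamps and a hit count.
    from datetime import datetime, timezone

    class PassiveDNSStore:
        def __init__(self):
            # (name, rrtype, rdata) -> {"first_seen", "last_seen", "count"}
            self.records = {}

        def observe(self, name, rrtype, rdata, when=None):
            when = when or datetime.now(timezone.utc)
            key = (name.lower(), rrtype.upper(), rdata)
            rec = self.records.setdefault(
                key, {"first_seen": when, "last_seen": when, "count": 0})
            rec["last_seen"] = when
            rec["count"] += 1

        def lookup(self, name):
            """All answers ever observed for a name, with their history."""
            return {k: v for k, v in self.records.items() if k[0] == name.lower()}

    store = PassiveDNSStore()
    store.observe("example.com", "A", "93.184.216.34")
    store.observe("example.com", "A", "93.184.216.34")
    for key, rec in store.lookup("example.com").items():
        print(key, rec["count"], rec["first_seen"].date())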

In this presentation, we will cover exactly what passiveDNS is and isn't, passiveDNS architecture, some security use cases, and, if time allows, some live demonstration.

In part 2 of the presentation (in a future month), I will demonstrate some passiveDNS tooling and more in-depth practical knowledge for turning theoretical use cases into automated assistance for a SOC or NOC.

About Donald:
Donald "Mac" McCarthy is a 15 year veteran of the IT industry with the last 8 years focused on InfoSec. He has worked on a variety of different systems ranging from cash registers to super computers. It was while serving as a systems administrator for a scientific computing cluster that he discovered his passion for using linux for highly distributed complex tasks. His current focus is using linux with open source technologies like kafka and elastic search to build tooling for security analysts and network operations. He is a proud Veteran of the United States Army and recently relocated from Atlanta to the East Valley.

Four short links: 9 July 2019

O'Reilly Radar - Tue, 2019/07/09 - 04:40

Future of Work, GRANDstack, Hilarious Law Review Article, and The Platform Excuse

  1. At Work, Expertise Is Falling Out of Favor (The Atlantic) -- an interesting longform exploration of "the future of work" (aka automation, generalists, lifelong learning) in the context of the Navy's Littoral Combat Ship experiment. So much applicability to the business world ("experiment" becomes "must succeed flagship project" when CEO changes; chaos is opportunity to learn; etc.).
  2. GRANDstack -- GraphQL, React, Apollo, and Neo4j.
  3. The Most Important Law Review Article You’ll Never Read: A Hilarious (in the Footnotes) Yet Serious (in the Text) Discussion of Law Reviews and Law Professors (SSRN) -- the best discussion of foolish academic publishing measures you'll read today.
  4. The "Platform" Excuse is Dying (The Atlantic) -- The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer: "We’re platforms!" But activists and legislators are now saying, "So what?"

Continue reading Four short links: 9 July 2019.

Categories: Technology

The circle of fairness

O'Reilly Radar - Tue, 2019/07/09 - 04:00

We shouldn't ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement.

Fairness isn't so much about "being fair" as it is about "becoming less unfair." Fairness isn't an absolute; we all have our own (and highly biased) notions of fairness. On some level, our inner child is always saying: "But that's not fair." We know humans are biased, and it's only in our wildest fantasies that we believe judges and other officials who administer justice somehow manage to escape the human condition. Given that, what role does software have to play in improving our lot? Can a bad algorithm be better than a flawed human? And if so, where does that lead us in our quest for justice and fairness?

While we talk about AI being inscrutable, in reality it's humans who are inscrutable. In Discrimination in the Age of Algorithms, Jon Kleinberg, et al., argue that algorithms, while unfair, can at least be audited rigorously. Humans can't. If we ask a human judge, bank officer, or job interviewer why they made a particular decision, we'll probably get an answer, but we'll never know whether that answer reflects the real reason behind the decision. People often don’t know why they make a decision, and even when someone attempts an honest explanation, we never know whether there are underlying biases and prejudices they aren't aware of. Everybody thinks they're "fair," and few people will admit to prejudice. With an algorithm, you can at least audit the data that was used to train the algorithm and test the results the algorithm gives you. A male manager will rarely tell you he doesn't like working with women, or he can't trust people of color. Algorithms don't have those underlying and unacknowledged agendas; the agendas are in the training data, hiding in plain sight if we only search for them. We have the tools we need to make AI transparent–not explainable, perhaps, but we can expose bias, whether it’s hiding in the training data or the algorithm itself.

Auditing can reveal when an algorithm has reached its limits. Julia Dressel and Hany Farid, studying the COMPAS software for recommending bail and prison sentences, found that it was no more accurate than randomly chosen people at predicting recidivism. Even more striking, they built a simple classifier that matched COMPAS’s accuracy using only two features–the defendant’s age and number of prior convictions–not the 137 features that COMPAS uses. Their interpretation was that there are limits to prediction, beyond which providing a richer set of features doesn’t add any signal. Commenting on this result, Sharad Goel offers a different interpretation, that “judges in the real world have access to far more information than the volunteers...including witness testimonies, statements from attorneys, and more. Paradoxically, that informational overload can lead to worse results by allowing human biases to kick in.” In this interpretation, data overload can enable unfairness in humans. With an algorithm, it’s possible to audit the data and limit the number of features if that’s what it takes to improve accuracy. You can’t do that with humans; you can’t limit their exposure to extraneous data and experiences that may bias them.
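
To see how small such a model is, the sketch below fits a generic two-feature logistic regression on age and prior convictions, assuming scikit-learn is available. The numbers are synthetic and invented; this is not the COMPAS data or the exact classifier Dressel and Farid built, only an illustration of the scale of the thing.

    # Hedged sketch of a two-feature recidivism classifier (age, prior convictions).
    # The data here is synthetic and illustrative; it is not the COMPAS dataset or
    # the exact model Dressel and Farid evaluated.
    from sklearn.linear_model import LogisticRegression

    # Each row: [age, number_of_prior_convictions]; label 1 = re-arrested within two years.
    X = [[19, 3], [23, 5], [45, 0], [37, 1], [52, 0], [21, 2], [30, 4], [60, 0]]
    y = [1, 1, 0, 0, 0, 1, 1, 0]

    model = LogisticRegression().fit(X, y)
    print(model.predict([[25, 2], [50, 0]]))        # predicted labels
    print(model.predict_proba([[25, 2], [50, 0]]))  # predicted probabilities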

Understanding the biases that are present in training data isn't easy or simple. As Kleinberg points out, properly auditing a model would require collecting data about protected classes; it's difficult to tell whether a model shows racial or gender bias without data about race and gender, and we frequently avoid collecting that data. In another paper, Kleinberg and his co-authors show there are many ways to define fairness that are mathematically incompatible with each other. But understanding model bias is possible, and if possible, it should be possible to build AI systems that are at least as fair as humans, if not more fair.
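
As a small, hypothetical illustration of both points, that auditing requires data about protected classes and that reasonable definitions of fairness can disagree, the Python sketch below computes two common group metrics over invented predictions:

    # Hedged sketch: two common group-fairness checks over hypothetical predictions.
    # Without the "group" column (the protected attribute), neither can be computed.
    records = [
        # (group, true_label, predicted_label)
        ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
    ]

    def positive_rate(group):
        preds = [p for g, _, p in records if g == group]
        return sum(preds) / len(preds)

    def true_positive_rate(group):
        rows = [(t, p) for g, t, p in records if g == group and t == 1]
        return sum(p for _, p in rows) / len(rows)

    # Demographic parity compares overall positive prediction rates...
    dp_gap = abs(positive_rate("A") - positive_rate("B"))
    # ...while equal opportunity compares true positive rates among the qualified.
    eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))
    print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")

In this toy data the two groups receive positive predictions at identical rates, yet their true positive rates differ sharply; satisfying one definition of fairness while violating another is exactly the tension those incompatibility results formalize.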

This process is similar to the 19th-century concept of the "hermeneutic circle." A literary text is inseparable from its culture; we can't understand the text without understanding the culture, nor can we understand the culture without understanding the texts it produced. A model is inseparable from the data that was used to train it; but analyzing the output of the model can help us to understand the data, which in turn enables us to better understand the behavior of the model. To philosophers of the 19th century, the hermeneutic circle implies gradually spiraling inward: better historical understanding of the culture that produces the text enables a better understanding of the texts the culture produced, which in turn enables further progress in understanding the culture, and so on. We approach understanding asymptotically.

I’m bringing up this bit of 19th-century intellectual history because the hermeneutic circle is, if nothing else, an attempt to describe a non-trivial iterative process for answering difficult questions. It’s a more subtle and self-reflective process than “fail forward fast” or even gradient descent. And awareness of the process is important. AI won’t bring us an epiphany in which our tools suddenly set aside years of biased and prejudiced history. That’s what we thought when we “moved fast and broke things”: we thought we could non-critically invent ourselves out of a host of social ills. That didn’t happen. If we can get on a path toward doing better, we are doing well. And that path certainly entails a more complex understanding of how to make progress. We shouldn't ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement. If we can make progress through several possibly painful iterations, we approach the center.

The hermeneutic circle also reminds us that understanding comes from looking at both the particular and the general: the text and the context. That is particularly important when we’re dealing with data and with AI. It is very easy for human subjects to become abstractions–rows in a database that are assigned a score, like the probability of committing a crime. When we don’t resist that temptation, when we allow ourselves to be ruled by abstractions rather than remembering our abstractions represent people, we will never be “fair”: we’ve lost track of what fair means. It’s impossible to be fair to a table in a database. Fairness is always about individuals.

We're right to be skeptical. First, European thought has been plagued by the notion that European culture is the goal of human history. “Move fast and break things” is just another version of that delusion: we’re smart, we’re technocrats, we’re the culmination of history, of course we’ll get it right. If our understanding of "fairness" degenerates into an affirmation of what we already are, we are in trouble. It's dangerous to put too much faith in our ability to perform audits and develop metrics: it's easy to game the system, and it's easy to trick yourself into believing you've achieved something you haven't. I’m encouraged, though, by the idea that the hermeneutic circle is a way of getting things right by being slightly less wrong. It’s a framework that demands humility and dialog. For that dialog to work, it must take into account the present and the past, the individual and the collective data, the disenfranchised and the franchised.

Second, we have to avoid turning the process of fairness into a game: a circle where you're endlessly chasing your tail. It's easy to celebrate the process of circling while forgetting that the goal isn't publishing papers and doing experiments. It’s easy to say “we’re not making any progress, and we probably can’t make any progress, but at least our salaries are being paid and we’re doing interesting work.” It’s easy to play the circle game when it can be proven that different definitions of fairness are incompatible, or when contemplating the enormous number of dimensions in which one might want to be fair. And we will have to admit that fairness is not an absolute concept graven on stone tablets, but one that is fundamentally situational.

It was easy for the humanistic project of interpretation to lose itself in the circle game because it never had tools like audits and metrics. It could never measure whether it was getting closer to its goal, and when you can't measure your progress, it's easy to get lost. But we can measure disenfranchisement, and we can ensure that marginalized people are included in our conversations, so we understand what being "fair" means to people who are outside the system. As Cathy O'Neil has suggested, we can perform audits of black-box systems. We can understand fairness will always be elusive and aspirational, and use that knowledge to build appeal and redress into our systems. We can't let the ideal of perfect fairness become an excuse for inaction. We can make incremental progress toward building a world that's better for all of us.

We'll never finish that project, in part because the issues we're tracking will always be changing, and our old problems will mutate to plague us in new ways. We’ll never be done because we will have to deal with messy questions like what “fair” means in any given context, and those contexts will change constantly. But we can make progress: having taken one step, we'll be in a position to see the next.

Continue reading The circle of fairness.

Categories: Technology

PLUG meeting on Jul 11th

PLUG - Mon, 2019/07/08 - 23:01
We'll have 2 presenters this month with a distribution theme.

Artemii Kropachev: Red Hat Enterprise Linux 8 Beta 1 Overview

Description:
Learn about the first major release of Red Hat Enterprise Linux in over four years. The latest release features unprecedented ease of deployment, migration, and management, enabling you to upgrade existing customers and attract new ones.
Red Hat Enterprise Linux 8 gives organizations a stable, security-focused, and consistent foundation across hybrid cloud deployments—and the tools they need to deliver applications and workloads faster with less effort.

About Artemii:
Worldwide IT expert and international consultant with over 20 years of high-level IT experience and expertise. I have trained, guided, and consulted hundreds of architects, engineers, developers, and IT experts around the world since 2001. My architect-level experience covers data center, cloud, DevOps, and NFV solutions built on top of Red Hat and open source technologies. I am one of the most highly certified Red Hat specialists in the world.


der.hans: Hey Buster! Debian 10 released

Description:
Debian 10 brings with it many ch-ch-changes.

Reproducible Builds, Wayland, AppArmor, nftables, CUPS.

10 hardware architectures, 59,000 packages, 28,939 source packages, 11,610,055 source files, and 76 languages.

Stretch updates.

Get or upgrade to Debian 10 now.

Coming soon on Blu-ray.

About der.hans:
der.hans is a Free Software, technology, and entrepreneurial veteran. He is a repeat author for Linux Journal; his article about online privacy and security using a password manager was the cover article for the January 2017 issue.

He's chairman of the Phoenix Linux User Group (PLUG), BoF organizer for the Southern California Linux Expo (SCaLE), and founder of the Free Software Stammtisch and Stammtisch Job Nights.

He often presents at large community-led conferences (SCaLE, SeaGL, LFNW, Tübix) and many local groups.

https://floss.social/@FLOX_advocate
https://mastodon.social/@lufthans

Highlights from the O'Reilly Artificial Intelligence Conference in Beijing 2019

O'Reilly Radar - Mon, 2019/07/08 - 08:51

Experts explore the future of hiring, AI breakthroughs, embedded machine learning, and more.

Experts from across the AI world came together for the O'Reilly Artificial Intelligence Conference in Beijing. Below you'll find links to highlights from the event.

The future of hiring and the talent market with AI

Maria Zheng examines AI and its impact on people’s jobs, quality of work, and overall business outcomes.

The future of machine learning is tiny

Pete Warden digs into why embedded machine learning is so important, how to implement it on existing chips, and some of the new use cases it will unlock.

AI and systems at RISELab

Ion Stoica outlines a few projects at the intersection of AI and systems that UC Berkeley's RISELab is developing.

Top AI breakthroughs you need to know

Abigail Hing Wen discusses some of the most exciting recent breakthroughs in AI and robotics.

Data orchestration for AI, big data, and cloud

Haoyuan Li offers an overview of a data orchestration layer that provides a unified data access and caching layer for single cloud, hybrid, and multicloud deployments.

AI and retail

Mikio Braun takes a look at Zalando and the retail industry to explore how AI is redefining the way ecommerce sites interact with customers.

Why do we say AI should be cloud native?

Yangqing Jia reviews industry trends supporting the argument that AI should be cloud native.

Designing computer hardware for artificial intelligence

Michael James examines the fundamental drivers of computer technology and surveys the landscape of AI hardware solutions.

Toward learned algorithms, data structures, and systems

Tim Kraska outlines ways to build learned algorithms and data structures to achieve “instance optimality” and unprecedented performance for a wide range of applications.

Continue reading Highlights from the O'Reilly Artificial Intelligence Conference in Beijing 2019.

Categories: Technology

AI and retail

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Mikio Braun takes a look at Zalando and the retail industry to explore how AI is redefining the way ecommerce sites interact with customers.

Continue reading AI and retail.

Categories: Technology

The future of hiring and the talent market with AI

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Maria Zheng examines AI and its impact on people’s jobs, quality of work, and overall business outcomes.

Continue reading The future of hiring and the talent market with AI.

Categories: Technology

Top AI breakthroughs you need to know

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Abigail Hing Wen discusses some of the most exciting recent breakthroughs in AI and robotics.

Continue reading Top AI breakthroughs you need to know.

Categories: Technology

Data orchestration for AI, big data, and cloud

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Haoyuan Li offers an overview of a data orchestration layer that provides a unified data access and caching layer for single cloud, hybrid, and multicloud deployments.

Continue reading Data orchestration for AI, big data, and cloud.

Categories: Technology

Toward learned algorithms, data structures, and systems

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Tim Kraska outlines ways to build learned algorithms and data structures to achieve “instance optimality” and unprecedented performance for a wide range of applications.

Continue reading Toward learned algorithms, data structures, and systems.

Categories: Technology

The future of machine learning is tiny

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Pete Warden digs into why embedded machine learning is so important, how to implement it on existing chips, and some of the new use cases it will unlock.

Continue reading The future of machine learning is tiny.

Categories: Technology

AI and systems at RISELab

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Ion Stoica outlines a few projects at the intersection of AI and systems that UC Berkeley's RISELab is developing.

Continue reading AI and systems at RISELab.

Categories: Technology

Designing computer hardware for artificial intelligence

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Michael James examines the fundamental drivers of computer technology and surveys the landscape of AI hardware solutions.

Continue reading Designing computer hardware for artificial intelligence.

Categories: Technology

Four short links: 8 July 2019

O'Reilly Radar - Mon, 2019/07/08 - 03:50

Algorithmic Governance, DevOps Assessment, Retro Language, and Open Source Satellite

  1. Algorithmic Governance and Political Legitimacy (American Affairs Journal) -- Mechanized judgment resembles liberal proceduralism. It relies on our habit of deference to rules, and our suspicion of visible, personified authority. But its effect is to erode precisely those procedural liberties that are the great accomplishment of the liberal tradition, and to place authority beyond scrutiny. I mean “authority” in the broadest sense, including our interactions with outsized commercial entities that play a quasi-governmental role in our lives. That is the first problem. A second problem is that decisions made by an algorithm are often not explainable, even by those who wrote the algorithm, and for that reason cannot win rational assent. This is the more fundamental problem posed by mechanized decision-making, as it touches on the basis of political legitimacy in any liberal regime.
  2. The 27-Factor Assessment Model for DevOps -- The factors are the cross-product of current best practices for three dimensions (people, process, and technology) with nine pillars (leadership, culture, app development/design, continuous integration, continuous testing, infrastructure on demand, continuous monitoring, continuous security, continuous delivery/deployment). A sketch enumerating all 27 combinations follows this list.
  3. Millfork -- a middle-level programming language targeting 6502- and Z80-based microcomputers and home consoles.
  4. FossaSat-1 (Hackaday) -- FossaSat-1 will provide free and open source IoT communications for the globe using inexpensive LoRa modules, where anyone will be able to communicate with a satellite using modules found online for under 5€ and basic wire mono-pole antennas.
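
The arithmetic behind item 2 is simply a cross product of the three dimensions and nine pillars named in the quote, giving 27 factors. The Python sketch below enumerates them; the "dimension x pillar" labels are my own shorthand, not the assessment model's official factor names.

    # The 27 factors are the cross product of 3 dimensions and 9 pillars
    # named in the quoted summary above.
    from itertools import product

    dimensions = ["people", "process", "technology"]
    pillars = [
        "leadership", "culture", "app development/design",
        "continuous integration", "continuous testing", "infrastructure on demand",
        "continuous monitoring", "continuous security", "continuous delivery/deployment",
    ]

    factors = [f"{d} x {p}" for d, p in product(dimensions, pillars)]
    print(len(factors))   # 27
    print(factors[:3])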

Continue reading Four short links: 8 July 2019.

Categories: Technology

Four short links: 5 July 2019

O'Reilly Radar - Fri, 2019/07/05 - 06:10

Online Not All Bad, Emotional Space, Ted Chiang, Thread Summaries

  1. How a Video Game Community Filled My Nephew's Final Days with Joy (Guardian) -- you had a rough week. Treat yourself to this heart-warming story of people going the extra mile for someone.
  2. Self-Report Captures 27 Distinct Categories of Emotion Bridged by Continuous Gradients -- Although reported emotional experiences are represented within a semantic space best captured by categorical labels, the boundaries between categories of emotion are fuzzy rather than discrete. By analyzing the distribution of reported emotional states, we uncover gradients of emotion—from anxiety to fear to horror to disgust, calmness to aesthetic appreciation to awe, and others—that correspond to smooth variation in affective dimensions such as valence and dominance. Reported emotional states occupy a complex, high-dimensional categorical space. In addition, our library of videos and an interactive map of the emotional states they elicit are made available to advance the science of emotion. (via Dan Hon)
  3. Sci-Fi Author Ted Chiang on Our Relationship to Technology, Capitalism, and the Threat of Extinction (GQ) -- Right now I think we’re beginning to see a correction to the wild techno-boosterism that Silicon Valley has been selling us for the last couple decades, and that’s a good thing as far as I’m concerned. I wish we didn’t swing back and forth from the extremes of Pollyannaish optimism to dystopian pessimism; I’d prefer it if we had a more measured response throughout, but that doesn’t appear to be in our nature. +1 to this. I don't like the way we have spent 20 years imagining dystopias and then building them.
  4. Wikum -- Summarize large discussion threads.

Continue reading Four short links: 5 July 2019.

Categories: Technology

Four short links: 4 July 2019

O'Reilly Radar - Thu, 2019/07/04 - 06:50

Debugging AI, Serverless Foundations, YouTube Bans, and Pathological UI

  1. TensorWatch -- open sourced by Microsoft, a debugging and visualization tool designed for data science, deep learning, and reinforcement learning.
  2. Formal Foundations of Serverless Computing -- the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem.
  3. YouTube Bans Videos Showing Hacking and Phishing (Kody) -- We made a video about launching fireworks over Wi-Fi for the 4th of July only to find out @YouTube gave us a strike because we teach about hacking, so we can't upload it. YouTube now bans: "Instructional hacking and phishing: Showing users how to bypass secure computer systems."
  4. User Inyerface -- an exercise in frustration.

Continue reading Four short links: 4 July 2019.

Categories: Technology
