Feed aggregator

Notes from the frontier: Making AI work

O'Reilly Radar - Thu, 2018/10/11 - 10:00

Drawing on the McKinsey Global Institute’s research, Michael Chui explores commonly asked questions about AI and its impact on work.

Continue reading Notes from the frontier: Making AI work.

Categories: Technology

How social science research can inform the design of AI systems

O'Reilly Radar - Thu, 2018/10/11 - 04:35

The O’Reilly Data Show Podcast: Jacob Ward on the interplay between psychology, decision-making, and AI systems.

In this episode of the Data Show, I spoke with Jacob Ward, a Berggruen Fellow at Stanford University. Ward has an extensive background in journalism, mainly covering topics in science and technology, at National Geographic, Al Jazeera, Discovery Channel, BBC, Popular Science, and many other outlets. Most recently, he’s become interested in the interplay between research in psychology, decision-making, and AI systems. He’s in the process of writing a book on these topics, and was gracious enough to give an informal preview by way of this podcast conversation.

Continue reading How social science research can inform the design of AI systems.

Categories: Technology

The state of automation technologies

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Ben Lorica and Roger Chen highlight recent trends in data, compute, and machine learning.

Continue reading The state of automation technologies.

Categories: Technology

Trust and transparency of AI for the enterprise

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Ruchir Puri explains why trust and transparency are essential to AI adoption.

Continue reading Trust and transparency of AI for the enterprise.

Categories: Technology

Why we built a self-writing Wikipedia

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Amy Heineike explains how Primer created a self-updating knowledge base that can track factual claims in unstructured text.

Continue reading Why we built a self-writing Wikipedia.

Categories: Technology

Highlights from the Artificial Intelligence Conference in London 2018

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Watch highlights from expert talks covering artificial intelligence, machine learning, automation, and more.

People from across the AI world came together in London for the Artificial Intelligence Conference. Below you'll find links to highlights from the event.

The state of automation technologies

Ben Lorica and Roger Chen highlight recent trends in data, compute, and machine learning.

AI in production: The droids you’re looking for

Jonathan Ballon explains why Intel’s AI and computer vision edge technology will drive advances in machine learning and natural language processing.

AI and machine learning at Amazon

Ian Massingham discusses the application of ML and AI within Amazon, from retail product recommendations to the latest in natural language understanding.

Why we built a self-writing Wikipedia

Amy Heineike explains how Primer created a self-updating knowledge base that can track factual claims in unstructured text.

Trust and transparency of AI for the enterprise

Ruchir Puri explains why trust and transparency are essential to AI adoption.

AI for a better world

Ashok Srivastava draws upon his cross-industry experience to paint an encouraging picture of how AI can solve big problems.

Rethinking software engineering in the AI era

Yangqing Jia talks about what makes AI software unique and its connections to conventional computer science wisdom.

Bringing AI into the enterprise: A functional approach to the technologies of intelligence

Kristian Hammond maps out simple rules, useful metrics, and where AI should live in the org chart.

Fireside chat with Marc Warner and Louis Barson

Marc Warner and Louis Barson discuss the internal and external uses of AI in the UK government.

Building artificial people: Endless possibilities and the dark side

Supasorn Suwajanakorn discusses the possibilities and the dark side of building artificial people.

Deep learning at scale: A field manual

Jason Knight offers an overview of the state of the field for scaling training and inference across distributed systems.

The missing piece

Cassie Kozyrkov shares machine learning lessons learned at Google and explains what they mean for applied data science.

Notes from the frontier: Making AI work

Drawing on the McKinsey Global Institute’s research, Michael Chui explores commonly asked questions about AI and its impact on work.

Continue reading Highlights from the Artificial Intelligence Conference in London 2018.

Categories: Technology

Rethinking software engineering in the AI era

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Yangqing Jia talks about what makes AI software unique and its connections to conventional computer science wisdom.

Continue reading Rethinking software engineering in the AI era.

Categories: Technology

AI and machine learning at Amazon

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Ian Massingham discusses the application of ML and AI within Amazon, from retail product recommendations to the latest in natural language understanding.

Continue reading AI and machine learning at Amazon.

Categories: Technology

AI in production: The droids you’re looking for

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Jonathan Ballon explains why Intel’s AI and computer vision edge technology will drive advances in machine learning and natural language processing.

Continue reading AI in production: The droids you’re looking for.

Categories: Technology

AI for a better world

O'Reilly Radar - Wed, 2018/10/10 - 13:00

Ashok Srivastava draws upon his cross-industry experience to paint an encouraging picture of how AI can solve big problems.

Continue reading AI for a better world.

Categories: Technology

Four short links: 10 October 2018

O'Reilly Radar - Wed, 2018/10/10 - 03:55

Better Education, Do You Need Blockchain?, Visualization Book, and Hiring Coders

  1. Generation of Greatness (Edwin Land) -- eye-wateringly sexist on the surface but (if you replace "boys" with "children" and "men" with "people") an astonishingly forward-thinking piece on education. I'd want to hire graduates of this approach. (via Javier Candero)
  2. Do You Need Blockchain? Flowchart -- from page 42 of the Blockchain Technology Overview report from NIST.
  3. Visualization Analysis and Design (Amazon) -- Tamara Munzner's systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach, encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. (via review)
  4. Assessing Software Engineering Candidates (Bryan Cantrill) -- Joyent's guidance, originally published as a company RFD. While we advocate (and indeed, insist upon) interviews, they should come relatively late in the process; as much assessment as possible should be done by allowing the candidate to show themselves as software engineers truly work: on their own, in writing.

Continue reading Four short links: 10 October 2018.

Categories: Technology

What we learn from AI's biases

O'Reilly Radar - Tue, 2018/10/09 - 04:00

Our bad AI could be the best tool we have for understanding how to be better people.

In "How to Make a Racist AI Without Really Trying," Robyn Speer shows how to build a simple sentiment analysis system, using standard, well-known sources for word embeddings (GloVe and word2vec), and a widely used sentiment lexicon. Her program assigns "negative" sentiment to names and phrases associated with minorities, and "positive" sentiment to names and phrases associated with Europeans. Even a sentence like "Let's go get Mexican food" gets a much lower sentiment score than "Let's go get Italian food." That result isn't surprising, nor are Speer's conclusions: if you take a simplistic approach to sentiment analysis, you shouldn't be surprised when you get a program that embodies racist, discriminatory values. It's possible to minimize algorithmic racism (though possibly not eliminate it entirely), and Speer discusses several strategies for doing so.

I want to look at this problem the other way around. There's something important we can learn from this experiment, and from other examples of AI "gone wrong." AI never "goes wrong" on its own; all of our AI systems are built by humans, and reflect our values and histories.

What does it mean that building an AI system in the simplest possible way produces a racially biased result? I don't think many AI developers would build such systems intentionally. I am willing to believe that many are naive and take free data sources at face value. That is exactly what is happening here: GloVe, a widely used collection of word embeddings, brings a lot of baggage with it, as does word2vec. But, just as programmers are more likely to be naive than evil, I don't think GloVe was built by people trying to perpetuate stereotypes. Its creators just collected English-language samples, and the embeddings are a reflection of language as it is used.

All of which means we're facing a deeper problem. Yes, Speer's naive sentiment analysis is racist, but not because of the algorithm. It's because of the data; and not because the data is wrong, but because the data is right. The data wasn't collected with malice aforethought; it just reflects how we use language. Our use of language is full of racial biases, prejudices, and stereotypes. And while I would not recommend that anyone build and deploy a naive system, I appreciate examples like this because they hold up a mirror to our own usage. If we're willing to listen, they teach us about the biases in our own speech. They're metrics for our own poor performance.

Fairness is, by nature, aspirational: it's forward-looking. We want to be fair; we rarely look at the past and take pride in how fair we were. Data is always retrospective; you can't collect data from the future. Every datum we have reflects some aspect of the past, which means it almost always reflects a history of prejudice and racism, both overt and covert. Our language is likely to be a better metric for our attitudes than any public opinion poll. Nobody thinks they are a racist, but our language says otherwise, and our algorithms reflect that.

We can (and we need to) analyze almost every example of algorithmic unfairness in this way. COMPAS, the tool for recommending bail and jail sentences, reflects a history of law enforcement that has fallen much more heavily on minorities. Minorities don't often get second chances; they don't get policemen who look the other way after saying "aw, he's basically a good kid" or "don't let me catch you doing that again." Poor urban neighborhoods get labeled "high risk zones," though if you look at a map of white-collar crime, you'll see something quite different. While COMPAS is a bad tool in the courtroom, it's an excellent tool for understanding the reality of how law enforcement works, and it's unfortunate it hasn't been used that way. (It might also be less unfair than predominantly white judges and juries, but that's another question.) Many of the problems around face recognition for dark-skinned people arise because cameras have long been designed to optimize for light skin tones. That's less a reflection on our technical capabilities than our cultural priorities. Amazon's initial same-day delivery service, which excluded heavily black and Hispanic neighborhoods, doesn't reflect some evil intent; it reflects a long history of redlining and other practices that forced minorities into ghettos. Exclusion jumped out of the data, and it's important to understand the histories that gave us that data.

When you get to the bottom of it, these aren't problems with the algorithms, or even with the data; they're problems with the ultimate source of the data, and that's our own actions. If we want better AI, we must be better people. And some of our bad AI could be the best tool we have for understanding how to be better people.

Continue reading What we learn from AI's biases.

Categories: Technology

Four short links: 9 October 2018

O'Reilly Radar - Tue, 2018/10/09 - 03:55

Lost Lessons, Metaphors to Monads, Future of Work, and Awesome Starts at The Top

  1. Neither Paper Nor Digital Does Reading Well -- Develop a familiarity with, for example, Alan Kay’s or Douglas Engelbart’s visions for the future of computing and you are guaranteed to become thoroughly dissatisfied with the limitations of every modern OS. Reading up on hypertext theory and research, especially on hypertext as a medium, is a recipe for becoming annoyed at The Web. Catching up on usability research throughout the years makes you want to smash your laptop against the wall in anger. And trying to fill out forms online makes you scream "it doesn’t have to be this way!" at the top of your lungs. That software development doesn’t deal with research or attempts to get at hard facts is endemic to the industry. (via Daniel Siegel)
  2. The Unreasonable Effectiveness of Metaphor (YouTube) -- Julia Moronuki, author of Haskell from First Principles, sneaks up on the idea of monads by starting with how linguists and cognitive scientists understand metaphors. (via @somegoob)
  3. World Development Report 2019: The Changing Nature of Work -- In countries with the lowest human capital investments today, our analysis suggests that the workforce of the future will only be one-third to one-half as productive as it could be if people enjoyed full health and received a high-quality education.
  4. Chairman of Nokia Learned Deep Learning -- I realized that as a long-time CEO and chairman, I had fallen into the trap of being defined by my role: I had grown accustomed to having things explained to me. Instead of trying to figure out the nuts and bolts of a seemingly complicated technology, I had gotten used to someone else doing the heavy lifting. [...] After a quick internet search, I found Andrew Ng’s courses on Coursera, an online learning platform. Andrew turned out to be a great teacher who genuinely wants people to learn. I had a lot of fun getting reacquainted with programming after a break of nearly 20 years. Once I completed the first course on machine learning, I continued with two specialized follow-up courses on deep learning and another course focusing on convolutional neural networks, which are most commonly applied to analyzing visual imagery. Yow.

Continue reading Four short links: 9 October 2018.

Categories: Technology

Meeting info for 10/11

PLUG - Mon, 2018/10/08 - 11:35
This month we will get an introduction to the command line from Ryan Hermens. This would be a good meeting to bring friends and family who have an interest in Linux.

Ryan Hermens: Command Line Tools Seminar

Description:
This seminar will be an interactive presentation on core command line tools. It is aimed at beginners, to help users make the jump from GUI tools to the power of the command line. Subjects covered include package managers, Linux permissions, users and groups, navigating the file system, creating and editing files, background and foreground job management, Linux signals, and more. To get the most out of this seminar, please come with a laptop with Linux already installed.

Biography:
Ryan has seven years of experience doing DevOps work, ranging from full-stack development to cybersecurity operations. He has worked for Microchip, Intel, and Charles Schwab. Currently, he works at Truveris as lead security engineer. He holds degrees in both Computer Science and Computer Systems Engineering from Arizona State University, graduating magna cum laude.

Four short links: 8 October 2018

O'Reilly Radar - Mon, 2018/10/08 - 04:10

Stripe Stats, Worker Ethics, FPGA Futures, and Internet Archive Stats

  1. The Story of Stripe (Wired UK) -- Over the past year, 65% of UK internet users and 80% of U.S. users have bought something from a Stripe-powered business.
  2. Tech Workers Want to Know: What Are We Building This For? (NYT) -- about time. I see plenty of places mandating that their young kids are taught coding. Who's mandating that their coders take ethics classes so they have the ability to think critically about the applications of what they develop?
  3. Inference: The Future of FPGA (Next Platform) -- Inference, which is almost exclusively run on Xeon servers in the data center these days, therefore represents maybe 1% of the workload in the server installed base and has driven a little less than 1% of the server spending, by our math. [...] But as organizations figure out how to use machine learning frameworks to build neural networks and then algorithms that they embed into their applications, there will be a lot more inference going on, and this will become a representative workload driving a lot of chip revenues.
  4. Internet Archive Stats -- 22PB of content, growing 4PB/year; four million books; 200 million hours of broadcast news; 300,000 playable classic video games; 1.5 billion pages crawled per week; 200 staffers.

Continue reading Four short links: 8 October 2018.

Categories: Technology

Four short links: 5 October 2018

O'Reilly Radar - Fri, 2018/10/05 - 03:55

Supply Chain Security, ML in FB Marketplace, Datasette Ideas, and Scraper DSL

  1. Motherboard Supply Chain Compromise (Bloomberg) -- fascinating story of Chinese compromise of SuperMicro motherboards, causing headaches for AWS, Apple, and the U.S. military, among many others. See also tech for spotting these things and some sanity checking on the article's claims.
  2. How Facebook Marketplace Uses Machine Learning -- nice. It's increasingly clear there's not much that's user-facing that can't benefit from machine learning to prompt, augment, and check user input.
  3. Interesting Ideas in Datasette (Simon Willison) -- solid technical reflection on non-obvious approaches and techniques in his project.
  4. Ferret -- interesting approach: a DSL for writing web scrapers.

Continue reading Four short links: 5 October 2018.

Categories: Technology

Four short links: 4 October 2018

O'Reilly Radar - Thu, 2018/10/04 - 03:20

Autonomy and UI, Replicating ML Research, FPGA Dev, and Standard Notes

  1. UI for Self-Driving Cars -- I'd never thought about it, but Ford has: how does a self-driving car signal its intentions to humans (and/or other autonomous vehicles around)? Through our testing, we believe these signals have the chance to become an accepted visual language that helps address an important societal issue in how self-driving vehicles interact with humans.
  2. Reproducing Machine Learning Research -- there's good news—reproducibility breaks down in three main places: the code, the data, and the environment. I’ve put together this guide to help you narrow down where your reproducibility problems are, so you can focus on fixing them. (A minimal sketch addressing the code and environment pieces follows this list.)
  3. Open Source FPGA Dev Guide -- in case you've been curious about kicking the tires. (Yes, I know FPGAs don't have tires, please don't write in.)
  4. Standard Notes -- what to use if you're nervous about entrusting your data to someone else's product roadmap (EverNote or OneNote or Keep). Free, open source, and completely encrypted. Ticks all the boxes: 2FA, automated backups to cloud storage, versioning, cross-platform (Mac, Windows, iOS, Android, Linux), offline access...
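The sketch referenced in item 2 above: a minimal, illustrative example of pinning the "code" and "environment" legs of reproducibility. The function names and the environment.json output file are placeholders, not anything prescribed by the linked guide.

```python
# Minimal sketch: pin obvious sources of nondeterminism and record the
# environment so a result can be matched later. Names are illustrative.
import json
import os
import platform
import random

import numpy as np


def set_reproducible(seed: int = 42) -> None:
    """Seed the common random number generators used in ML code."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)


def snapshot_environment(path: str = "environment.json") -> None:
    """Write interpreter, platform, and library versions next to the results."""
    info = {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "numpy": np.__version__,
    }
    with open(path, "w") as f:
        json.dump(info, f, indent=2)


if __name__ == "__main__":
    set_reproducible(42)
    snapshot_environment()
```

The data leg is harder to show in a few lines, but the same idea applies: record exactly which version of the dataset a result was produced from.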

Continue reading Four short links: 4 October 2018.

Categories: Technology

Experts reveal infrastructure trends to watch

O'Reilly Radar - Thu, 2018/10/04 - 03:00

A new report examines the state of infrastructure and anticipated near-term developments through the eyes of infrastructure experts.

Trends like server virtualization, containers, serverless, and hardware abstraction are shifting the infrastructure landscape. Functions as a service (FaaS) and infrastructure as a utility are also gaining traction. These changes mean infrastructure experts and the organizations that employ them must evolve as the industry evolves.

With that in mind, O’Reilly recently examined the state of infrastructure and anticipated near-term developments through the eyes of infrastructure experts. In the resulting free report, “Infrastructure Now 2018,” the collected insights from these experts highlight what matters now and what's around the corner.

Takeaways from the report include:

  • Democratization and standardization—while ensuring security—are key to successfully keeping pace with evolving infrastructure. Whether you’re building tools or choosing new technology to work into your platform, the tools must be accessible to a wide range of skill sets, compatible with (or easily ingested into) existing systems, and cost effective.
  • Reducing complexity is the overwhelming trend expected in the next 10 years: from containers and serverless, to cloud services, to “easily composable business applications,” the infrastructure-as-a-service (IaaS) movement is expected to continue and expand.
  • Evolving infrastructure and the trend toward abstraction are going to require changes in roles for people in DevOps, Site Reliability Engineering (SRE), and operations positions. This shift is largely looked upon with optimism, but the experts anticipate a move away from specialized positions toward a need for generalists and full-stack engineers.
  • Not everyone interviewed for the report agreed, but it appears legacy infrastructure is here to stay, and new legacy infrastructure woes are anticipated. One expert predicts “a spaghetti ball of interconnected microservices,” and another pointed out that “everything becomes legacy as soon as it hits production.”
  • Improvements in containers and serverless technology top the list of expectations for the next 12 months. Some experts are already seeing signs that infrastructure as a utility is imminent.

For more on these topics and other key infrastructure issues, download the full report.

Continue reading Experts reveal infrastructure trends to watch.

Categories: Technology

Practical ethics

O'Reilly Radar - Wed, 2018/10/03 - 13:00

Laura Thomson shares Mozilla’s approach to data ethics, review, and stewardship.

Continue reading Practical ethics.

Categories: Technology

O’Reilly Radar: Systems engineering tool trends

O'Reilly Radar - Wed, 2018/10/03 - 13:00

Roger Magoulas shares insights from O'Reilly's online learning platform that point toward shifts in the systems engineering ecosystem.

Continue reading O’Reilly Radar: Systems engineering tool trends.

Categories: Technology
