O'Reilly Artificial Intelligence Newsletter

1. Uber wants to know: How much will you pay?

Uber drivers have been complaining that the gap between the fare a rider pays and what the driver receives is getting wider. Here's why: Uber has been testing a new pricing system that charges riders different amounts, based on what their machine learning models say the riders are willing to shell out. (via Intelligent Bits)

2. A new key to the Tower of Babel

Ninety-five percent of the world's population communicates using just 100 languages. Unfortunately, the other 5% speak about 6,900 different languages. Nearly a third of the languages on earth are in danger of dying out, and online translation services work for fewer than 100 of the 7,000 languages that exist. But a recent machine translation technique could change that.

3. Is AI color-blind?

You were expecting another ethics article, weren't you? Nope. We're talking paint chips. Janelle Shane trained a neural network on about 7,700 Sherwin-Williams paint color names (along with their RGB values) to generate new paint colors—and name them. The results were perhaps more entertaining than inspirational.

+ Didn't like Stanky Bean? In Part 2, Janelle changed the temperature setting for more charming results (Baterswort, Queen Slime, or Furgy Brown, for example).
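The "temperature" knob Janelle adjusted controls how adventurous a generative model's sampling is. A minimal sketch of the idea (the function name and example logits are our own, not from her code): logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward safe, likely choices, while high temperatures flatten it toward weirder ones.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Rescale logits by temperature, then sample from the softmax.

    Low temperature -> distribution sharpens toward the most likely
    character (blander names); high temperature -> distribution
    flattens (stranger names, like "Stanky Bean").
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the tempered probabilities
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

At a temperature near zero this effectively always picks the highest-scoring character; crank it up and the model starts taking chances.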

4. Real-time face detection

5. Why AI and healthcare must play together

"Some technologists and data scientists believe, ardently, that the future is already here–that we already have the ability to use AI to solve important problems in healthcare, and that it’s intransigent docs who are standing in the way." But not all health professionals are excited by AI, or even convinced of its value. David Shaywitz explores what it will take to bring these viewpoints together.
In collaboration with Intel Saffron

Transparency in AI decision making

State-of-the-art machine learning techniques offer extraordinary performance on everything from text analysis to feature classification, but they often function as "black boxes" that obscure their decision-making. Join Andy Hickl in a free 60-minute webcast to explore the problem of interpretability in AI and address techniques for building transparent artificial intelligence applications with explainable outcomes.
Thursday, June 22 | 10am PT
Learn more →

6. Curiosity killed the cat. But it may save AI.

Curiosity leads children to explore the world around them without specific goals in mind and use what they discover to help them understand it. But algorithms aren't inherently curious. So how do you train AI to explore without specific rewards? In this paper, researchers describe training AI to "predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model" and then recognize and be "rewarded" by its own failure to do so. In VizDoom and Super Mario Bros., of course.
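The core of that quoted idea reduces to a simple reward signal: a forward model predicts the features of the next state, and the agent is paid in proportion to how wrong the prediction is. A minimal sketch, assuming feature vectors are plain lists of floats (the function name and the scale parameter are illustrative, not from the paper's code):

```python
def intrinsic_reward(predicted_features, actual_features, scale=0.5):
    """Curiosity bonus: reward the agent for its own surprise.

    A learned forward model predicts the next state's features from
    the current features and the chosen action. The squared prediction
    error becomes an intrinsic reward, pushing the agent toward states
    it cannot yet predict—i.e., novel ones.
    """
    err = sum((p - a) ** 2 for p, a in zip(predicted_features, actual_features))
    return scale * err
```

Familiar, well-modeled states yield near-zero bonus; unexplored corners of the level pay out, so the agent wanders toward them even without an external score.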

7. Meet the scientists who teach ethics to machines

How do you teach an AI ethics? You can't preprogram rules for responding to every possible scenario. "Imagine if the US founders had frozen their values, allowing slavery, fewer rights for women, and so forth? Ultimately, we want a machine able to learn for itself," said Gary Marcus, a cognitive scientist at NYU. Instilling artificial guilt through trial and error, letting bad outcomes teach the lesson, is slow (and dangerous). Can you train an AI to understand the moral of the story? Here is a long, but fascinating, look at the scientists trying to teach ethics to AI.

8. Sketch-RNN model released in Magenta

Sketch-RNN, a generative model for vector drawings, is now available in Magenta. It comes with 50,000 drawings as training data. There's also a Jupyter Notebook tutorial.

What is Jana Eggers looking forward to?

Jana Eggers, President of Nara Logics, explains which session at the upcoming O'Reilly AI Conference in San Francisco she is anticipating most.

“I've often said if I wasn't in AI now, I'd be in security. Aaron Goldstein's talk on combating cyber threats with AI lets me live in both worlds. With our reliance on technology continuing to increase, this "in the trenches" talk is at the top of my list.”

What session will you find the most intriguing at AI SF (Sept 17–20)? Find out here.

9. How to automate the design of ML models

In this blog post, Google researchers look at ways to automate the design of machine learning models. They've found evolutionary algorithms and reinforcement learning algorithms to have great promise, but here they focus on the early results from the latter.

10. Fake news and AI voice cloning: BFF

Canadian startup Lyrebird has announced that it has developed a platform capable of mimicking a human voice with a fraction of the audio samples required by other platforms, such as Google DeepMind and Adobe Project VoCo—in fact, 60 seconds will suffice. What could possibly go wrong?
Read more about AI at oreilly.com →