N-Shot Learning: Learning More with Less Data


To approach a problem this complex, we first need to define it clearly. In N-shot learning, we have N labeled examples for each of K classes, i.e., N∗K total examples, which we call the support set S. We must then classify the examples of a query set Q, each of which belongs to one of the K classes. N-shot learning has three major sub-fields: zero-shot learning, one-shot learning, and few-shot learning, each of which deserves individual attention.
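As a concrete illustration, sampling one such episode (a support set S and a query set Q) from a labeled dataset could be sketched as follows; `make_episode` and its parameter names are hypothetical helpers, not part of any particular library:

```python
import random

def make_episode(dataset, k_classes, n_shot, n_query):
    """Sample one N-shot episode: a support set S with N labeled examples
    for each of K classes, plus a query set Q to be classified."""
    classes = random.sample(sorted(dataset), k_classes)
    support, query = [], []
    for label in classes:
        examples = random.sample(dataset[label], n_shot + n_query)
        support += [(x, label) for x in examples[:n_shot]]  # N examples per class
        query += [(x, label) for x in examples[n_shot:]]    # held out for evaluation
    return support, query
```

With K = 3 classes and N = 2 shots, the support set contains N∗K = 6 labeled examples, matching the definition above.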

Baidu’s ERNIE 2.0


ERNIE 2.0 is a continual pre-training framework. Continual learning aims to train a model on several tasks in sequence so that it remembers the previously learned tasks while learning new ones.
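Schematically, the sequential multi-task idea can be sketched like this (a toy sketch, not Baidu's actual implementation; `optimizer_step` and `sample_batch` are assumed placeholders):

```python
def continual_pretrain(model, tasks, steps_per_task, optimizer_step):
    """Introduce pre-training tasks one at a time, but keep taking update
    steps on every task seen so far, so earlier tasks are not forgotten."""
    seen = []
    for task in tasks:
        seen.append(task)
        for _ in range(steps_per_task):
            for t in seen:  # rehearse all previously introduced tasks
                optimizer_step(model, t.sample_batch())
    return model
```

The key design choice is that each new task is trained jointly with the ones already introduced, rather than replacing them, which is what lets the model accumulate knowledge across tasks.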

Plato: A Flexible Conversational AI Platform


At Uber AI, we developed the Plato Research Dialogue System, a platform for building, training, and deploying conversational AI agents. Plato lets us conduct state-of-the-art research in conversational AI, quickly create prototypes and demonstration systems, and facilitate conversational data collection. We designed it for both users with a limited background in conversational AI and seasoned researchers in the field: it provides a clean, understandable design, integrates with existing deep learning and Bayesian optimization frameworks (for tuning the models), and reduces the need to write code.

A Dataset for Self-Driving!


Hey researchers! We’ve released a self-driving dataset and we’ll be launching a competition to see what you can do with it. Hear from Luc Vincent, our EVP of Autonomous Technology, about what this means for the industry.

https://t.co/V4dfjhFwk6

Say hello to nodes.io!


We use it to create interactive installations, visualise data, experiment with generative algorithms, build custom design tools and more.

Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play


If you’re looking to play with Generative Modelling, David Foster’s book is 🔑. You’ll learn to build state-of-the-art deep learning models that can paint, write beautiful prose, compose music and play games.

Convolutional Reservoir Computing for World Models


While many RL models achieve high performance, most demand high computational cost and significant training time. Using reservoir computing, anyone can quickly train models at much lower computational cost and, importantly, still build highly accurate ones.
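The core idea can be sketched with a simple echo-state-style reservoir (the paper uses convolutional reservoirs; this 1-D NumPy version with made-up sizes just shows why training is cheap: the recurrent weights are fixed and random, and only a linear readout on the collected states would be trained):

```python
import numpy as np

def reservoir_states(inputs, n_reservoir=100, spectral_radius=0.9, seed=0):
    """Drive a fixed random recurrent network with the inputs and collect
    its states; these become features for a cheap trainable readout."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, inputs.shape[1]))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    # Rescale so the largest eigenvalue magnitude equals spectral_radius (< 1),
    # which keeps the reservoir dynamics stable (the echo-state property).
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)  # no gradients: these weights are never trained
        states[t] = x
    return states
```

Only the readout (e.g. a linear regression from `states` to targets) needs fitting, which is why training is fast compared with backpropagating through a full RNN.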

What happens when we train RNNs to model fonts as vector (SVG) drawings?


We train a powerful generative model of fonts as SVG instead of pixels. This highly structured format enables manipulation of font styles and style transfer between characters at arbitrary scales!

Survey on Commonsense Reasoning in NLP


Commonsense knowledge and commonsense reasoning are some of the main bottlenecks in machine intelligence. In the NLP community, many benchmark datasets and tasks have been created to address commonsense reasoning for language understanding. These tasks are designed to assess machines’ ability to acquire and learn commonsense knowledge in order to reason and understand natural language text. As these tasks become instrumental and a driving force for commonsense research, this paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding. Through this, our goal is to support a better understanding of the state of the art, its limitations, and future challenges.

State of AI - 2019


A bit late to the party here, but the “State of AI Report 2019” (link: https://www.stateof.ai/) is a nice, ambitious attempt at summarizing AI. The “research” section leans a bit too heavily on RL and a bit too little on vision. Interesting that vision is patented so much more than other areas (p. 85).