Spinning-Up-Basic

Basic versions of agents from Spinning Up in Deep RL, written in PyTorch. Designed to run quickly on CPU on the Pendulum-v0 environment from OpenAI Gym.
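For context, a minimal sketch of the environment loop such agents plug into, using the pre-0.26 Gym API that Pendulum-v0 shipped with (the random policy is a placeholder, not one of the repo's agents):

    import gym

    env = gym.make("Pendulum-v0")
    obs = env.reset()
    episode_return = 0.0
    for _ in range(200):  # Pendulum-v0 episodes are capped at 200 steps
        action = env.action_space.sample()  # placeholder for a trained policy
        obs, reward, done, _ = env.step(action)
        episode_return += reward
        if done:
            break
    env.close()
    print("episode return:", episode_return)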

Superposition of many models into one

It turns out you can fit a lot more than one model in one set of parameters…and train them independently.

Probabilistic NAS

In neural architecture search (NAS), the space of neural network architectures is automatically explored to maximize predictive accuracy for a given task. Despite the success of recent approaches, most existing methods cannot be directly applied to large-scale problems because of their prohibitive computational complexity or high memory usage.

Introducing PlaNet: A Deep Planning Network for RL

Research into how artificial agents can improve their decisions over time is progressing rapidly via reinforcement learning (RL). For this technique, an agent observes a stream of sensory inputs (e.g. camera images) while choosing actions (e.g. motor commands), and sometimes receives a reward for achieving a specified goal. Model-free approaches to RL aim to directly predict good actions from the sensory observations, enabling DeepMind’s DQN to play Atari and other agents to control robots. However, this black-box approach often requires several weeks of simulated interaction to learn through trial and error, limiting its usefulness in practice.

How powerful are Graph Neural Networks?

Classical ML tasks in networks:
  • Node classification: predict the type of a given node
  • Link prediction: predict whether two nodes are linked (see the sketch below)
  • Community detection: identify densely linked clusters of nodes
  • Network similarity: measure how similar two (sub)networks are
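As a toy illustration of the link-prediction task above (a classical neighborhood-overlap baseline, not a GNN; the use of networkx and the karate-club graph are this sketch's assumptions):

    import networkx as nx

    # Score candidate node pairs by the Jaccard similarity of their
    # neighborhoods: pairs sharing many neighbors are more likely linked.
    G = nx.karate_club_graph()
    candidates = [(0, 33), (5, 24), (2, 8)]
    for u, v, score in nx.jaccard_coefficient(G, candidates):
        print(f"({u}, {v}) -> {score:.3f}")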

Better Language Models and Their Implications

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.

How to Develop Word Embeddings in Python with Gensim

Word embeddings are a modern approach for representing text in natural language processing.

Word embedding algorithms like word2vec and GloVe are key to the state-of-the-art results achieved by neural network models on natural language processing problems like machine translation.
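A minimal sketch of the Gensim workflow the tutorial walks through (the toy corpus is invented; the size argument follows Gensim 3.x, which later releases renamed to vector_size):

    from gensim.models import Word2Vec

    # Gensim takes a corpus as a list of tokenized sentences.
    sentences = [
        ["machine", "translation", "needs", "word", "embeddings"],
        ["word2vec", "learns", "dense", "word", "vectors"],
        ["glove", "also", "produces", "word", "embeddings"],
    ]

    # size: embedding dimensionality; window: context width;
    # min_count=1 keeps every token in this tiny corpus.
    model = Word2Vec(sentences, size=100, window=5, min_count=1, workers=1)

    print(model.wv["word"].shape)             # (100,)
    print(model.wv.most_similar("word", topn=3))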

Effective TF 2.0

To take a closer look at what’s changed, and to learn about best practices, check out the new Effective TensorFlow 2.0 guide (published on GitHub). This article provides a quick summary of the content you’ll find there. If any of these topics interest you, head to the guide to learn more!
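As a quick hedged sketch of the two headline changes the guide covers, eager execution by default and tf.function for graph-compiled code (the small function below is an invented example):

    import tensorflow as tf

    # Eager by default: ops run immediately, with no Session or graph setup.
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(x + x)  # concrete values, printed right away

    # tf.function traces the Python function into a graph for speed.
    @tf.function
    def dense_relu(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    w = tf.random.normal([2, 3])
    b = tf.zeros([3])
    print(dense_relu(x, w, b))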

Spektral: a library for doing DL on graph data in Keras

Spektral is a framework for relational representation learning, built in Python on top of the Keras API. The main purpose of the project is to provide a simple, fast, and scalable environment for experimentation.
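A minimal single-graph node-classification sketch in the Keras functional style Spektral targets (layer and argument names follow later Spektral releases, where the GCN layer is GCNConv; early versions named it GraphConv, so treat the exact API as an assumption):

    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model
    from spektral.layers import GCNConv  # named GraphConv in early releases

    N, F = 34, 8  # toy sizes: number of nodes and per-node features

    x_in = Input(shape=(F,))               # node feature matrix (N x F)
    a_in = Input(shape=(N,), sparse=True)  # adjacency matrix (N x N)

    # Two graph convolutions, ending in a per-node 4-way classifier.
    h = GCNConv(16, activation="relu")([x_in, a_in])
    out = GCNConv(4, activation="softmax")([h, a_in])

    model = Model(inputs=[x_in, a_in], outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    model.summary()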

Ludwig: Code-Free Deep Learning Toolbox by Uber AI

  • based on #TensorFlow
  • data type-based approach (see the sketch after this list)
  • extensive control over model building and training, if needed
  • easy to add new models and data types
  • visualizations to understand performance
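A hedged sketch of what the data-type-based approach looks like through Ludwig's Python API (the column names and reviews.csv file are invented; the argument names follow the early 0.1 API, which later releases renamed):

    from ludwig.api import LudwigModel

    # Instead of wiring layers, you declare each column and its data type;
    # Ludwig assembles a TensorFlow model from the declaration.
    model_definition = {
        "input_features": [{"name": "review_text", "type": "text"}],
        "output_features": [{"name": "sentiment", "type": "category"}],
    }

    model = LudwigModel(model_definition)
    train_stats = model.train(data_csv="reviews.csv")   # hypothetical data
    predictions = model.predict(data_csv="reviews.csv")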