Self-Tuning Networks


Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters. We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases.
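As a rough illustration, here is a minimal PyTorch sketch (the names and the single log-weight-decay hyperparameter are illustrative, not the paper's code): the best-response is approximated as an affine function of the hyperparameter and fit on the training loss, while the hyperparameter itself descends the validation loss through that approximation.

```python
import torch
import torch.nn as nn

class BestResponseLinear(nn.Module):
    """A linear layer whose weights are an affine function of hyperparameters."""
    def __init__(self, in_dim, out_dim, n_hparams):
        super().__init__()
        self.w0 = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)       # base weights
        self.v = nn.Parameter(torch.zeros(out_dim, in_dim, n_hparams))    # response directions
        self.b = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, lam):
        # lam: (n_hparams,) vector of transformed regularization hyperparameters
        w = self.w0 + self.v @ lam   # approximate best-response weights w_hat(lam)
        return x @ w.t() + self.b

# Alternating optimization: fit (w0, v, b) on the training loss around a
# perturbed hyperparameter, then update the hyperparameter on validation loss.
layer = BestResponseLinear(10, 1, n_hparams=1)
lam = torch.zeros(1, requires_grad=True)   # e.g. log weight-decay
opt_w = torch.optim.Adam(layer.parameters(), lr=1e-2)
opt_lam = torch.optim.Adam([lam], lr=1e-2)

x_tr, y_tr = torch.randn(64, 10), torch.randn(64, 1)
x_va, y_va = torch.randn(64, 10), torch.randn(64, 1)

for step in range(200):
    # Inner step: train the best-response approximation at a perturbed lam.
    eps = 0.1 * torch.randn(1)
    pred = layer(x_tr, (lam + eps).detach())
    train_loss = ((pred - y_tr) ** 2).mean() \
        + lam.detach().exp().squeeze() * layer.w0.pow(2).sum()
    opt_w.zero_grad(); train_loss.backward(); opt_w.step()

    # Outer step: move the hyperparameter to reduce validation loss
    # through the fitted best-response.
    val_loss = ((layer(x_va, lam) - y_va) ** 2).mean()
    opt_lam.zero_grad(); val_loss.backward(); opt_lam.step()
```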

Quasi-Recurrent Neural Networks


Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences.
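A hedged PyTorch sketch of the quasi-recurrent idea (simplified from the paper's layer; names are illustrative): the heavy computation is a causal convolution applied to all timesteps in parallel, leaving only cheap element-wise gating as the sequential part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QRNNLayer(nn.Module):
    def __init__(self, in_dim, hid_dim, kernel=2):
        super().__init__()
        self.kernel = kernel
        # One convolution producing candidate z, forget gate f, output gate o.
        self.conv = nn.Conv1d(in_dim, 3 * hid_dim, kernel)

    def forward(self, x):
        # x: (batch, time, in_dim) -> conv expects (batch, channels, time)
        x = x.transpose(1, 2)
        x = F.pad(x, (self.kernel - 1, 0))       # causal left-padding
        z, f, o = self.conv(x).chunk(3, dim=1)   # all timesteps in parallel
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)

        # Element-wise "fo-pooling" recurrence (the only sequential part):
        # c_t = f_t * c_{t-1} + (1 - f_t) * z_t ;  h_t = o_t * c_t
        c = torch.zeros_like(z[:, :, 0])
        hs = []
        for t in range(z.size(2)):
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(o[:, :, t] * c)
        return torch.stack(hs, dim=1)            # (batch, time, hid_dim)

h = QRNNLayer(32, 64)(torch.randn(8, 100, 32))  # -> (8, 100, 64)
```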

https://ai.googleblog.com/2019/03/rnn-based-handwriting-recognition-in.html

DeepFashion2


DeepFashion2 is a comprehensive fashion dataset. It contains 491K diverse images of 13 popular clothing categories from both commercial shopping stores and consumers. In total it has 801K clothing items, where each item in an image is labeled with scale, occlusion, zoom-in, viewpoint, category, style, bounding box, dense landmarks, and per-pixel mask. There are also 873K commercial-consumer clothes pairs. The dataset is split into a training set (391K images), a validation set (34K images), and a test set (67K images).

https://github.com/switchablenorms/DeepFashion2
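For illustration, a small loader sketch for the per-image JSON annotations; the field names below are assumptions based on the repository's README and should be verified against the actual files.

```python
import json
from pathlib import Path

def load_items(annotation_path):
    """Read one DeepFashion2-style per-image annotation file (assumed schema)."""
    anno = json.loads(Path(annotation_path).read_text())
    items = []
    for key, item in anno.items():
        if not key.startswith("item"):
            continue  # skip top-level metadata such as "source" / "pair_id"
        items.append({
            "category": item["category_name"],
            "style": item["style"],
            "bbox": item["bounding_box"],      # [x1, y1, x2, y2] (assumed)
            "landmarks": item["landmarks"],    # flattened [x, y, visibility, ...]
            "scale": item["scale"],
            "occlusion": item["occlusion"],
            "zoom_in": item["zoom_in"],
            "viewpoint": item["viewpoint"],
            "mask_polygons": item["segmentation"],
        })
    return items
```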

Concurrent Meta RL


State-of-the-art meta reinforcement learning algorithms typically assume the setting of a single agent interacting with its environment in a sequential manner. A negative side-effect of this sequential execution paradigm is that, as the environment becomes more and more challenging and thus requires more interaction episodes for the meta-learner, the agent must reason over longer and longer time-scales.
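A hedged sketch of the alternative the title points at: adaptation episodes gathered concurrently by several workers instead of one after another (`make_env` and `run_episode` here are hypothetical placeholders, not the paper's API).

```python
from concurrent.futures import ThreadPoolExecutor

def run_episode(env, policy):
    # Hypothetical environment interface: reset() -> obs, step() -> (obs, reward, done).
    obs, done, trajectory = env.reset(), False, []
    while not done:
        action = policy(obs)
        obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
    return trajectory

def collect_concurrently(make_env, policy, n_workers=8):
    envs = [make_env() for _ in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as pool:
        # All episodes run at once, so wall-clock adaptation time no longer
        # grows linearly with the number of episodes the meta-learner needs.
        return list(pool.map(lambda e: run_episode(e, policy), envs))
```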

Introducing TF Privacy


Today, we’re excited to announce TensorFlow Privacy (GitHub), an open source library that makes it easier not only for developers to train machine-learning models with privacy, but also for researchers to advance the state of the art in machine learning with strong privacy guarantees.
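A minimal sketch of the kind of training the library enables, using its DP-SGD Keras optimizer (the import path and hyperparameter values are illustrative and may differ between releases): DP-SGD clips each example's gradient and adds calibrated Gaussian noise before the update.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # per-example gradient clipping norm
    noise_multiplier=1.1,    # ratio of noise stddev to clipping norm
    num_microbatches=32,     # gradients are clipped per microbatch
    learning_rate=0.1,
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10),
])

# The loss must be unreduced so the optimizer can clip per-microbatch gradients.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```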

Xception: Deep Learning with Depthwise Separable Convolutions


We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers.
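A minimal PyTorch sketch of the operation the abstract refers to: a depthwise convolution (one spatial filter per input channel, via `groups=in_ch`) followed by a 1x1 pointwise convolution that mixes channels.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3, padding=1):
        super().__init__()
        # Depthwise: each input channel convolved with its own spatial filter.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel, padding=padding, groups=in_ch)
        # Pointwise: 1x1 convolution that mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A 3x3 separable conv uses in_ch*3*3 + in_ch*out_ch weights instead of
# in_ch*out_ch*3*3 for a regular 3x3 convolution.
y = SeparableConv2d(64, 128)(torch.randn(1, 64, 32, 32))  # -> (1, 128, 32, 32)
```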

Neural MMO


We’re releasing Neural MMO, a massively multiagent game environment that supports numerous populations of agents.

Exploring Neural Networks with Activation Atlases


Feature visualizations let us see through the eyes of a neural network, making its hidden layers fascinating to inspect.
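A hedged PyTorch sketch of the basic feature-visualization step behind such tools: optimize an input image to maximize one channel's mean activation. Real visualizations add regularizers and image transformations, omitted here; the layer and channel choices are arbitrary, and the `weights` argument assumes a recent torchvision.

```python
import torch
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)   # only the input image is optimized

layer_idx, channel = 10, 42   # illustrative choices
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    acts = img
    for i, module in enumerate(model.features):
        acts = module(acts)
        if i == layer_idx:
            break
    loss = -acts[0, channel].mean()   # ascend the channel's activation
    opt.zero_grad(); loss.backward(); opt.step()
```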

What Are the Computations of the Brain?


Different parts of the brain learn in different ways. Model-free, model-based, and memory-based learning are all just different sides of the same coin, according to Kenji Doya.


TF Extended (TFX) Open-Sourced


TFX is a Google-production-scale ML platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your ML system.
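A hedged sketch of what such a configuration looks like with the modern `tfx.v1` API (the 2019 release used different import paths; the data path and trainer module file are illustrative):

```python
from tfx import v1 as tfx

# Wire a few standard components into a pipeline: ingest CSV data,
# compute statistics, and train a model from user-provided trainer code.
example_gen = tfx.components.CsvExampleGen(input_base="data/")
statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs["examples"])
trainer = tfx.components.Trainer(
    module_file="trainer_module.py",  # user code defining run_fn (assumed to exist)
    examples=example_gen.outputs["examples"],
    train_args=tfx.proto.TrainArgs(num_steps=1000),
    eval_args=tfx.proto.EvalArgs(num_steps=100),
)

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo",
    pipeline_root="pipeline_root/",
    components=[example_gen, statistics_gen, trainer],
)
tfx.orchestration.LocalDagRunner().run(pipeline)
```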