10 Apr 2019, Prathyush SP
  
Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labeled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and unconditional GANs.
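The self-supervision paired with adversarial training here is typically an auxiliary rotation-prediction task for the discriminator. A minimal PyTorch sketch of that idea, assuming an illustrative discriminator `d` with two heads (an adversarial logit and rotation logits) and a placeholder `ss_weight`; this is a sketch of the technique, not the paper's code:

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Return the batch rotated by 0/90/180/270 degrees, plus rotation labels."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rots, dim=0), labels

def discriminator_loss(d, real, fake, ss_weight=1.0):
    # Standard non-saturating GAN terms on the adversarial head.
    adv = F.softplus(-d(real)[0]).mean() + F.softplus(d(fake)[0]).mean()
    # Self-supervised term: classify which rotation was applied to real images,
    # so the discriminator learns useful features without any labels.
    rotated, labels = rotate_batch(real)
    _, rot_logits = d(rotated)
    ss = F.cross_entropy(rot_logits, labels)
    return adv + ss_weight * ss
```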
09 Apr 2019, Prathyush SP
  
Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees.
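For context, the top-down generation proceeds through three actions: NT(X) opens a nonterminal, GEN(w) emits a word, and REDUCE closes the most recent open constituent. A toy sketch of the action semantics (in the real model each action is scored by RNNs over the stack; this only builds the tree):

```python
def execute(actions):
    stack = []
    for act in actions:
        if act[0] == "NT":            # open a constituent
            stack.append([act[1]])
        elif act[0] == "GEN":         # generate a terminal word
            stack[-1].append(act[1])
        elif act[0] == "REDUCE":      # close the most recent open constituent
            finished = stack.pop()
            if not stack:
                return finished       # closed the root
            stack[-1].append(finished)

tree = execute([("NT", "S"), ("NT", "NP"), ("GEN", "the"), ("GEN", "cat"),
                ("REDUCE",), ("NT", "VP"), ("GEN", "sleeps"), ("REDUCE",),
                ("REDUCE",)])
print(tree)  # ['S', ['NP', 'the', 'cat'], ['VP', 'sleeps']]
```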
09 Apr 2019, Prathyush SP
  
The authors show that simple regularized autoencoders can also learn a smooth, meaningful latent space without having to force the bottleneck code to conform to an arbitrarily chosen prior (e.g., a Gaussian, as in VAEs).
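A minimal sketch of one such regularized-autoencoder objective, assuming the regularizer is a simple penalty on the code norm in place of a KL term against a prior; `enc`, `dec`, and `code_weight` are illustrative placeholders:

```python
import torch

def rae_loss(enc, dec, x, code_weight=1e-2):
    z = enc(x)                                             # deterministic bottleneck code
    recon = ((dec(z) - x) ** 2).flatten(1).sum(-1).mean()  # reconstruction error
    code_reg = (z ** 2).flatten(1).sum(-1).mean()          # keep codes compact/smooth
    return recon + code_weight * code_reg
```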
09 Apr 2019, Prathyush SP
  
In this post, we go through the main building blocks of transformers and capsule networks and try to draw connections between the components of these two models. Our main goal is to understand whether these models are inherently different and, if not, how they relate.
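One connection such comparisons usually turn on: both models form a softmax-weighted combination of lower-level inputs based on an agreement score. A NumPy sketch of the transformer side (scaled dot-product attention); the capsule analogue replaces the one-shot query-key score with iteratively updated routing coefficients:

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # query-key agreement
    scores -= scores.max(-1, keepdims=True)      # stabilize the exponentials
    weights = np.exp(scores)
    weights /= weights.sum(-1, keepdims=True)    # softmax over keys
    return weights @ V                           # weighted sum of values

# Capsule routing computes analogous coefficients c_ij = softmax_j(b_ij),
# but updates b_ij over several iterations from the agreement between
# lower-level predictions and the emerging output capsule.
```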
06 Apr 2019, Prathyush SP
  
There are functions that Neural ODEs cannot represent. The authors propose augmented Neural ODEs, which are more expressive, empirically reduce computational cost, and improve generalization.
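The augmentation itself is simple: lift the state from R^d to R^(d+p) by appending zeros, so trajectories gain extra dimensions in which to pass one another without crossing. A rough sketch with a placeholder dynamics network `f` and a crude fixed-step Euler solve (a real implementation would use an adaptive ODE solver):

```python
import torch

def anode_forward(f, x, aug_dims=2, steps=20):
    zeros = x.new_zeros(x.size(0), aug_dims)
    h = torch.cat([x, zeros], dim=1)             # augmented initial state a(0) = [x, 0]
    dt = 1.0 / steps
    for k in range(steps):                       # Euler solve of dh/dt = f(h, t) on [0, 1]
        t = torch.full((x.size(0), 1), k * dt)
        h = h + dt * f(h, t)
    return h                                     # downstream layers read the final state
```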
04 Apr 2019, Prathyush SP
  
We released our new interactive annotation approach, which outperforms Polygon-RNN++ and is 10x faster.
03 Apr 2019, Prathyush SP
  
I’d like to invite you to join me on an exploration through different approaches to initializing layer weights in neural networks. Step-by-step, through various short experiments and thought exercises, we’ll discover why adequate weight initialization is so important in training deep neural nets. Along the way we’ll cover various approaches that researchers have proposed over the years, and finally drill down on what works best for the contemporary network architectures that you’re most likely to be working with.
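As a taste of the kind of experiment the post runs, here is a short NumPy sketch: with naive N(0,1) weights, the activations of a deep ReLU stack explode, while Kaiming-style scaling (std = sqrt(2 / fan_in)) keeps their scale stable across layers:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 512))

for name, std in [("naive N(0,1)", 1.0), ("kaiming", np.sqrt(2.0 / 512))]:
    h = x
    for _ in range(50):                      # a 50-layer ReLU stack
        W = rng.standard_normal((512, 512)) * std
        h = np.maximum(h @ W, 0.0)           # linear layer + ReLU
    print(name, "final std:", h.std())       # naive overflows to inf; kaiming stays ~1
```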
03 Apr 2019, Prathyush SP
  
This dataset code generates mathematical question-and-answer pairs covering a range of question types at roughly school-level difficulty. It is designed to test the mathematical learning and algebraic reasoning skills of learning models.
https://github.com/deepmind/mathematics_dataset
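A small reading sketch, assuming the pre-generated text files pair each question line with its answer on the following line (the file path below is illustrative):

```python
def read_pairs(path):
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    return list(zip(lines[0::2], lines[1::2]))   # (question, answer) pairs

pairs = read_pairs("train-easy/algebra__linear_1d.txt")
print(pairs[0])  # -> first (question, answer) tuple
```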
02 Apr 2019, Prathyush SP
  
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
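Assuming this refers to Gaussian process regression, where covariance kernels are the composable building blocks (they combine by addition and multiplication), a hypothetical scikit-learn example pairing a smooth kernel with a noise kernel:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.linspace(0, 10, 40)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(40)

# Compose building blocks: a smooth RBF component plus an observation-noise component.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
mean, std = gp.predict(X, return_std=True)   # posterior mean and uncertainty
```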
01 Apr 2019, Prathyush SP
  
Training large deep neural networks on massive datasets is very challenging. One promising approach to tackling this is large-batch stochastic optimization. However, our understanding of this approach in the context of deep learning is still limited, and current methods in this direction are heavily hand-tuned.
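Optimizers in this line of work (e.g., LARS-style methods) adapt the step size per layer by a trust ratio ||w|| / ||g||, which is what makes very large batches trainable without layer-by-layer hand-tuning. A hand-rolled sketch of that rescaling on top of plain SGD, for illustration only:

```python
import torch

def layerwise_sgd_step(params, lr=0.1, eps=1e-8):
    for w in params:
        if w.grad is None:
            continue
        g = w.grad
        trust = w.detach().norm() / (g.norm() + eps)   # per-layer trust ratio ||w|| / ||g||
        w.data -= lr * trust * g                       # step rescaled layer by layer
```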