07 May 2019, Prathyush SP
  
Based on MNASNet, found by neural architecture search, we applied additional methods to go even further (quantization-friendly Squeeze-Excite & Swish + NetAdapt + compact layers). Result: 2x faster and more accurate than MobileNetV2.
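For reference, a minimal sketch of the "hard" swish variant used as the quantization-friendly replacement, where the sigmoid is swapped for a piecewise-linear ReLU6 approximation; the PyTorch code below is illustrative, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def hard_sigmoid(x):
    return F.relu6(x + 3.0) / 6.0   # piecewise-linear stand-in for the sigmoid

def hard_swish(x):
    return x * hard_sigmoid(x)      # h-swish(x) = x * ReLU6(x + 3) / 6

x = torch.linspace(-4, 4, 9)
print(hard_swish(x))
print(x * torch.sigmoid(x))         # ordinary swish, for comparison
```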
03 May 2019, Prathyush SP
  
In this article we will introduce the idea of “decrappification”, a deep learning method implemented in fastai on PyTorch that can do some pretty amazing things, like… colorize classic black and white movies—even ones from back in the days of silent movies
30 Apr 2019, Prathyush SP
  
A common misconception is that the risk of overfitting increases with the number of parameters in the model. In reality, a single real-valued parameter suffices to fit most datasets.
https://github.com/Ranlot/single-parameter-fit/
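A sketch of the trick behind the linked repo, assuming the single-parameter family f_alpha(x) = sin²(2^(x·τ) · arcsin(√alpha)) from the accompanying paper; alpha and τ below are placeholders (the repo constructs alpha from the target data itself, which is why arbitrary-precision arithmetic is needed):

```python
from mpmath import mp, mpf, asin, sin, sqrt

mp.prec = 512                      # arbitrary precision: alpha's binary digits carry the data
tau = 8                            # assumed number of bits spent per sample

def f(x, alpha):
    # Each step shifts the phase by tau bits, so successive x read out
    # successive chunks of alpha's binary expansion.
    return sin(mpf(2) ** (x * tau) * asin(sqrt(alpha))) ** 2

alpha = mpf("0.3141592653589793")  # illustrative parameter, not a fitted one
print([float(f(x, alpha)) for x in range(5)])
```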
21 Apr 2019, Prathyush SP
  
Adversarial attacks on machine learning models have seen increasing interest in recent years. By making only subtle changes to the input of a convolutional neural network, its output can be swayed to a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that such attacks are feasible in the real world, e.g. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.
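A minimal sketch of the pixel-level attack described above, using the fast gradient sign method (FGSM) as one concrete instance; the model, image, and epsilon are placeholders:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()                          # stand-in classifier (use pretrained weights in practice)
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # stand-in for a real photo
label = torch.tensor([0])                                 # the image's true class index

# One gradient step on the input, in the direction that increases the loss.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01                                            # per-pixel perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
# The perturbed image looks nearly identical, but its predicted class often flips.
print(model(adversarial).argmax(dim=1))
```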
19 Apr 2019, Prathyush SP
  
The authors show that wave-based physical systems can be trained to operate as recurrent neural networks (RNNs), passively processing information in their native domain without analog-to-digital conversion.
https://arxiv.org/abs/1906.02715
16 Apr 2019, Prathyush SP
  
Neural Tangents is a set of tools for probing the linearized training dynamics of neural networks. Two dual perspectives are explored: linearization of training in weight space, and linearization of training in function space.
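Not Neural Tangents' own API, but a small PyTorch sketch of the weight-space perspective: take a first-order Taylor expansion of the network output in its parameters and compare it with the exact forward pass.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
w0 = tuple(p.detach().clone() for p in net.parameters())
x = torch.randn(4, 2)

def f(w1, b1, w2, b2):
    # Functional forward pass of the same architecture with explicit weights.
    h = torch.tanh(torch.nn.functional.linear(x, w1, b1))
    return torch.nn.functional.linear(h, w2, b2)

# First-order expansion around w0: f_lin(w0 + d) = f(w0) + J_w f(w0) d
delta = tuple(0.01 * torch.randn_like(p) for p in w0)
f0, jvp = torch.autograd.functional.jvp(f, w0, delta)
f_lin = f0 + jvp                                         # linearized prediction
f_true = f(*(w + d for w, d in zip(w0, delta)))          # exact prediction
print((f_lin - f_true).abs().max())                      # ~0 for small perturbations
```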
15 Apr 2019, Prathyush SP
  
In natural images, information is conveyed at different frequencies: higher frequencies usually encode fine details, while lower frequencies usually encode global structures. Similarly, the output feature maps of a convolution layer can be seen as a mixture of information at different frequencies.
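One way to exploit this observation is an octave-style convolution that keeps a high-frequency branch at full resolution and a low-frequency branch at half resolution, with cross-branch paths via pooling and upsampling. A rough sketch, not the paper's reference implementation:

```python
import torch
import torch.nn.functional as F
from torch import nn

class OctaveConvSketch(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5):
        super().__init__()
        # alpha controls how many channels live in the half-resolution branch.
        self.in_lo, self.out_lo = int(alpha * in_ch), int(alpha * out_ch)
        self.in_hi, self.out_hi = in_ch - self.in_lo, out_ch - self.out_lo
        self.hh = nn.Conv2d(self.in_hi, self.out_hi, 3, padding=1)
        self.hl = nn.Conv2d(self.in_hi, self.out_lo, 3, padding=1)
        self.lh = nn.Conv2d(self.in_lo, self.out_hi, 3, padding=1)
        self.ll = nn.Conv2d(self.in_lo, self.out_lo, 3, padding=1)

    def forward(self, x_hi, x_lo):
        # High-frequency output: H->H plus upsampled L->H.
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: L->L plus downsampled H->L.
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo

x_hi = torch.randn(1, 32, 32, 32)   # full-resolution channels
x_lo = torch.randn(1, 32, 16, 16)   # half-resolution channels
y_hi, y_lo = OctaveConvSketch(64, 64)(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)       # (1, 32, 32, 32), (1, 32, 16, 16)
```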
11 Apr 2019, Prathyush SP
  
By parametrizing fully-convolutional nets with a single high-order #tensor, we are able to leverage redundancies and get SOTA results on various tasks with comparatively fewer parameters.
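A toy back-of-the-envelope illustration of the redundancy argument (not the paper's actual parametrization, and the ranks are assumptions): factorizing a weight tensor into a small core plus per-mode factors cuts the parameter count sharply.

```python
# Toy parameter counts only; ranks below are illustrative assumptions.
c_out, c_in, k = 256, 256, 3
full_params = c_out * c_in * k * k             # dense 3x3 conv kernel: 589,824

rank = 32                                      # assumed per-mode rank
core_params = rank * rank * k * k              # small core tensor: 9,216
factor_params = c_out * rank + c_in * rank     # per-mode factor matrices: 16,384
low_rank_params = core_params + factor_params  # 25,600 in total

print(full_params, low_rank_params, round(full_params / low_rank_params, 1))  # ~23x fewer
```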
11 Apr 2019, Prathyush SP
  
Most existing studies on learning local features focus on patch-based descriptions of individual keypoints, while neglecting the spatial relations established by their keypoint locations. In this paper, we go beyond the local detail representation by introducing context awareness to augment off-the-shelf local feature descriptors.
10 Apr 2019, Prathyush SP
  
Weight decay doesn’t regularize if you use batch norm. Well, it does, but not how you think. See this paper from @RogerGrosse’s team; originally mentioned by van Laarhoven (2017) and explored by Hoffer et al. (2018).
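A quick sketch of why: the output of a linear/conv layer followed by batch norm is invariant to rescaling the preceding weights, so shrinking them (which is what weight decay does) mostly changes the effective learning rate rather than the learned function.

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(64, 10)

lin = nn.Linear(10, 20, bias=False)
bn = nn.BatchNorm1d(20, eps=0.0, affine=False)  # eps=0 so the rescaling cancels exactly

y1 = bn(lin(x))
lin.weight.data *= 0.1                          # mimic weight decay shrinking the weights
y2 = bn(lin(x))

print(torch.allclose(y1, y2, atol=1e-5))        # True: the normalized output is unchanged
```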