Introducing MobileNetV3


Based on MnasNet, found by neural architecture search, we applied additional methods to go even further (quantization-friendly Squeeze-and-Excite and swish, plus NetAdapt and compact layers). Result: 2x faster and more accurate than MobileNetV2.
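
For reference, the "quantization-friendly Squeeze-Excite & swish" pieces refer to the hard-sigmoid and hard-swish variants, which swap the sigmoid for a piecewise-linear ReLU6 expression. A minimal PyTorch sketch of both (layer names and the reduction ratio are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def hard_sigmoid(x):
    # piecewise-linear, quantization-friendly replacement for sigmoid: ReLU6(x + 3) / 6
    return F.relu6(x + 3.0) / 6.0

def hard_swish(x):
    # "h-swish": x * hard_sigmoid(x)
    return x * hard_sigmoid(x)

class SqueezeExcite(nn.Module):
    """Squeeze-and-Excite block gated with the hard sigmoid (reduction ratio assumed)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        s = F.adaptive_avg_pool2d(x, 1)                   # squeeze: global average pool
        s = hard_sigmoid(self.fc2(F.relu(self.fc1(s))))   # excite: per-channel gates
        return x * s

se = SqueezeExcite(16)
y = hard_swish(se(torch.randn(1, 16, 32, 32)))            # (1, 16, 32, 32)
```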

Decrappification, DeOldification, and Super Resolution


In this article we introduce the idea of “decrappification”, a deep learning method implemented in fastai on PyTorch that can do some pretty amazing things, like colorizing classic black-and-white movies, even ones from back in the days of silent film.

Real numbers, data science and chaos: How to fit any dataset with a single parameter


A common misconception is that the risk of overfitting grows with the number of parameters in a model. In reality, a single real-valued parameter suffices to fit essentially any dataset.

https://github.com/Ranlot/single-parameter-fit/
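
A minimal sketch of the idea (a toy re-implementation, not the repository's exact code; TAU, encode, and decode are names I chose): each sample in [0, 1] is mapped through arcsin, the resulting binary expansions are concatenated into one real number alpha, and the k-th sample is recovered as f_alpha(k) = sin^2(2^(k*TAU) * arcsin(sqrt(alpha))). Arbitrary-precision arithmetic (mpmath here) is what lets a single parameter hold an entire dataset.

```python
from mpmath import mp, mpf, sin, asin, sqrt, pi

TAU = 16  # bits kept per encoded sample (illustrative choice)

def encode(samples):
    """Pack samples in [0, 1] into a single real-valued parameter alpha."""
    mp.dps = 30 + len(samples) * TAU // 3      # enough decimal digits for every bit
    bits = ""
    for y in samples:
        psi = asin(sqrt(mpf(y))) / pi          # map each sample into [0, 1/2]
        bits += format(int(psi * 2 ** TAU), f"0{TAU}b")
    z = mpf(int(bits, 2)) / 2 ** len(bits)     # one binary fraction holding all samples
    return sin(pi * z) ** 2

def decode(alpha, k):
    """Recover the k-th sample: f_alpha(k) = sin^2(2^(k*TAU) * arcsin(sqrt(alpha)))."""
    return float(sin(2 ** (k * TAU) * asin(sqrt(alpha))) ** 2)

ys = [0.12, 0.85, 0.40, 0.66]
alpha = encode(ys)
print([round(decode(alpha, k), 2) for k in range(len(ys))])  # ≈ [0.12, 0.85, 0.4, 0.66]
```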

Fooling automated surveillance cameras


Adversarial attacks on machine learning models have seen increasing interest in recent years. By making only subtle changes to the input of a convolutional neural network, its output can be swayed to a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that such attacks are feasible in the real world, e.g. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.
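
As context for the pixel-perturbation attacks mentioned first, here is a minimal untargeted FGSM sketch in PyTorch (a generic illustration of that attack family, not the adversarial-patch method this paper proposes; the model, labels, and eps are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: nudge every pixel by +/- eps in the direction
    that increases the classification loss, pushing the classifier off its prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # y: ground-truth class indices
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # one signed-gradient step per pixel
    return x_adv.clamp(0, 1).detach()     # keep the image in a valid range
```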

Wave Physics as an Analog Recurrent Neural Network


The authors show that wave-based physical systems can be trained to operate as RNNs and can passively process information in their native domain, without analog-to-digital conversion.

https://arxiv.org/abs/1906.02715
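
The key observation is that the time-stepped scalar wave equation already has the shape of an RNN update: the field at the two previous time steps acts as the hidden state, the injected source is the input sequence, and the spatial wave-speed distribution plays the role of the trainable weights. A rough numpy sketch of that recurrence (my own illustration, not the authors' code; grid size, wave speed, and the source placement are arbitrary):

```python
import numpy as np

def wave_step(u, u_prev, c, source, dt=1.0, dx=1.0):
    """One step of the discretized 2D scalar wave equation:
    u_{t+1} = 2*u_t - u_{t-1} + (c*dt/dx)^2 * laplacian(u_t) + source.
    (u, u_prev) is the RNN-like hidden state; c is the trainable wave-speed map."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap + source
    return u_next, u

# Unroll over an input signal injected at a source pixel; a readout could then
# integrate the field intensity at a few probe locations.
c = np.full((64, 64), 0.5)                 # wave-speed distribution (the "weights")
u, u_prev = np.zeros((64, 64)), np.zeros((64, 64))
for x_t in np.sin(np.linspace(0, 8 * np.pi, 200)):   # toy input sequence
    source = np.zeros((64, 64))
    source[32, 8] = x_t
    u, u_prev = wave_step(u, u_prev, c, source)
```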

Neural Tangents: Linearizing Neural Network Training


Neural Tangents is a set of tools for probing the linearized training dynamics of neural networks. Two dual perspectives are explored: linearization of training in weight space, and linearization of training in function space.
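
To make the weight-space view concrete: the linearized model is simply the first-order Taylor expansion of the network around its initial parameters, and training that surrogate by gradient descent is analytically tractable. A bare-bones JAX sketch of the expansion itself (an illustration of the concept, not the Neural Tangents API; the toy network and parameter names are made up):

```python
import jax
import jax.numpy as jnp

def linearize(apply_fn, params0):
    """f_lin(params, x) = f(params0, x) + J_f(params0, x) . (params - params0),
    i.e. the network's first-order Taylor expansion in weight space."""
    def f_lin(params, x):
        dparams = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        y0, dy = jax.jvp(lambda p: apply_fn(p, x), (params0,), (dparams,))
        return y0 + dy
    return f_lin

# Toy usage with a tiny two-layer network.
def apply_fn(params, x):
    return jnp.tanh(x @ params["w"]) @ params["v"]

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params0 = {"w": jax.random.normal(k1, (8, 32)) / jnp.sqrt(8.0),
           "v": jax.random.normal(k2, (32, 1)) / jnp.sqrt(32.0)}
f_lin = linearize(apply_fn, params0)
x = jnp.ones((4, 8))
print(jnp.allclose(f_lin(params0, x), apply_fn(params0, x)))  # True: they agree at params0
```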

Drop an Octave


In natural images, information is conveyed at different spatial frequencies: higher frequencies usually encode fine details, while lower frequencies usually encode global structure. Similarly, the output feature maps of a convolution layer can be seen as a mixture of information at different frequencies.
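
Octave Convolution acts on exactly that decomposition: the feature maps are split into a high-frequency branch kept at full resolution and a low-frequency branch stored at half resolution, with convolutions exchanging information between the two. A simplified PyTorch sketch (the channel split ratio alpha and the pooling/upsampling choices are assumptions of this toy version, not the paper's full layer):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    """Simplified octave convolution with four paths: high->high, high->low,
    low->high, low->low. The low-frequency maps live at half spatial resolution."""
    def __init__(self, in_ch, out_ch, k=3, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        p = k // 2
        self.hh = nn.Conv2d(hi_in, hi_out, k, padding=p)
        self.hl = nn.Conv2d(hi_in, lo_out, k, padding=p)
        self.lh = nn.Conv2d(lo_in, hi_out, k, padding=p)
        self.ll = nn.Conv2d(lo_in, lo_out, k, padding=p)

    def forward(self, x_h, x_l):
        # high-frequency output: conv(high) + upsampled conv(low)
        y_h = self.hh(x_h) + F.interpolate(self.lh(x_l), scale_factor=2, mode="nearest")
        # low-frequency output: conv(low) + conv(downsampled high)
        y_l = self.ll(x_l) + self.hl(F.avg_pool2d(x_h, 2))
        return y_h, y_l

oc = OctConv(32, 64, alpha=0.5)
x_h, x_l = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 16, 16)
y_h, y_l = oc(x_h, x_l)   # shapes (1, 32, 32, 32) and (1, 32, 16, 16)
```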

T-Net


By parametrizing fully-convolutional nets with a single high-order tensor, we are able to leverage redundancies and get SOTA results on various tasks with comparatively fewer parameters.
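
A rough intuition for where the savings come from (a toy parameter count under an assumed Tucker-style factorization, not the paper's exact construction): instead of storing the full stacked weight tensor, you store a small core tensor plus one factor matrix per mode.

```python
import numpy as np

# Hypothetical example: 16 conv layers of shape (256, 256, 3, 3) stacked into one
# 5th-order tensor, versus a Tucker core plus one factor matrix per mode.
shape = (16, 256, 256, 3, 3)   # (layers, C_out, C_in, kH, kW)
ranks = (8, 64, 64, 3, 3)      # illustrative Tucker ranks

full_params = int(np.prod(shape))
tucker_params = int(np.prod(ranks)) + sum(s * r for s, r in zip(shape, ranks))
print(full_params, tucker_params, round(full_params / tucker_params, 1))
# ≈ 9.4M vs ≈ 0.33M parameters, roughly a 29x reduction in this toy setting
```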

ContextDesc: Local Descriptor Augmentation with Cross-Modality Context


Most existing studies on learning local features focus on patch-based descriptions of individual keypoints, while neglecting the spatial relations established by their keypoint locations. In this paper, we go beyond the local detail representation by introducing context awareness to augment off-the-shelf local feature descriptors.

Three Mechanisms of Weight Decay Regularization


Weight decay doesn’t regularize if you use batchnorm. Well, it does, but not in the way you think. See this paper from @RogerGrosse’s team. Originally mentioned by van Laarhoven (2017) and explored by Hoffer et al. (2018).
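
The crux is that a layer followed by batch normalization is invariant to the scale of its weights, so weight decay shrinking those weights cannot restrict the functions the network can represent; what it changes instead is the effective learning rate. A quick PyTorch check of that scale invariance (a toy demonstration, not the paper's experiments):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(64, 10)
lin = nn.Linear(10, 5, bias=False)
bn = nn.BatchNorm1d(5)                  # training mode: normalizes with batch statistics

out1 = bn(lin(x))
with torch.no_grad():
    lin.weight *= 10.0                  # rescale the weights by a large factor
out2 = bn(lin(x))

# The post-batchnorm activations are unchanged (up to BN's epsilon term), so shrinking
# or growing the weights does not by itself change what the network computes.
print(torch.allclose(out1, out2, atol=1e-4))  # True
```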