Super cool resource: Papers With Code

Papers With Code now includes 950+ ML tasks, 500+ evaluation tables (including SOTA results) and 8500+ papers with code.

Hotels-50K: A Global Hotel Recognition Dataset

DoWhy – Python library to estimate causal effects

ATOMIC - An Atlas of Machine Commonsense for If-Then Reasoning

We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., “if X pays Y a compliment, then Y will likely return the compliment”).

https://homes.cs.washington.edu/~msap/atomic/data/sap2019atomic.pdf
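The typed if-then relations above can be pictured as (event, relation, inference) triples. A minimal sketch of such a knowledge store (the relation names follow ATOMIC's typed relations, but this data structure is illustrative, not the dataset's released format):

```python
# Illustrative sketch: ATOMIC-style if-then knowledge stored as
# (event, relation, inference) triples. Relation names like "oReact"
# follow ATOMIC's typing; the structure itself is hypothetical.
from collections import defaultdict

triples = [
    ("X pays Y a compliment", "xIntent", "to be nice"),
    ("X pays Y a compliment", "oReact", "flattered"),
    ("X pays Y a compliment", "oEffect", "Y returns the compliment"),
]

# Index inferences by (event, relation) for typed lookup.
kb = defaultdict(list)
for event, relation, inference in triples:
    kb[(event, relation)].append(inference)

def infer(event, relation):
    """Return stored inferences for a typed if-then query."""
    return kb.get((event, relation), [])

print(infer("X pays Y a compliment", "oReact"))  # ['flattered']
```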

Introduction to Probability and Statistics, by Berkeley

Basic probability: random variables, conditional probabilities, Bayes' rule • Naive Bayes • Multiple tests • Examples: OCR • Sampling • Distributions (categorical, normal, uniform) • Central limit theorem
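As a worked example of Bayes' rule from the topics above (the medical-test numbers are made up for illustration):

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), illustrated with the
# classic diagnostic-test example. All numbers are invented.
def bayes(prior, likelihood, false_positive_rate):
    """Posterior P(disease | positive test) via Bayes' rule."""
    # Total probability of a positive test (law of total probability).
    p_positive = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_positive

# 1% prevalence, 95% sensitivity, 5% false-positive rate:
# even a positive test leaves only ~16% probability of disease.
posterior = bayes(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.161
```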

Neural networks can be vulnerable to adversarial noise

Recent work has shown that it is possible to train deep neural networks that are verifiably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they remain hard to scale to larger networks. Through a comprehensive analysis, we show how a careful implementation of a simple bounding technique, interval bound propagation (IBP), can be exploited to train verifiably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and choice of hyper-parameters allows the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to obtain the first verifiably robust model on a downscaled version of ImageNet.
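To make the bounding idea concrete, here is a minimal numpy sketch of interval bound propagation through one affine layer followed by a ReLU; the weights are random stand-ins, not a trained network, and a real verifier would propagate bounds through every layer:

```python
# Sketch of interval bound propagation (IBP): given an input box
# [x - eps, x + eps], compute an output box guaranteed to contain
# every possible activation. Weights here are random stand-ins.
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate elementwise interval bounds through x -> W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius   # worst case over the input box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)
x = np.array([0.5, -0.2])
eps = 0.1                              # norm-bounded perturbation size
lo, hi = ibp_affine(x - eps, x + eps, W, b)
lo, hi = ibp_relu(lo, hi)
assert np.all(lo <= hi)                # bounds stay ordered
```

The `np.abs(W) @ radius` term is what makes the bound sound but potentially loose; the abstract's point is that training against this bound makes the network adapt so the bound becomes tight.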

An AI that 'de-biases' algorithms

We’ve learned in recent years that AI systems can be unfair, which is dangerous when they’re increasingly being used to do everything from predict crime to determine what news we consume. Last year’s study showing the racism of face-recognition algorithms demonstrated a fundamental truth about AI: if you train with biased data, you’ll get biased results.

Practical DL for Coders, 2019 edition

After the first lesson you’ll be able to train a state-of-the-art image classification model on your own data. After completing this lesson, some students from the in-person version of this course (where this material was recorded) published new state-of-the-art results in various domains! The focus for the first half of the course is on practical techniques, showing only the theory required to actually use these techniques in practice. Then, in the second half of the course, we dig deeper and deeper into the theory, until by the final lesson we will build and train a “resnet” neural network from scratch which approaches state-of-the-art accuracy.

Nauta - A multi-user, distributed computing environment for running DL model training experiments

The Nauta software provides a multi-user, distributed computing environment for running deep learning model training experiments. Results of experiments can be viewed and monitored using a command-line interface, a web UI, and/or TensorBoard*. You can use existing data sets, use your own data, or download data from online sources, and create public or private folders to make collaboration among teams easier.

Recent Advances in Efficient Computation of DCNs

Recent Advances in Efficient Computation of Deep Convolutional Neural Networks

A survey of the following topics:
– network pruning
– low-rank approximation
– network quantization
– teacher-student networks
– compact network design and hardware accelerators
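As a toy illustration of the first topic, magnitude-based pruning zeroes the smallest-magnitude fraction of a weight matrix. This generic baseline sketch is mine, not taken from the survey:

```python
# Toy magnitude-based weight pruning: zero out the smallest-magnitude
# `sparsity` fraction of weights, a common baseline in the pruning
# literature. Ties at the threshold may prune slightly more weights.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest `sparsity`
    fraction (by absolute value) set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

W = np.array([[0.9, -0.05], [0.02, -1.2]])
print(magnitude_prune(W, 0.5))  # zeros the two smallest entries
```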