Natural Questions - Open-domain question answering systems


Open-domain question answering (QA) is a benchmark task in natural language understanding (NLU) that aims to emulate how people look for information, finding answers to questions by reading and understanding entire documents. Given a question expressed in natural language (“Why is the sky blue?”), a QA system should be able to read the web (such as this Wikipedia page) and return the correct answer, even if the answer is somewhat complicated and long. However, there are currently no large, publicly available sources of naturally occurring questions (i.e. questions asked by a person seeking information) and answers that can be used to train and evaluate QA models.

Active Learning with Partial Feedback


It is expensive to obtain full label information from a labeler: e.g. picking a fine-grained category. Can the algorithm actively decide what to ask the labeler? Can it move on to a different example in a strategic way?
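One way to make those questions concrete is to replace the expensive "pick the exact class" request with cheap yes/no questions about subsets of classes. The sketch below is a hypothetical illustration (the `label_with_partial_feedback` helper and the class names are made up, not from the paper): it binary-searches the candidate label set using an oracle that simulates the labeler.

```python
# Hypothetical sketch: partial-feedback labeling via yes/no questions.
# Instead of asking the annotator for the exact fine-grained class, we
# repeatedly ask "is the true label in this subset?" and shrink the
# candidate set -- cheaper questions, same final label.

def label_with_partial_feedback(candidates, oracle):
    """candidates: list of class names.
    oracle(subset) -> True iff the true label is in `subset`
    (simulates a yes/no question to the labeler)."""
    questions = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        questions += 1
        if oracle(set(half)):
            candidates = half
        else:
            candidates = candidates[len(half):]
    return candidates[0], questions

# Simulated annotator who knows the true fine-grained label.
true_label = "golden_retriever"
classes = ["tabby_cat", "siamese_cat", "golden_retriever", "beagle"]
label, n_questions = label_with_partial_feedback(classes, lambda s: true_label in s)
```

With four candidate classes this resolves the label in two yes/no questions instead of one four-way question; an active learner could additionally choose which example to query next based on how many questions it expects to need.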

Automating project management with DL


This is extremely cool: Mudano, a project management company, shows how to use fastai’s pretrained @pytorch NLP language model and transfer learning to rapidly build a classifier for project status reports.

Manifold: A Model-Agnostic Visual Debugging Tool for ML


To make the model iteration process more informed and actionable, we developed Manifold, Uber’s in-house model-agnostic visualization tool for ML performance diagnosis and model debugging. Taking advantage of visual analytics techniques, Manifold allows ML practitioners to look beyond overall summary metrics to detect which subset of data a model is inaccurately predicting. Manifold also explains the potential cause of poor model performance by surfacing the feature distribution difference between better and worse-performing subsets of data. Moreover, it can display how several candidate models have different prediction accuracies for each subset of data, providing justification for advanced treatments such as model ensembling.

Looking Back at Google’s Research Efforts in 2018


2018 was an exciting year for Google’s research teams, with our work advancing technology in many ways, including fundamental computer science research results and publications, the application of our research to emerging areas new to Google (such as healthcare and robotics), open source software contributions and strong collaborations with Google product teams, all aimed at providing useful tools and services. Below, we highlight just some of our efforts from 2018, and we look forward to what will come in the new year.

Interested in decision making under uncertainty?


Python toolkit for emulation and decision making under uncertainty:
- Accessible and built with reusable components
- Independent of the modelling framework: use it with MXNet, TensorFlow, or GPy

Explainability Vs Interpretability In AI & ML


Over the last few years, there have been several innovations in the field of artificial intelligence and machine learning. As the technology expands into various domains, from academics to cooking robots, it is significantly impacting our lives. For instance, a business or finance user may use a machine learning system as a black box, meaning they don’t know what lies within.

NeurIPS 2018


Newly indexed videos have been linked in our schedule. All videos from NeurIPS 2018

Super-resolution GANs for improving the texture resolution of old games


In case you don’t know what ESRGAN is: it stands for Enhanced Super-Resolution Generative Adversarial Networks, an AI technique that improves the textures of older games. Paper

Exponentially weighted average weights


How to Create an Equally, Linearly, and Exponentially Weighted Average of Neural Network Model Weights in Keras
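The linked tutorial builds these averages with Keras model weights; as a framework-free sketch of the three weighting schemes themselves (NumPy only, with a hypothetical `average_weights` helper, not the tutorial’s code), they might look like:

```python
import numpy as np

def average_weights(weight_snapshots, scheme="equal", alpha=2.0):
    """Average a list of weight arrays (e.g. per-epoch snapshots).

    scheme: 'equal'  - simple mean of all snapshots,
            'linear' - snapshot i gets coefficient proportional to i+1,
            'exp'    - snapshot i gets coefficient proportional to alpha**i,
                       so later snapshots dominate.
    """
    n = len(weight_snapshots)
    if scheme == "equal":
        coeffs = np.ones(n)
    elif scheme == "linear":
        coeffs = np.arange(1, n + 1, dtype=float)
    elif scheme == "exp":
        coeffs = alpha ** np.arange(n, dtype=float)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    coeffs /= coeffs.sum()  # normalize so the coefficients sum to 1
    return sum(c * w for c, w in zip(coeffs, weight_snapshots))
```

In Keras one would apply this per layer to the arrays returned by `model.get_weights()` for each saved snapshot, then load the result back with `model.set_weights()`.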