23 Jan 2019, Prathyush SP
  
Open-domain question answering (QA) is a benchmark task in natural language understanding (NLU) that aims to emulate how people look for information, finding answers to questions by reading and understanding entire documents. Given a question expressed in natural language (“Why is the sky blue?”), a QA system should be able to read the web (such as this Wikipedia page) and return the correct answer, even if the answer is somewhat complicated and long. However, there are currently no large, publicly available sources of naturally occurring questions (i.e. questions asked by a person seeking information) and answers that can be used to train and evaluate QA models.
23 Jan 2019, Prathyush SP
  
It is expensive to obtain full information from a labeler, e.g. asking them to pick a fine-grained category. Can the algorithm actively decide what to ask the labeler? Can it move on to a different example in a strategic way?
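As a concrete illustration of the active-learning idea above, here is a minimal uncertainty-sampling sketch (scikit-learn on synthetic data; the seed/pool split and the entropy query criterion are my own illustrative choices, not tied to any particular paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data: a small labeled seed set and a large unlabeled pool.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled_idx = np.arange(20)           # examples we already have labels for
pool_idx = np.arange(20, len(X))      # unlabeled pool

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled_idx], y[labeled_idx])

    # Uncertainty sampling: query the pool example with the highest
    # predictive entropy, i.e. the one the model is least sure about.
    probs = clf.predict_proba(X[pool_idx])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    query = pool_idx[np.argmax(entropy)]

    # In a real system the labeler would answer here; with synthetic data
    # we simply reveal the stored label.
    labeled_idx = np.append(labeled_idx, query)
    pool_idx = pool_idx[pool_idx != query]
    print(f"round {round_}: queried example {query}, "
          f"pool accuracy {clf.score(X[pool_idx], y[pool_idx]):.3f}")
```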
19 Jan 2019, Prathyush SP
  
This is extremely cool: Mudano, a project management company, shows how to use fastai’s pretrained NLP @pytorch language model and transfer learning to rapidly build a classifier for project status reports.
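The workflow behind this is the ULMFiT-style recipe that fastai v1 provides: fine-tune a pretrained language model on your own text, then reuse its encoder for a classifier. A minimal sketch is below; the CSV name and column names are hypothetical, and exact function signatures vary between fastai versions.

```python
from fastai.text import *   # fastai v1 text API

path = Path('data')  # hypothetical folder holding reports.csv with 'text' and 'status' columns

# 1. Fine-tune the pretrained (WikiText-103) language model on the report text.
data_lm = TextLMDataBunch.from_csv(path, 'reports.csv', text_cols='text', label_cols='status')
lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learn.fit_one_cycle(1, 1e-2)
lm_learn.save_encoder('ft_enc')          # keep the fine-tuned encoder

# 2. Train a classifier on top of the fine-tuned encoder (transfer learning).
data_clas = TextClasDataBunch.from_csv(path, 'reports.csv', vocab=data_lm.vocab,
                                       text_cols='text', label_cols='status')
clf_learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf_learn.load_encoder('ft_enc')
clf_learn.fit_one_cycle(1, 1e-2)

clf_learn.predict("Milestone slipped by two weeks; mitigation plan in place.")
```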
17 Jan 2019, Prathyush SP
  
To make the model iteration process more informed and actionable, we developed Manifold, Uber’s in-house model-agnostic visualization tool for ML performance diagnosis and model debugging. Taking advantage of visual analytics techniques, Manifold allows ML practitioners to look beyond overall summary metrics to detect which subset of data a model is inaccurately predicting. Manifold also explains the potential cause of poor model performance by surfacing the feature distribution difference between better and worse-performing subsets of data. Moreover, it can display how several candidate models have different prediction accuracies for each subset of data, providing justification for advanced treatments such as model ensembling.
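Manifold itself is a visual tool, but the core idea it describes, slicing validation data into subsets and comparing error and feature distributions between the better- and worse-performing slices, can be sketched in a few lines of pandas. The column names below are hypothetical and this is not Manifold’s API:

```python
import pandas as pd

# Hypothetical validation results: one row per example, with the model's
# absolute error, an input feature, and a segment to slice on.
df = pd.DataFrame({
    "segment":       ["A", "A", "B", "B", "C", "C"],
    "abs_error":     [0.1, 0.2, 1.5, 1.1, 0.3, 0.2],
    "trip_distance": [2.0, 3.1, 9.5, 8.7, 2.5, 3.0],
})

# 1. Which subsets of the data is the model predicting poorly?
per_segment = df.groupby("segment")["abs_error"].mean().sort_values(ascending=False)
print(per_segment)

# 2. Surface the feature-distribution difference between the worst-performing
#    subset and the rest, as a hint at the cause of the poor performance.
worst = per_segment.index[0]
print(df[df.segment == worst]["trip_distance"].describe())
print(df[df.segment != worst]["trip_distance"].describe())
```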
17 Jan 2019, Prathyush SP
  
2018 was an exciting year for Google’s research teams, with our work advancing technology in many ways, including fundamental computer science research results and publications, the application of our research to emerging areas new to Google (such as healthcare and robotics), open source software contributions and strong collaborations with Google product teams, all aimed at providing useful tools and services. Below, we highlight just some of our efforts from 2018, and we look forward to what will come in the new year.
17 Jan 2019, Prathyush SP
  
Python toolkit for emulation and decision making under uncertainty
Accessible and built with reusable components
Independent of the modelling framework. Use it with MXNet, TensorFlow, GPy
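The note above does not name the toolkit, but the central idea, emulation under uncertainty, is a Gaussian-process surrogate of an expensive function whose predictions carry a variance that downstream decision-making can exploit. As a minimal illustration using GPy (one of the frameworks listed), here is a plain surrogate-model sketch; such a toolkit would typically wrap a model like this in its reusable loop components:

```python
import numpy as np
import GPy

# Expensive "simulator" we want to emulate (stand-in for a real model run).
def simulator(x):
    return np.sin(3 * x) + 0.1 * np.random.randn(*x.shape)

# A handful of simulator runs to train the emulator on.
X = np.random.uniform(0, 2, size=(10, 1))
Y = simulator(X)

# Gaussian-process surrogate: cheap to evaluate and reports its own uncertainty.
kernel = GPy.kern.RBF(input_dim=1)
emulator = GPy.models.GPRegression(X, Y, kernel)
emulator.optimize()

# Predictions come with a variance, which decision-making steps such as
# Bayesian optimization or experimental design can exploit.
X_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, var = emulator.predict(X_new)
print(np.hstack([X_new, mean, var]))
```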
15 Jan 2019, Prathyush SP
  
Over the last few years, there have been several innovations in the field of artificial intelligence and machine learning. As the technology expands into various domains, from academics to cooking robots, it is significantly impacting our lives. A business or finance user, for instance, often uses machine learning technology as a black box, meaning they don’t know what lies within.
12 Jan 2019, Prathyush SP
  
Newly indexed videos have been linked in our schedule. All videos from NeurIPS 2018.
07 Jan 2019, Prathyush SP
  
In case you don’t know what ESRGAN is: it stands for Enhanced Super-Resolution Generative Adversarial Networks, an AI technique that improves older games’ textures. Paper
07 Jan 2019, Prathyush SP
  
How to Create an Equally, Linearly, and Exponentially Weighted Average of Neural Network Model Weights in Keras
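The technique in this how-to is simple to state in code: average the weight tensors of models saved over the last few training epochs, using equal, linearly increasing, or exponentially increasing coefficients, then load the averaged tensors back into a model. A minimal sketch follows (Keras; the list of saved snapshot models and the decay rate alpha are illustrative):

```python
import numpy as np

def average_weights(models, weighting="equal", alpha=2.0):
    """Average the weights of several Keras models of identical architecture.

    weighting: 'equal', 'linear' (later models count more), or
    'exponential' (later models count exponentially more, rate alpha).
    """
    n = len(models)
    if weighting == "equal":
        coeffs = np.ones(n)
    elif weighting == "linear":
        coeffs = np.arange(1, n + 1, dtype=float)
    elif weighting == "exponential":
        coeffs = alpha ** np.arange(n, dtype=float)
    coeffs = coeffs / coeffs.sum()

    # Each model's weights are a list of numpy arrays; combine them layer by layer.
    per_model = [m.get_weights() for m in models]
    return [np.sum([c * w[i] for c, w in zip(coeffs, per_model)], axis=0)
            for i in range(len(per_model[0]))]

# Usage: `snapshots` would be models saved at the end of the last few epochs;
# the averaged weights are loaded into a model of the same architecture.
# ensemble_model.set_weights(average_weights(snapshots, weighting="exponential"))
```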