How-To


Researchers Unveil Working Memory Graph Architecture for Reinforcement Learning

The Pure AI editors keep you abreast of the latest machine learning advancements by explaining a new neural architecture for solving reinforcement learning (RL) problems. The Working Memory Graph (WMG) uses the Transformer, a deep neural technique originally developed for natural language processing, and it significantly outperformed baseline RL techniques in experiments on several difficult benchmark problems.
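
The WMG paper itself is not reproduced here, but the Transformer building block it relies on, scaled dot-product self-attention over a set of vectors, can be sketched in a few lines. The slot count, dimensions, and random weights below are illustrative stand-ins, not values from the paper.

# Minimal sketch of scaled dot-product self-attention (numpy), the Transformer
# building block that an architecture like WMG applies to a set of
# working-memory vectors. Shapes and weights here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
num_slots, d_model = 4, 8                  # 4 memory slots, 8-dim embeddings
memory = rng.normal(size=(num_slots, d_model))

# Hypothetical learned projection matrices (random stand-ins here).
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = memory @ W_q, memory @ W_k, memory @ W_v
scores = Q @ K.T / np.sqrt(d_model)              # pairwise slot-to-slot scores
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)    # softmax over the slots
attended = weights @ V                           # each slot reads from all the others

print(attended.shape)   # (4, 8): the updated memory representation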


The AutoML-Zero System Automatically Generates Machine Learning Programs

The Pure AI editors explain a new paper that describes how a computer program can automatically generate a machine learning algorithm, which in turn can be used to build a prediction model.
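
AutoML-Zero searches for whole learning programs with an evolutionary loop of mutate, evaluate, and select. The sketch below shows that loop on a much simpler search space (the weights of a linear model on synthetic data); it is a conceptual stand-in, not the paper's algorithm.

# Toy evolutionary search in the spirit of AutoML-Zero's outer loop.
# AutoML-Zero evolves whole learning programs; this sketch evolves only the
# weights of a linear model, to show the mutate-evaluate-select cycle.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def fitness(w):
    return -np.mean((X @ w - y) ** 2)   # negative mean squared error: higher is better

population = [rng.normal(size=3) for _ in range(20)]
for generation in range(200):
    parent = max(population, key=fitness)            # pick the fittest individual as parent
    child = parent + rng.normal(scale=0.1, size=3)   # random mutation
    population.pop(0)                                # drop the oldest member (regularized evolution)
    population.append(child)

print(max(population, key=fitness))   # should drift toward [2.0, -1.0, 0.5]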


Homomorphic Encryption Makes Slow But Steady Progress

Homomorphic encryption allows computation directly on encrypted data, an approach that has so far been plagued by poor performance compared to operations on unencrypted data. But it's getting there, the Pure AI editors explain.
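
As a toy illustration of computing on encrypted data, textbook RSA (with no padding) happens to be multiplicatively homomorphic: multiplying two ciphertexts produces a valid encryption of the product of the plaintexts. The tiny primes below are for demonstration only; practical homomorphic encryption schemes are far more elaborate.

# Toy demonstration of a (partially) homomorphic property using textbook RSA.
# This only illustrates the core idea of computing on encrypted values.
p, q = 61, 53
n = p * q                    # 3233
e, d = 17, 2753              # textbook public/private exponents for this n

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 9
c_product = (encrypt(a) * encrypt(b)) % n   # multiply the ciphertexts only
print(decrypt(c_product))                   # 63 == a * b, computed under encryption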

Understanding Variational Autoencoders – for Mere Mortals

Here's an explanation of variational autoencoders -- one of the fundamental types of deep neural networks -- which are used for synthetic data generation.
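
A sketch of the two terms a variational autoencoder trains against -- reconstruction error plus a KL divergence that pulls the latent distribution toward a standard normal -- using made-up numbers in place of real encoder and decoder outputs:

# Minimal numpy sketch of the variational autoencoder loss. Values here are
# illustrative; a real VAE would produce mu, log_var and the reconstruction
# with trained encoder and decoder networks.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(5, 10))                                 # a small batch of "data"
x_reconstructed = x + rng.normal(scale=0.1, size=x.shape)    # stand-in decoder output
mu = rng.normal(scale=0.5, size=(5, 2))                      # encoder means (2-dim latent)
log_var = rng.normal(scale=0.1, size=(5, 2))                 # encoder log-variances

# Reparameterization trick: z = mu + sigma * epsilon keeps the sampling step differentiable.
epsilon = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * epsilon

reconstruction_loss = np.mean(np.sum((x - x_reconstructed) ** 2, axis=1))
kl_loss = np.mean(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
print(reconstruction_loss + kl_loss)    # the quantity a VAE minimizes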

Comparing 4 ML Classification Techniques: Logistic Regression, Perceptron, Support Vector Machine, and Neural Networks

Learn about four of the most commonly used machine learning classification techniques, which predict the value of a variable that can take on discrete values.
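
For readers who want to experiment, a quick way to try all four techniques is with scikit-learn's built-in implementations. The synthetic data and default hyperparameters below are illustrative; the article's own examples and settings may differ.

# Compare the four classifiers on a synthetic binary classification problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "perceptron": Perceptron(),
    "support vector machine": SVC(),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))   # test-set accuracy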


New Relic Launches AI for DevOps (AIOps)

New Relic unveiled a new suite of artificial intelligence- and machine learning-based on-call DevOps capabilities Tuesday.


Researchers Release Open Source Counterfactual Machine Learning Library

Microsoft researchers released an open source code library for generating machine learning counterfactuals, used for scenarios such as loan applications. We spoke with Dr. Amit Sharma, one of the project leaders, and asked him to explain what machine learning counterfactuals are and why they're important.
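
Conceptually, a counterfactual answers the question "what is the smallest change to this input that would flip the model's decision?" The sketch below finds one by brute force for a made-up loan example with a plain scikit-learn model; it does not use the actual API of Microsoft's library, and the feature names and data are invented for illustration.

# Find a simple counterfactual: the smallest income increase that flips a
# rejected loan-style example to approved under a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Features: [income, debt]; label: 1 = approved
X = rng.normal(loc=[50, 20], scale=[15, 8], size=(300, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=5, size=300) > 25).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[40.0, 30.0]])           # currently predicted as rejected
print(model.predict(applicant)[0])             # 0

for extra_income in np.arange(0, 40, 1.0):     # smallest change that flips the decision
    candidate = applicant + np.array([[extra_income, 0.0]])
    if model.predict(candidate)[0] == 1:
        print("counterfactual:", candidate)
        break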


Researchers Explore Deep Neural Memory for Natural Language Tasks

Researchers at Microsoft have demonstrated a new type of computer memory that outperformed many existing systems when applied to a well-known benchmark set of natural language processing (NLP) problems. The memory architecture is called metalearned neural memory (MNM), or more generally, deep neural memory.
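
The core idea of a deep neural memory -- storing information in the weights of a network rather than in explicit slots -- can be sketched with a linear "network": a write is a few gradient steps that bind a key to a value, and a read is a forward pass. MNM's deeper network and metalearned update rules are far more sophisticated than this stand-in.

# Conceptual write/read cycle for a weight-based neural memory.
import numpy as np

rng = np.random.default_rng(4)
d = 16
W = np.zeros((d, d))                  # the memory: weights of a linear "network"

key = rng.normal(size=d)
value = rng.normal(size=d)

# Write: gradient steps on the squared error ||W @ key - value||^2
# so the network stores the key-value association in its weights.
for _ in range(50):
    error = W @ key - value
    W -= 0.03 * np.outer(error, key)

# Read: a forward pass with the key recalls an approximation of the value.
recalled = W @ key
print(np.allclose(recalled, value, atol=1e-2))   # True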


Understanding Neural Word Embeddings

Data scientists at Microsoft Research explain, at a medium level of abstraction and with code snippets and examples, how word embeddings are used in natural language processing -- an area of artificial intelligence/machine learning that has seen many significant advances recently.
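
The essential property of word embeddings is that related words map to nearby vectors. The 4-dimensional vectors below are invented for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions and are learned from large text corpora.

# Tiny illustration of comparing word embeddings with cosine similarity.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.7, 0.7, 0.2, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9, 0.7]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # lower: unrelated words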

Neurosymbolic AI Advances State of the Art on Math Word Problems

Researchers at Microsoft have demonstrated a new technique called Neurosymbolic AI, which has shown promising results when applied to difficult scenarios such as algebra problems stated in words. The Pure AI editors were given a sneak peek at the draft of a research paper that describes the work.
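
One way to picture the neurosymbolic split: a neural front end translates the word problem into a formal equation, and a symbolic solver then handles the algebra exactly. The sketch below shows only the symbolic half, with a hand-written equation standing in for the neural component's output.

# Symbolic half of a hypothetical neurosymbolic pipeline: solve the equation
# derived from "Three times a number plus five is twenty" exactly with sympy.
from sympy import Eq, Symbol, solve

x = Symbol("x")
equation = Eq(3 * x + 5, 20)      # the equation a neural component would hand over
print(solve(equation, x))         # [5]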
