Our 5 Favorite ML/AI Talks from Build 2020

Microsoft Build is an annual conference aimed at software engineers and web developers who use Microsoft and open source technologies. The 2020 edition of Build was held from Tuesday May 19 through Thursday May 21. Due to the COVID-19 pandemic, the event was completely online and free rather than the usual in-person (and expensive) conference.

As recently as 2016, the Build conference didn't even list machine learning and artificial intelligence (ML/AI) as a session category in the conference agenda. The 2020 edition of Build featured approximately 40 ML/AI sessions. The PureAI editors reviewed all these sessions and selected five that you might be interested in, regardless of your background. All of the 2020 Build presentations, including the ones described in this article, are available online.

"Train and Deploy ML Models at Scale Using Azure Machine Learning"
by Chris Lauren & Sabina Cartacio

Azure Machine Learning isn't a specific product. Instead, the term refers to the entire collection of ML/AI technologies and services that can be accessed on Microsoft's Azure cloud system. As such, it's difficult to get a good understanding of exactly what Azure Machine Learning is.

This 30-minute presentation begins with Chris giving an excellent overview of three of the most important parts of Azure Machine Learning: creating an ML model completely automatically using the AutoML command line system; creating an ML model using the GUI-based Azure Machine Learning Studio (or its replacement, Azure Machine Learning Designer, currently in preview mode); and creating a customized ML model using Jupyter notebooks on an Azure virtual machine.

Figure 1: Train and Deploy ML Models at Scale Using Azure Machine Learning (source: Microsoft).

Next, Chris explains the overall ML/AI development process and how it is common to most Azure Machine Learning services. This is followed by a complete end-to-end demo of creating a question-and-answer natural language processing system with Python code and the TensorFlow library. The Q&A model uses the popular BERT (Bidirectional Encoder Representations from Transformers) technique on an Azure GPU cluster.

The presentation concludes with a demo of Automated ML, a general term that usually refers to any system that accepts training data and generates an ML prediction model with only minimal developer input. The demo forecasts the number of questions that will be submitted to the Stack Overflow web site at a given point in time.
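The core idea behind automated ML can be sketched in a few lines: try several candidate models on held-out data and keep the one with the lowest error. The toy series and the three candidate forecasters below are made up for illustration; this is the concept only, not the Azure Automated ML SDK, which also sweeps preprocessing steps and hyperparameters.

```python
# Sketch of the automated-ML idea: fit several candidate forecasters,
# score each on a held-out slice of the series, keep the best one.
series = [10, 12, 14, 17, 19, 22, 24, 27, 29, 32]   # e.g. questions per hour (made up)
train, test = series[:7], series[7:]

def naive(train, n):    # repeat the last observed value
    return [train[-1]] * n

def mean(train, n):     # predict the historical average
    m = sum(train) / len(train)
    return [m] * n

def trend(train, n):    # extend the average step size into the future
    step = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + step * (i + 1) for i in range(n)]

def mae(pred, actual):  # mean absolute error on the held-out slice
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

candidates = {"naive": naive, "mean": mean, "trend": trend}
scores = {name: mae(f(train, len(test)), test) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # -> trend
```

Real automated-ML systems do the same select-by-validation-score loop, just over far richer model families.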

"The Future of Tech, With Kevin Scott and Guests"
by Kevin Scott and Peter Lee

This keynote presentation covers a lot of ground and is mostly inspirational rather than a nuts-and-bolts how-to presentation. Kevin Scott is an executive vice president who was recently placed in charge of Microsoft Technology and Research following the retirement of the widely respected Dr. Harry Shum. Dr. Peter Lee is a corporate vice president who was recently placed in charge of the Microsoft Research Labs after the former executive in charge moved to an internal role with a smaller scope.

The presentation begins with Kevin giving a very interesting description of the history of computer science, leading up to the current ML/AI revolution. Next, Peter and a guest discuss how ML/AI can be used in health care scenarios, including the COVID-19 pandemic.

Figure 2: The Future of Tech, With Kevin Scott and Guests (source: Microsoft).

Next, the presentation addresses natural language processing, currently the most active area of ML/AI research and advances. Luis Vargas gives a nice demo of self-supervised learning. Traditionally, the term unsupervised learning refers to ML techniques such as clustering, which don't require human-assigned labels and which use the entire source dataset. Self-supervised learning refers to many types of natural language processing where one part of the source dataset is used to predict another part of the source dataset.
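That distinction can be made concrete with a tiny sketch: a self-supervised NLP system manufactures its own labels by hiding one part of the text and predicting it from the rest. The code below only builds such (context, target) training pairs; a real system such as BERT trains a large network on billions of them.

```python
# Sketch of the self-supervised setup used in NLP: no human labels --
# each training example masks one word and asks the model to predict it
# from the surrounding words.
def masked_pairs(sentence, mask="[MASK]"):
    words = sentence.split()
    pairs = []
    for i, target in enumerate(words):
        context = words[:i] + [mask] + words[i + 1:]
        pairs.append((" ".join(context), target))
    return pairs

pairs = masked_pairs("the cat sat on the mat")
print(pairs[1])  # -> ('the [MASK] sat on the mat', 'cat')
```

Every sentence of raw text yields one training example per word, which is why these models can be trained on enormous unlabeled corpora.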

Next, Kevin has a conversation with Sam Altman, the CEO of OpenAI, which he co-founded along with Elon Musk. The primary topic is Microsoft's Azure supercomputing resources and how they might be used to tackle the artificial general intelligence problem.

The presentation concludes with some demos of ML/AI on the Edge/IoT (Internet of Things). Most of these concluding demo scenarios have been presented before.

"Build Python Apps in Azure Faster with Visual Studio Code"
by Nicolas Garfinkel

This is a low-level presentation that walks through a demo step-by-step, with an emphasis on the Visual Studio Code integrated development environment (IDE). The demo creates a visualization for GitHub issues using the Python language and Azure Cognitive Services. You can think of Azure Cognitive Services as a large collection of pre-built prediction models for image recognition, and text and natural language processing. Azure Cognitive Services are accessed by sending an input (such as an image) over the Internet to a specified service. The service replies with information about the input, such as what type of animal it is.
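That request/response pattern can be sketched roughly as follows. The endpoint, API version path, and key below are placeholders, and the exact URL and parameters depend on which service you call; check the documentation for the specific service you use.

```python
# Sketch of a Cognitive Services call: POST an input (here, raw image
# bytes) to a REST endpoint and get JSON describing the input back.
# Endpoint and key are placeholders; the URL path varies by service.
import urllib.request

def build_analyze_request(image_bytes, endpoint, key):
    url = endpoint + "/vision/v3.0/analyze?visualFeatures=Description,Tags"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": key,           # auth header for Cognitive Services
            "Content-Type": "application/octet-stream",  # raw image bytes in the body
        },
        method="POST",
    )

req = build_analyze_request(b"\x89PNG...", "https://westus.api.cognitive.microsoft.com", "YOUR_KEY")
print(req.full_url)
# Actually sending the request would look like:
#   with urllib.request.urlopen(req) as resp:
#       result = json.loads(resp.read())  # e.g. result["description"]["captions"]
```

The reply is ordinary JSON, so the hardest part of using these services is usually just parsing the fields you care about.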

Figure 3: Build Python Apps in Azure Faster with Visual Studio Code (source: Microsoft).

For many years, Microsoft Visual Studio was effectively the only IDE for programming Microsoft technology applications. Over time, Visual Studio became larger and more complex. According to several developers we've talked to, starting around 2015 many developers grew increasingly uncomfortable with the size and complexity of Visual Studio. Visual Studio Code is a relatively simple open source IDE that was created in part as a response to that size and complexity.

Although Microsoft Visual Studio still exists and is widely used, Visual Studio Code is growing in popularity among developers. This presentation provides a very good explanation of exactly what Visual Studio Code is and how it can be used to develop ML/AI systems.

"How to Explain Text Models with InterpretML - Deep Dive"
by Minsoo Thigpen

An interesting area of ML/AI is model interpretability. A classic example is a scenario where some sort of ML model is used, in part, to approve or deny a loan application. If a loan application is denied, it's natural to want to understand why the ML model didn't approve the loan request.

There are several ways to go about ML model interpretability. One approach is to use counterfactual reasoning. An example is, "If your income had been $15,000 greater and your debt had been $6,000 less and you lived in Iowa instead of New York, then your loan would have been approved by the ML system."
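A toy sketch of that kind of counterfactual search, with a made-up approval rule standing in for a real model: starting from a denied application, look for the smallest income increase that flips the decision.

```python
# Sketch of counterfactual reasoning for a loan model: given a denied
# application, search for the smallest income increase that flips the
# decision. The approval rule is invented purely for illustration.
def approve(income, debt):
    return income - 0.5 * debt >= 50_000   # hypothetical approval rule

def income_counterfactual(income, debt, step=1_000, limit=200_000):
    """Smallest income increase (in `step` increments) that flips a denial."""
    extra = 0
    while not approve(income + extra, debt) and extra < limit:
        extra += step
    return extra

print(income_counterfactual(income=40_000, debt=10_000))  # -> 15000
```

A real counterfactual explainer searches over several features at once (income, debt, location) and looks for the *nearest* combination that changes the outcome, but the flip-the-decision idea is the same.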

Figure 4: How to Explain Text Models with InterpretML - Deep Dive (source: Microsoft).

Instead of counterfactual reasoning, a different, more direct approach is to use what are sometimes called explainers. In this presentation, Minsoo demos InterpretML, an open source Python library.

The presentation demo is a human resources job application scenario where hiring managers want to analyze the text in a job applicant's resume and be alerted to red flags -- words or phrases that cause concern.

InterpretML can show how the prediction model changes for different subsets of data and can calculate measures of importance for the input predictors (which are words or phrases in the demo).
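One generic way a text explainer can score word importance is occlusion: remove each word in turn and measure how much the model's score changes. The sketch below illustrates that general idea with a made-up red-flag scorer; it is not InterpretML's actual API.

```python
# Generic occlusion sketch: the importance of a word is how much the
# model's score drops when that word is removed. The "model" here is a
# trivial red-flag scorer, a stand-in for a real trained classifier.
RED_FLAGS = {"fired", "lawsuit", "dispute"}

def score(text):
    """Fraction of words that are red flags (stand-in for a real model)."""
    words = text.lower().split()
    return sum(w in RED_FLAGS for w in words) / len(words)

def word_importance(text):
    words = text.split()
    base = score(text)
    importance = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        importance[w] = base - score(reduced)   # score drop when w is removed
    return importance

imp = word_importance("left after lawsuit with employer")
top = max(imp, key=imp.get)
print(top)  # -> lawsuit
```

Libraries like InterpretML wrap this kind of per-feature attribution in dashboards and support several explanation techniques beyond simple occlusion.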

"Teaching Industrial Autonomous Systems"
by Scott Stanfield

This presentation addresses the topic of training industrial robotic systems to perform useful tasks. The highlight of the presentation is a demo of a software system that learns to balance a ping pong ball (OK, "table tennis ball" if you want to be technical) on a moving circular base.

Figure 5: Teaching Industrial Autonomous Systems (source: Microsoft).

It's not feasible to code such a system using an if-then rule-based program. The presentation touches on the related ideas of reinforcement learning, simulation, machine teaching, and feedback loops.
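A minimal sketch of the simulation-plus-feedback-loop idea: a one-dimensional ball on a tilting plate, where each step the controller reads the ball's position and velocity and tilts the plate to push it back toward the center. The hand-tuned gains below stand in for the policy a reinforcement-learning system would discover on its own.

```python
# Toy simulation + feedback loop: a ball on a 1-D tilting plate.
# The controller tilts against the ball's position and velocity; a
# real autonomous system would learn this behavior instead of using
# hand-tuned gains.
def simulate(steps=200, dt=0.05, kp=4.0, kd=2.5):
    x, v = 1.0, 0.0                  # ball starts 1 unit off-center, at rest
    for _ in range(steps):
        tilt = -kp * x - kd * v      # feedback: tilt opposes position and velocity
        v += tilt * dt               # the tilt accelerates the ball
        x += v * dt
    return x

final = simulate()
print(abs(final) < 0.05)  # the ball settles near the center
```

Training in simulation like this, then transferring the learned controller to hardware, is exactly the loop the presentation describes for industrial systems.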

Scott describes Project Bonsai, part of Microsoft's autonomous systems platform, which enables engineers to build complex industrial control systems without data science expertise.

Wrapping Up
In addition to the five Build presentations described in this article, many other interesting talks related to ML/AI with Microsoft and open source technologies are available online. Most of these talks have solid technical content that might be directly or indirectly useful to you. There are also a handful of very informal, chit-chat style recordings that may be of interest.