
Q&A with MIT Scientist Catherine Havasi: AI Predictions, Worries and More

Catherine Havasi is on a mission to help computers understand the world more like a person and less like a machine. A computational linguist by training, she was one of the co-founders with Marvin Minsky and Push Singh of the MIT Media Lab's Open Mind Common Sense project, which led to the creation of the natural language AI program ConceptNet. She's currently a visiting scientist at MIT working on computational creativity, and group director at the Media Lab focused on natural language processing (NLP) and people analytics.

Havasi lives at the cutting edge of AI, but it's a practical edge. In many interviews and conference talks she has expressed her keen desire to make AI and machine learning available to the enterprise. Toward that end, she co-founded Cambridge, Mass.-based Luminoso, and recently signed on as AI Science Lead at Agorai.

You've spent -- and spend -- a lot of your time on research in academia, but you haven't stayed there.
I'm passionate about pushing the frontier of AI, but I'm also very excited about making AI something that makes a difference in the world. I'm working to democratize AI for small and medium businesses -- even larger businesses -- to make it something that makes everybody's life better.

In fact, you've identified customer service as a great way for companies to get started with AI technologies. Why is that the sweet spot?
With customer service, and anything involving a concrete KPI [key performance indicator], it's easy for someone who's not technical to see the impact and understand it. Being able to see that, okay, we changed the KPI that mattered -- that's something that gets people who are skeptical on board. It's a language a lot of people speak.

Also, one of the things that's really important when companies are starting with AI is picking something that's more of an acceptable risk. AI for customer service is a very practical solution targeted at a tiny niche that's easy to operationalize really well. And it's an area where there are fairly well-developed UIs, and you don't have to understand the technology to be able to use it.

Besides advances in processing power and Big Data, why are we making so much progress on AI now?
We started applying it to real business problems. Alongside all the flashy DeepMind demos, AI is being used to save power in datacenters. The same technology that can play video games can save energy and real money for companies. That kind of stuff is going to be very important. The right way to push AI research forward is to understand how it solves problems in the world.

What is "common-sense computing"?
"Common sense" refers to the things we know about the world and take for granted that the people around us know. It's something that allows us to be brief and have reasonable conversations with others. We learn these things very early, and we build complicated things on top of the simple things -- we understand complicated things by making analogies to the simpler concepts. If we want computers to be able to learn and generalize, we need to establish this base layer, which makes it possible for them to understand complicated concepts.
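The "base layer" Havasi describes is what a project like ConceptNet encodes: simple assertions linking concepts through relations such as IsA, UsedFor, and PartOf. As a rough illustration -- the triples and helper functions below are invented for this sketch and are not ConceptNet's actual data or API -- a minimal common-sense store might look like this:

```python
# Minimal sketch of a common-sense knowledge store in the spirit of
# ConceptNet. The triples and helpers here are illustrative inventions,
# not ConceptNet's real API or data.

from collections import defaultdict

# Common-sense assertions as (concept, relation, concept) triples.
ASSERTIONS = [
    ("dog", "IsA", "animal"),
    ("dog", "CapableOf", "bark"),
    ("cat", "IsA", "animal"),
    ("wheel", "PartOf", "car"),
    ("knife", "UsedFor", "cutting"),
]

# Index by (concept, relation) for quick lookups.
index = defaultdict(list)
for subj, rel, obj in ASSERTIONS:
    index[(subj, rel)].append(obj)

def related(concept, relation):
    """Return the concepts linked to `concept` by `relation`."""
    return index[(concept, relation)]

def share_category(a, b):
    """Crude generalization: do two concepts share an IsA parent?"""
    return bool(set(related(a, "IsA")) & set(related(b, "IsA")))

print(related("dog", "CapableOf"))   # ['bark']
print(share_category("dog", "cat"))  # True: both are animals
```

Even this toy version shows the idea: once simple facts are explicit, a program can answer questions it was never directly told, such as whether a dog and a cat belong to the same category.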

What does it mean to "democratize AI"?
If you think about the resources it takes to build and train and test these big, headline-grabbing AI systems -- the data required -- that's not something most people have. Even your average Fortune 1000 company doesn't have those kinds of resources; few organizations outside of a Google or a Facebook do. We need to create AI that an average business can use to compete with the larger companies, and also to make products and services and experiences for their customers. Everyone should have access to this, not just the giant tech companies.

AI, machine learning, and deep learning are topics of discussion that, for the moment, seem separate and distinct from software development. But you've said that you expect that separation to disappear.
This idea, that AI is just going to become part of software -- not something that's in a little box by itself -- is very exciting. You won't be double-clicking on an application on your desktop that's an "AI app." There will be AI in spreadsheets, AI in fraud detection tools. Many, if not all, of your apps will have capabilities enabled by AI. It'll simply be part of the state of the art. It's going to be something that everyone is going to touch.

What should software developers be doing right now to prepare for this eventuality?
There's definitely a lot of interesting material available online, whatever your current skill level, that will help you start learning about AI. Online courses in data science, machine learning, and deep learning are available, and there are lots of other ways to improve your understanding -- getting started with tagging, for example.

If AI is going to be part of every application, then every developer needs to know how to use AI. It doesn't scale any other way. So, usability is key. Developers need APIs and data structures that don't require them to have PhDs in machine learning. And we're seeing that happening.

You're not just passionate about these technologies, you're optimistic about them.
I am about some things. I'm optimistic about what AI can do to make a difference in the world right now.

Predictions about the evolution of AI vary widely, from "we're a hundred years from anything approaching general intelligence" to "the Singularity is right around the corner." Where do you stand on what's coming and when?
I'm not sure it's going to take a hundred years for us to see the AI we see in the movies, but I do believe that it is quite far off. Even among ourselves [AI scientists and researchers] we talk about the kinds of things we have yet to see before we can even have realistic conversations about it.

Is there anything we should be worried about?
Worry about privacy, data security, and bias in AI. Worry about how we're going to innovate in regulated environments while maintaining privacy and security. These are things we have to get right before we start worrying about robots taking over the world.

About the Author

John has been covering the high-tech beat from Silicon Valley and the San Francisco Bay Area for nearly two decades. He serves as Editor-at-Large for Application Development Trends (www.ADTMag.com) and contributes regularly to Redmond Magazine, The Technology Horizons in Education Journal, and Campus Technology. He is the author of more than a dozen books, including The Everything Guide to Social Media; The Everything Computer Book; Blobitecture: Waveform Architecture and Digital Design; John Chambers and the Cisco Way; and Diablo: The Official Strategy Guide.
