News
Apache TVM ML Compiler Framework Becomes Top-Level Project
- By John K. Waters
- 12/02/2020
The Apache TVM open source machine learning (ML) compiler framework for CPUs, GPUs, and specialized accelerators has graduated to Top-Level Project (TLP) status within the Apache Software Foundation (ASF). It is the ASF's first full-stack software and hardware co-optimization project. Its goal is "to enable ML engineers to optimize and run computations efficiently on any hardware backend," the project's website states.
Apache TVM's full-stack framework was designed to enable deep learning applications to deploy across an array of hardware modules, platforms, and systems, including mobile phones, wearables, specialized chips, and embedded devices. Its list of features includes:
- Compilation and minimal runtimes that unlock ML workloads on existing hardware.
- Automatic generation and optimization of tensor operators on a range of backends, including CPUs, GPUs, browsers, microcontrollers, FPGAs, and ASICs.
- Compilation of deep learning models from Keras, Apache MXNet (incubating), PyTorch, TensorFlow, CoreML, and DarkNet, among other libraries, with support for block sparsity, quantization, random forests/classical ML, memory planning, MISRA-C compatibility, Python prototyping, and more.
- Production stacks built with C++, Rust, Java, or Python, deploying deep learning workloads across diverse hardware devices.
The TVM project originated in 2017 as a research project at the University of Washington and entered the Apache Incubator in March 2019. The foundation's Incubator process is the official entry path for projects and code bases whose supporters want them to become part of the ASF. During this first stage of an ASF project's evolution, projects are vetted to ensure that they comply with the ASF's legal standards and that their support communities adhere to the ASF's guiding principles.
As a TLP, TVM becomes a first-class citizen in the ASF and will now be able to receive more contributions from the open source community.
"It is amazing to see how the Apache TVM community members come together and collaborate under The Apache Way," said Tianqi Chen, Vice President of Apache TVM, in a statement. "Together, we are building a solution that allows machine learning engineers to optimize and run computations efficiently on any hardware backend."
Apache TVM is currently in use at a number of organizations, including Alibaba Cloud, AMD, ARM, AWS, Carnegie Mellon University, Cornell University, Edge Cortix, Facebook, Huawei, Intel, ITRI, Microsoft, NVIDIA, Oasis Labs, OctoML, Qualcomm, University of California/Berkeley, UCLA, University of Washington, Xilinx, and more.
"It has been fantastic to see Apache TVM's fast adoption among hardware vendors and ML end-users," said Luis Ceze, CEO of OctoML and Professor at the University of Washington, in a statement, being well on its way to becoming a de-facto industry standard."
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.