The Week in AI: ElevenLabs Voice Isolator, Salesforce's INDICT, Oracle's HeatWave, More

This edition of our weekly roundup of AI products and services includes a new suite of pre-trained LLMs from Meta, the GA release of the Databricks Assistant, a novel framework from Salesforce for enhancing the safety and effectiveness of LLM-generated code, an AI-based tool from ElevenLabs designed to strip background noise for film, podcast, and interview post-production, and more!

Meta introduced the Large Language Model Compiler (LLM Compiler), a suite of pre-trained models aimed at optimizing code. Built on Code Llama, LLM Compiler improves understanding of compiler intermediate representations (IRs), assembly language, and optimization techniques, addressing a critical gap in the application of LLMs to software engineering, particularly in the niche but crucial domain of code and compiler optimization. To spare others the notoriously resource-intensive work of training such models from scratch, Meta trained LLM Compiler on a corpus of 546 billion tokens of LLVM-IR and assembly code and then instruction fine-tuned it to accurately interpret compiler behavior, the company said. Released under a bespoke commercial license to encourage broad reuse, LLM Compiler is available in two sizes: 7 billion and 13 billion parameters. The suite also includes fine-tuned versions designed to excel at optimizing code size and at converting x86_64 and ARM assembly back into LLVM-IR. These versions achieve 77% of the optimization potential of an autotuning search and a 45% round-trip disassembly rate, with a 14% exact match rate, the company said.
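
For readers who want to experiment, the sketch below shows one plausible way to prompt a code-optimization model like LLM Compiler through the Hugging Face transformers library. The model identifier and the [INST] prompt format are assumptions borrowed from Meta's Code Llama releases; check the official model cards for the exact names and prompting conventions.

```python
# Hypothetical sketch: prompting a code-optimization LLM via Hugging Face
# transformers. The model id "facebook/llm-compiler-7b" and the [INST] prompt
# format are assumptions; consult Meta's release for the real ones.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "facebook/llm-compiler-7b"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask the model to emit size-optimized LLVM-IR for a small function.
prompt = (
    "[INST] Optimize the following LLVM-IR for code size:\n"
    "define i32 @add(i32 %a, i32 %b) {\n  %c = add i32 %a, %b\n  ret i32 %c\n}\n[/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```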

ElevenLabs, an AI audio research and deployment company, released Voice Isolator, an AI-based tool designed to strip background noise for film, podcast, and interview post-production. Available now on the ElevenLabs platform, Voice Isolator is a free web app that lets creators remove unwanted ambient noise and sounds from any piece of content, from a film to a podcast or YouTube video. The product works in the post-production stage: users upload the content they want to enhance, the underlying models process the file to detect and remove the unwanted noise, and clear dialogue is extracted as output, the company explained. ElevenLabs says the extracted speech approaches the quality of content recorded in a studio. Voice Isolator accepts files up to 500MB and one hour in length, and the company said it plans to release an API for the tool in the coming weeks.
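
Since the API has not shipped yet, the snippet below is a purely hypothetical illustration of the workflow the company describes: upload a noisy recording, get a cleaned dialogue track back. The endpoint and field names are invented for illustration; only the xi-api-key header mirrors ElevenLabs' existing API convention.

```python
# Hypothetical sketch of a Voice Isolator API call; the endpoint URL and
# multipart field name are invented, since ElevenLabs had not yet published
# the API at the time of writing.
import requests

API_KEY = "your-elevenlabs-api-key"                       # placeholder
ENDPOINT = "https://api.elevenlabs.io/v1/voice-isolator"  # hypothetical URL

with open("interview_raw.wav", "rb") as noisy_audio:
    response = requests.post(
        ENDPOINT,
        headers={"xi-api-key": API_KEY},
        files={"audio": noisy_audio},                     # assumed field name
        timeout=300,
    )

response.raise_for_status()
with open("interview_clean.wav", "wb") as clean_audio:
    clean_audio.write(response.content)                   # isolated dialogue track
```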

Salesforce Research unveiled INDICT, a novel framework designed to enhance the safety and effectiveness of code generated by large language models (LLMs). INDICT stages an internal dialogue between two critics, one concentrating on safety and the other on helpfulness, so that the model receives comprehensive feedback and refines its output iteratively. The critics draw on external knowledge sources, including relevant code snippets, web searches, and code interpreters, to deliver more informed critiques. The framework's operation is divided into two main stages, the company said: preemptive and post-hoc feedback. During the preemptive stage, the safety critic assesses potential risks in the generated code while the helpfulness critic ensures the code meets the intended task requirements, with both querying external knowledge sources to strengthen their evaluations. In the post-hoc stage, the generated code is reviewed after execution, allowing the critics to provide additional feedback based on observed outcomes. This dual-stage approach helps the model anticipate potential issues and learn from execution results, improving future outputs, the company said.
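
The rough Python sketch below illustrates the dual-critic structure described above, with an assumed llm(prompt) helper standing in for any chat-completion call and run_code() standing in for a sandboxed interpreter; it is an illustration of the preemptive/post-hoc loop, not Salesforce's released implementation.

```python
# Illustrative sketch of an INDICT-style dual-critic loop. `llm` and `run_code`
# are assumed callables (LLM completion and sandboxed execution, respectively).
def indict_generate(task: str, llm, run_code, max_rounds: int = 3) -> str:
    code = llm(f"Write code for the task:\n{task}")

    for _ in range(max_rounds):
        # Preemptive stage: both critics review the code before it is run.
        safety = llm(f"Identify security risks in this code:\n{code}")
        helpfulness = llm(f"Does this code fully solve '{task}'? Critique it:\n{code}")
        code = llm(
            f"Revise the code for task '{task}' using this feedback.\n"
            f"Safety critic: {safety}\nHelpfulness critic: {helpfulness}\nCode:\n{code}"
        )

        # Post-hoc stage: execute the candidate and critique the observed result.
        result = run_code(code)  # assumed to return stdout, errors, test results, etc.
        post_hoc = llm(f"Given this execution output, what should change?\n{result}")
        code = llm(f"Apply this post-execution feedback.\nFeedback: {post_hoc}\nCode:\n{code}")

    return code
```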

Meta open-sourced four language models last week that implement an emerging machine learning (ML) approach known as multi-token prediction. Unlike traditional LLMs, which generate the text or code they output one token at a time, Meta's new open-source models generate four tokens at a time, a technique the company believes can make LLMs both faster and more accurate. The four models are geared toward code generation tasks and feature 7 billion parameters each; two were trained on 200 billion tokens' worth of code samples, while the other pair received 1 trillion tokens apiece. In a paper accompanying the models, Meta said it had also developed a yet-unreleased fifth LLM with 13 billion parameters. According to the company's researchers, producing four tokens at once might mitigate the limitations of the "teacher-forcing" approach, which involves assigning a model a task, such as generating a piece of code, and then providing it with the correct answer whenever it makes a mistake.
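
The toy PyTorch sketch below shows the core idea: a shared trunk feeds several output heads, each predicting a different future token, so the model emits four tokens per step instead of one. The layer sizes and structure here are illustrative and are not Meta's released architecture.

```python
# Illustrative multi-token prediction head: one linear head per future position.
import torch
import torch.nn as nn

class MultiTokenHead(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int, n_future: int = 4):
        super().__init__()
        # One output head for each of the next n_future tokens (t+1 .. t+4).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, vocab_size) for _ in range(n_future)]
        )

    def forward(self, trunk_states: torch.Tensor) -> torch.Tensor:
        # trunk_states: (batch, seq_len, hidden_size) from the shared transformer trunk.
        # Returns logits of shape (n_future, batch, seq_len, vocab_size).
        return torch.stack([head(trunk_states) for head in self.heads])

# Toy usage with random trunk activations.
logits = MultiTokenHead(hidden_size=512, vocab_size=32000)(torch.randn(2, 16, 512))
print(logits.shape)  # torch.Size([4, 2, 16, 32000])
```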

Oracle launched HeatWave GenAI, enhancing its HeatWave database with generative AI capabilities aimed at helping enterprises combine proprietary data with AI cost-effectively. The new features are available at no extra cost to existing HeatWave customers and include in-database large language models (LLMs), automated vector storage, scalable vector search, and HeatWave Chat for natural language interaction. HeatWave, Oracle's MySQL-based database service, was designed to enable data querying and analysis without the need for external environments, and the integration of in-database LLMs, including models from Cohere, Meta, and Mistral AI, aims to streamline generative AI development by eliminating the need for data export. Oracle said the release adds substantial built-in AI capability to the database and that it plans to continue evolving the platform based on customer feedback to meet future enterprise needs.
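
For a sense of what "in-database" means in practice, the sketch below calls a generation routine from Python over a standard MySQL connection. The routine name sys.ML_GENERATE, its option keys, and the model identifier are assumptions to be verified against Oracle's HeatWave GenAI documentation.

```python
# Hypothetical sketch: invoking an in-database LLM over a MySQL connection.
# The sys.ML_GENERATE routine, its JSON options, and the model id are assumed;
# check Oracle's HeatWave GenAI docs for the exact interface.
import mysql.connector

conn = mysql.connector.connect(
    host="your-heatwave-host", user="admin", password="...", database="demo"
)
cursor = conn.cursor()

cursor.execute(
    'SELECT sys.ML_GENERATE(%s, JSON_OBJECT("task", "generation", '
    '"model_id", "mistral-7b-instruct-v1"))',
    ("Summarize last quarter's support tickets in two sentences.",),
)
print(cursor.fetchone()[0])  # generated text returned by the in-database model

cursor.close()
conn.close()
```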

Databricks officially launched its generative AI assistant, Databricks Assistant, making it generally available to all customers. The tool, first announced in July 2023, allows users to prepare and analyze data using natural language, reducing the need for coding skills. Alongside it, the AI-Generated Comments feature, part of the Databricks Unity Catalog, is now also generally available; it uses AI to describe data tables and columns, which in turn improves the accuracy of AI-generated responses. The company said Databricks Assistant attracted 150,000 monthly active users during its preview phase and that those users experienced productivity gains of up to 50%. Both tools are designed to "democratize" data management by reducing the need for technical skills, although expert oversight remains essential to ensure accuracy. Databricks, known for pioneering the data lakehouse architecture with its Delta Lake storage format, continues to expand its platform capabilities; its June 2023 acquisition of MosaicML strengthened its AI and machine learning offerings, positioning it as a leader in the generative AI space.
