
The Week in AI: Google Bard Now Multilingual, Amazon Debuts Rufus, OpenAI Launches New Models, More

It's been another lively week in AI. Here's what pinged our radar:

Google Bard's Gemini Upgrade Goes Global
Google rolled out the Gemini upgrade to its Bard chatbot in December, but the capabilities of the multimodal large language model were available only in English. This week, the company announced that it is now providing global access to the capabilities of Gemini Pro.

Gemini Pro in Bard will now be available in more than 40 languages and more than 230 countries and territories.

Google unveiled the Gemini model in December 2023, optimized in three sizes (Ultra, Pro, and Nano) so it could run on a range of platforms, from data centers to mobile devices.

"Since we know people want the ability to corroborate Bard’s responses," Bard product lead Jack Krawczyk said in a blog post, "we’re also expanding our double-check feature, which is already used by millions of people in English, to more than 40 languages."

OpenAI Launches Two New ML Embedding Models
OpenAI unveiled two new embedding models this week: a smaller and highly efficient text-embedding-3-small model, and a larger and more powerful text-embedding-3-large.

An embedding is a numerical representation of a concept, expressed as a sequence of numbers. Embeddings make it easier for machine learning (ML) models and other algorithms to understand the relationships between pieces of content and to perform tasks such as clustering or retrieval.
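
By way of illustration, here is a minimal sketch (ours, not OpenAI's) of what that looks like with OpenAI's Python SDK: it requests embeddings for two hypothetical strings using the new text-embedding-3-small model and compares them by cosine similarity.

# Minimal sketch: embed two hypothetical strings with text-embedding-3-small
# and compare them by cosine similarity (values near 1.0 mean closely related).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

texts = [
    "How do I return a defective blender?",
    "What is the policy for sending back broken appliances?",
]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
a, b = (np.array(item.embedding) for item in resp.data)

# Cosine similarity between the two embedding vectors
similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {similarity:.3f}")

Swapping in text-embedding-3-large would be a one-line change to the model argument; the resulting vectors are what get stored and ranked in the clustering and retrieval use cases described above.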

text-embedding-3-small provides a significant upgrade over its predecessor, the text-embedding-ada-002 model released in December 2022. OpenAI says it's not deprecating text-embedding-ada-002. "[S]o while we recommend the newer model, customers are welcome to continue using the previous generation model," the company said in a blog post.

The company says text-embedding-3-large is its best performing model. "Comparing text-embedding-ada-002 to text-embedding-3-large: on MIRACL, the average score has increased from 31.4% to 54.9%, while on MTEB, the average score has increased from 61.0% to 64.6%," the company said.

Amazon Launches "Rufus" Shopping Assistant
Amazon launched a new generative-AI-powered conversational "shopping experience" within its mobile app. Dubbed Rufus, the new "expert shopper" has been trained on the company's massive product catalog, customer reviews, community Q&As, and "information from across the web." It launched this week in beta to a small subset of customers, and it will progressively roll out to additional U.S. customers in the coming weeks, the company said.

Google Maps Gets a GenAI Boost
This week, Google Maps introduced a generative AI feature to enhance place discovery. Initially rolling out to select U.S. Local Guides, the feature uses large language models to analyze detailed information on more than 250 million places. It aims to make personalized suggestions based on user queries, taking into account such factors as preferences and reviews from the Maps community.

"This experimental capability introduces a whole new way for people to more easily discover places and explore the world with Maps," said Miriam Daniel, Google Maps VP and GM, in a blog post. "This is just the beginning of how we’re supercharging Maps with generative AI, and we’re excited to start with our passionate community of Local Guides as we shape the future of Maps together."

LangChain Adds Tag and Metadata Grouping to LLM Dev Platform
LangChain, the company behind the open-source LangChain framework, has added tag and metadata grouping capabilities to its LangSmith platform for building production-grade large-language-model (LLM) applications.

LangSmith was designed to allow developers to debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and to integrate seamlessly with LangChain. The new set of monitoring features makes it possible to A/B test and compare different versions of an LLM application. LangSmith users can pass a dictionary of tags and/or metadata in the invoke call to any Runnable; the same concept works in TypeScript, the company says. This capability lets users quickly get a sense of how different versions of an application are performing relative to each other on important metrics such as latency, time-to-first-token, and feedback scores.

"Simply send up metadata or tags along with your runs, then head to the monitoring section in any project details page and group by metadata or tags," the company explained in a blog post.

Info-Tech Study Shows GenAI Could Boost Chem and Pharma Productivity
The industry watchers at Info-Tech Research Group, an up-and-coming IT research and advisory firm focused on the needs of CIOs and IT leaders, say generative AI has the potential to "revolutionize operations and boost productivity" in the chemical and pharmaceutical manufacturing industries.

In a newly published paper ("Generative AI Use Case Library for the Chemical & Pharmaceutical Manufacturing Industry"), the analysts encourage manufacturers in these sectors to "leverage generative AI, with its ability to analyze vast amounts of data, generate optimized solutions, and enhance decision-making processes, to unlock new opportunities for growth, efficiency, innovation, and ultimately, competitive differentiation."

The firm offers a kind of blueprint in the report, highlighting the specific functional areas within these industries where gen AI can have a significant impact and help companies stay competitive. It covers the potential impact of gen AI on the supply chain, product and materials design, new product introduction, generative chemistry, and clinical trials.

"Today, manufacturing leaders are striving to make sense of the new realm of Gen AI, with its endless acronyms and complex jargon, while seeking ways to stay ahead of their competition," said Shreyas Shukla, principal research director at Info-Tech Research Group, in a statement. "Gen AI is a game changer – a powerful tool and partner in innovation that can amplify human capabilities and alleviate the burden of mundane tasks, enabling a stronger focus on more strategic pursuits."

Cognizant Flowsource Gen AI Dev Platform Released
Cognizant, a leading provider of consulting, information technology, and business process outsourcing services, this week announced Flowsource, a generative AI (gen AI)-enabled platform that aims to fuel the next generation of software engineering for enterprises.

Cognizant Flowsource was designed to provide a unified engineering platform that connects the work of all software delivery stakeholders and the development community. Gen AI-enabled tooling and process orchestration is woven throughout the developer experience, the company says, allowing team members to work faster and with more focus in a measurable and quantifiable manner. Teams can, for example, self-serve with templates to provision code and environments, automate testing and documentation, leverage enterprise knowledge bases to drive code and component reuse, and speed coding processes with trained copilots.

The platform integrates all stages of the software development lifecycle and incorporates digital assets and tools to help cross-functional engineering teams deliver high-quality code faster, with increased control and transparency.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
