News
The Week in AI: OpenAI's GPT-4 Turbo with Vision, Meta's Next MTIA, Azip's AI-Design Pipeline, More
- By Pure AI Editors
- 04/15/2024
This edition of our weekly roundup of AI product and services news includes OpenAI's announcement that GPT-4 Turbo with Vision is now available to the public through its API, UserTesting's release of a new AI-powered survey solution, Sama's new Red Team Service, and a new tool from Azip that creates AI models using AI models, and more.
OpenAI made its enhanced GPT-4 Turbo with Vision model generally available to the public last week through its API, according to an announcement on the company's X account. GPT-4 Turbo with Vision now allows the model to take in images and answer questions about them. It allows for JSON mode and function calling, which generates a JSON code snippet. This feature can automate tasks within connected apps, such as executing purchases or sending emails, to streamline user workflows. This development represents a significant upgrade to the capabilities of the GPT-4 Turbo large language model (LLM), the company said. OpenAI offered several use cases for the GPT-4 Turbo with Vision, including:
- Cognition's AI coding agent, Devin, which automates coding tasks.
- The Healthify app, which offers nutritional assessments and advice based on meal photos.
- TLDraw, a UK-based startup, utilizes the technology to transform drawings on its virtual whiteboard into functional websites.
Meta Platforms, Inc. shared the details of the next generation of its Meta Training and Inference Accelerator (MTIA), which the company introduced last year as part of its family of custom-made chips for Meta AI workloads. This version offers significant performance improvements over its predecessor, MTIA v1, the company said, and helps power its ranking and recommendation ads models. MTIA v1 was "an important step in improving the compute efficiency of our infrastructure and better supporting our software developers as they build AI models that will facilitate new and better user experiences," the company said. The next version of MTIA is part of Meta's broader full-stack development program for custom, domain-specific silicon created specifically to address its workloads and systems. This new version more than doubles the compute and memory bandwidth of the previous solution, the company said, "while maintaining our close tie-in to our workloads."
Data annotation and model validation startup Sama introduced a new service called Sama Red Team, designed to enhance the safety and reliability of AI models. The service employs machine learning engineers, applied scientists, and human-AI interaction designers to ensure AI models are fair, secure, and legally compliant. The Sama Red Team pre-emptively identifies and addresses generative AI model issues around public safety, privacy, and legal adherence. It conducts rigorous testing across text, image, and voice search modalities to uncover potential vulnerabilities before public deployment. Sama Red Team tests models on four key competencies: fairness, privacy, public safety, and compliance.
- In fairness testing, teams simulate real-world scenarios that may compromise a model’s built-in safeguards and result in offensive or discriminatory content.
- For privacy testing, Sama experts craft prompts designed to make a model reveal sensitive data, such as Personal Identifiable Information (PII), passwords or proprietary information about the model itself.
- In public safety testing, teams act as adversaries and mimic real-world threats to safety, including cyberattacks, security breaches or mass-casualty events.
- For compliance testing, Sama Red Team simulates scenarios to test a model’s ability to detect and prevent unlawful activities such as copyright infringement or unlawful impersonation.
UserTesting, Inc., a SaaS leader in experience research and insights, introduced a new feedback engine alongside improvements to its AI capabilities, designed to enhance the way organizations collect and analyze customer feedback. The newly launched UserTesting FeedBack Engine provides flexible and comprehensive tools for capturing audience perspectives throughout the product development lifecycle. This includes AI-powered surveys and extensive testing capabilities, such as live interviews and usability testing, which allow for a deeper understanding of customer experiences. The company also expanded its AI features with the introduction of AI themes that automate the analysis of open-ended survey responses. The company also enhanced its audience solutions to ensure access to high-quality research participants through an extended network of more than 40 partners.
Azip, Inc., a provider of AI-for-edge solutions, introduced a fully automated AI-design pipeline that allows users to create AI models using AI models. Dubbed "Azipline," it was developed through a collaboration among researchers from MIT, UC Berkeley, UC Davis, and UC San Diego to provide an AI-design automation method that employs a variety of cutting-edge techniques, including unsupervised learning, generative models, and transfer learning. This method integrates data-centric, model-centric, and system-centric approaches to enhance efficiency in AI applications, the company said. One advancement this approach provides is a fully automated keyword-spotting design pipeline, which benefits from generative models and scalable neural architecture searches, enabling the development of high-performing models optimized for varied acoustic environments.
OpenAI also announced its first office in Asia coinciding with the release of GPT-4 custom model optimized for the Japanese language. Among other advantages, a local presence moves the company closer to such businesses as Daikin, Rakuten, and Toyota Connected, all of which are using ChatGPT Enterprise to automate complex business processes, assist in data analysis, and optimize internal reporting, the company said. The new OpenAI Japan will be led by its new president, Tadao Nagasaki. OpenAI is kicking off the new effort by providing local businesses with early access to the Japanese GPT-4. This custom model offers improved performance in translating and summarizing Japanese text, the company said, and operates up to three times faster than its predecessor. OpenAI says it will release the custom model more broadly in the API in the coming months.
NetApp, the intelligent data infrastructure company, released a preview of its GenAI toolkit reference architecture for retrieval-augmented generation (RAG) operations using Google Cloud Vertex AI platform. The toolkit, along with the accompanying reference architecture, speeds the implementation of RAG operations while enabling secure, consistent, and automated workflows that securely connect data stored in NetApp Volumes with Google Cloud Vertex AI platform, the company said. The result is a greater ability to generate unique, high-quality, and ultra-relevant insights and automations. The NetApp GenAI toolkit, which will be available as a public preview in the second half of 2024, helps optimize RAG processes with unique capabilities, including:
- NetApp ONTAP allows customers to include data from any environment to power their RAG efforts with common operational processes.
- NetApp’s BlueXP classification service automatically tags data to support streamlined data cleansing for both the ingest and inferencing phases of the data pipeline, ensuring that the right data is used for queries and that sensitive data is not exposed to the model out of policy.
- ONTAP Snapshot delivers near-instant creation of space-efficient, in-place copies of vector stores and databases, allowing immediate roll back to a previous version if data is corrupted or forward if point-in-time analysis is needed.
- ONTAP FlexClone technology can create instant clones of vector index stores to safely make uniquely relevant data instantly available for different queries for different users, without impacting the core production data.