How Will LLM Updates Affect Azure OpenAI? Microsoft Explains

Microsoft has newly added the GPT 3.5 Turbo ("gpt-35-turbo") and GPT 3.5 Turbo-16k ("gpt-35-turbo-16k") models to its Azure OpenAI service.

Microsoft this week shed some light on how updates to large language models (LLMs) will be reflected in its Azure OpenAI service, how admins can manage those updates, and some privacy considerations when using Azure OpenAI.  

The LLMs created by generative artificial intelligence (AI) company OpenAI, and used by Microsoft in its Azure services, are periodically updated and carry version numbers. Microsoft on Friday announced that new 0613-version updates of the GPT 3.5 and GPT 4 models have been released for use in the Azure OpenAI service.

Older large language model versions, such as GPT 3.5-Turbo version 0301 and GPT 4 version 0314, are set to "expire no earlier than October 15th, 2023," per Microsoft's "Azure OpenAI Service Models" document.

Both GPT 3.5 and GPT 4 can "understand" user text prompts and generate "natural language" responses to those prompts. GPT 4 can additionally generate code, although Azure OpenAI also supports Codex models that are specifically designed for generating code from text prompts.
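For developers curious what submitting a prompt to one of these chat models looks like in practice, here is a minimal sketch of calling a deployed Azure OpenAI chat model over its REST endpoint using only the Python standard library. The resource name, deployment name, API version and environment variable below are illustrative assumptions, not values from Microsoft's announcement.

```python
# Sketch of a call to an Azure OpenAI chat deployment's REST endpoint.
# RESOURCE, DEPLOYMENT and API_VERSION are placeholders -- substitute
# your own Azure OpenAI resource and deployment names.
import json
import os
import urllib.request

RESOURCE = "YOUR-RESOURCE"      # Azure OpenAI resource name (placeholder)
DEPLOYMENT = "gpt-35-turbo"     # your deployment name (placeholder)
API_VERSION = "2023-05-15"      # an API version current in mid-2023

def build_request_body(prompt: str) -> dict:
    """Wrap a plain text prompt in the chat-message format the API expects."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ]
    }

def ask(prompt: str) -> str:
    """POST the prompt to the deployed model and return the text reply."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request_body(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "api-key": os.environ["AZURE_OPENAI_KEY"],  # assumed env var
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Note that the request targets a *deployment* name rather than a raw model name, which is what makes version pinning (discussed below) possible per deployment.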

More nuances about the many available models in the Azure OpenAI service can be found in Microsoft's "Service Models" document.

New GPT Models
Microsoft has newly added the GPT 3.5 Turbo ("gpt-35-turbo") and GPT 3.5 Turbo-16k ("gpt-35-turbo-16k") models to its Azure OpenAI service. Both of those models are now at version 0613.

Microsoft also newly added 0613 versions of GPT 4 and GPT 4-32k to the Azure OpenAI service. However, these more accurate GPT 4 models aren't readily available to the public. GPT 4 use in the Azure OpenAI service is available only by request, with prospective users needing to fill out this form.

Automatic updates to these new 0613 versions of GPT 3.5 and GPT 4 will arrive "in two weeks," Microsoft indicated.

Because new versions of GPT 3.5 and GPT 4 are arriving, the older versions will expire and an automatic upgrade will be triggered. Organizations that don't want such automatic upgrades to occur will have to "set the model upgrade option to expire through the API," although Microsoft won't be publishing guidance on how to do that until "September 1."
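Since Microsoft's own guidance on the API option was still forthcoming at the time of the announcement, the following is only a sketch of the kind of deployment properties an organization might send to the Azure management REST API to pin a model version. The field names (notably "versionUpgradeOption") and its possible values reflect the Azure Cognitive Services deployment schema as commonly documented, but treat them as assumptions here.

```python
# Sketch of a deployment body that pins an Azure OpenAI model version.
# Field names, especially "versionUpgradeOption", are assumptions based
# on the Azure Cognitive Services deployment schema.
import json

def pinned_deployment(model_name: str, version: str,
                      upgrade_option: str = "NoAutoUpgrade") -> dict:
    """Build a deployment body that pins `model_name` to `version`.

    Plausible upgrade options include "NoAutoUpgrade" (pin the version),
    "OnceCurrentVersionExpired", and "OnceNewDefaultVersionAvailable"
    (the auto-update default).
    """
    return {
        "sku": {"name": "Standard", "capacity": 120},
        "properties": {
            "model": {
                "format": "OpenAI",
                "name": model_name,
                "version": version,
            },
            "versionUpgradeOption": upgrade_option,
        },
    }

print(json.dumps(pinned_deployment("gpt-35-turbo", "0613"), indent=2))
```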

Controlling Model Updates
Organizations using the Azure OpenAI service have some control over the model updates.

It's possible to "pin" a particular model version, or organizations can use Microsoft's auto-update default setting. These settings can be managed using Azure AI Studio, which also has a function that will list the deprecation dates for particular model versions.

The auto-update capability is available only for "select model deployments," Microsoft's "Service Models" document explained. While Microsoft generally recommended using the default setting, organizations using "embeddings" should opt to choose when upgrades occur.

An "embedding" is a special data format used by certain functionalities -- namely, Microsoft offers "similarity, text search and code search" embeddings. This data format is said to be "easily utilized by machine learning models and algorithms," per the "Service Models" document.
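Concretely, an embedding is a vector of floating-point numbers, and similarity search compares those vectors with a distance measure such as cosine similarity. Here is a minimal sketch using toy three-dimensional vectors and hypothetical file names (real Azure OpenAI embeddings are much larger; the ada-002 model, for example, produces 1,536-dimensional vectors):

```python
# Toy illustration of similarity search over embedding vectors.
# The vectors and file names below are made up for illustration.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these are embeddings of two stored documents.
doc_vectors = {
    "contract.txt": [0.9, 0.1, 0.0],
    "recipe.txt": [0.0, 0.2, 0.9],
}

query = [0.8, 0.2, 0.1]  # pretend embedding of a legal search query

# The document whose embedding points in the most similar direction wins.
best = max(doc_vectors, key=lambda name: cosine_similarity(query, doc_vectors[name]))
print(best)  # -> contract.txt
```

This is why pinning matters for embeddings: vectors produced by one model version aren't directly comparable with vectors produced by another, so an unplanned upgrade can silently break a stored search index.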

Azure OpenAI Use and Privacy
Microsoft repeated privacy assurances regarding the use of Azure OpenAI services in its Friday announcement. Microsoft doesn't use customer-supplied data to improve its products or fine-tune its models, and doesn't disclose customer data to "third parties."

However, the prompts submitted by Azure OpenAI users are still subject to Microsoft's "abuse monitoring" process, which includes human reviews by Microsoft employees when a prompt or AI-generated completion content gets flagged.  

Microsoft additionally stores such customer-generated information for 30 days, which is done to detect and address abuses. Organizations that don't want such oversight from Microsoft have to fill out a form to request an exemption from Microsoft's abuse monitoring process.

Here's how Microsoft explained the storing of customer-supplied information for abuse monitoring, per its "Data, Privacy and Security for Azure OpenAI Service" document:

To detect and mitigate abuse, Azure OpenAI stores all prompts and generated content securely for up to thirty (30) days. (No prompts or completions are stored if the customer is approved for and elects to configure abuse monitoring off, as described below.)

The document further explained that the models themselves are stateless, meaning that "no prompts or generations are stored in the model."

Microsoft's assurances on Azure OpenAI and privacy seem to be plainly stated, although other messaging, such as this announcement that organizations are going to need zero trust security to use AI, raises some doubts. Microsoft also recently floated the idea that data governance tools may be needed to use emerging AI tools such as Microsoft 365 Copilot, which is now at the limited private preview stage.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.