News
Microsoft Phi-3 Updates Include Fine-Tuning, Serverless Perks
Organizations can now customize Microsoft's Phi-3 small language models (SLMs) using their proprietary data.
Microsoft described the new capability in a blog post last week, among other updates to its family of Phi-3 SLMs. Released in April, Phi-3 models are designed to be lightweight enough to run on edge devices, consuming fewer compute resources than full-size large language models (LLMs).
There are four flavors of Phi-3 models. In order of smallest parameter size to largest, they are Phi-3-mini, Phi-3-vision, Phi-3-small and Phi-3-medium. Of these, only Phi-3-vision is multimodal (i.e., supports image-based inputs in addition to text). The others are text-only.
As of last week, Phi-3-mini and Phi-3-medium can be fine-tuned on an organization's own data, making their outputs more contextually relevant to specific business needs than a general-purpose model out of the box.
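Fine-tuning workflows like this typically start from a supervised dataset of example conversations drawn from the organization's data. As a minimal sketch, the snippet below serializes training examples in the widely used JSON Lines chat format; the exact schema Azure AI's fine-tuning service expects may differ, and the Q&A pairs are hypothetical.

```python
import json

def make_example(user_text: str, assistant_text: str,
                 system_text: str = "You are a helpful support assistant.") -> str:
    """Serialize one training example as a JSON line in the common
    'messages' chat format. This schema is a widespread convention for
    chat fine-tuning data, not a verified Azure AI contract."""
    example = {
        "messages": [
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]
    }
    return json.dumps(example)

# Build a tiny dataset from hypothetical proprietary Q&A pairs.
pairs = [
    ("How do I reset my badge?", "Visit the security desk on floor 2 with photo ID."),
    ("What is the VPN hostname?", "Use vpn.example.com with your corporate credentials."),
]
jsonl = "\n".join(make_example(q, a) for q, a in pairs)
print(jsonl)
```

In practice, a dataset of this shape is uploaded to the fine-tuning service, which produces a customized checkpoint of the base Phi-3 model.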
Per Microsoft's blog, "Given their small compute footprint, cloud and edge compatibility, Phi-3 models are well suited for fine-tuning to improve base model performance across a variety of scenarios including learning a new skill or a task (e.g. tutoring) or enhancing consistency and quality of the response (e.g. tone or style of responses in chat/Q&A)."
The new fine-tuning capability comes on the heels of a June update to Phi-3-mini (both the 4,000- and 128,000-token context length versions) that brought improvements to "instruction following, structured output, and reasoning."
Also new to Phi-3 as of last week is serverless endpoint support, starting with Phi-3-small. With a serverless endpoint, organizations can run Phi-3-small on the Azure AI platform as a managed service without provisioning, scaling or maintaining the underlying server infrastructure themselves.
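A serverless endpoint is consumed over HTTPS rather than by managing model servers. The sketch below assembles a chat-completions-style request against a placeholder endpoint; the URL path, the `api-key` header, and the payload schema are assumptions modeled on common chat APIs, and should be checked against a deployment's own documentation before use.

```python
import json
import urllib.request

def build_request(endpoint: str, api_key: str, prompt: str,
                  max_tokens: int = 256) -> urllib.request.Request:
    """Assemble an HTTPS request to a chat-style serverless endpoint.

    The '/v1/chat/completions' path, 'api-key' header, and payload
    shape are assumptions, not a verified Azure AI contract."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

# Assemble (but do not send) a request against a placeholder endpoint.
req = build_request("https://my-phi3-small.example.inference.ai.azure.com",
                    "YOUR_API_KEY", "Summarize our Q3 support tickets.")
print(req.full_url)
```

Because the endpoint is fully managed, the caller's responsibility ends at the HTTP request; capacity, scaling and model hosting are handled by the platform.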
Microsoft plans to extend this serverless endpoint capability to Phi-3-vision "soon."
More information on the Phi-3 model family is available here.