Microsoft has unveiled a new member of its Phi family of generative AI models. The model, named Phi-4, improves on earlier Phi models in several areas, particularly in solving math problems.
Microsoft attributes the gains in part to higher-quality training data. As of Thursday night, however, Phi-4 is accessible only in a very limited capacity.
It is available exclusively on Azure AI Foundry, Microsoft's newly launched development platform, and solely for research purposes under a Microsoft research license agreement.
Phi-4 is Microsoft's latest small language model, with 14 billion parameters, and it competes with other small models such as Gemini 2.0 Flash, GPT-4o mini, and Claude 3.5 Haiku.
These smaller AI models are generally faster and more cost-effective to run, and their performance has improved steadily over the past few years.
Microsoft credits Phi-4's improved performance to the use of "high-quality synthetic datasets" alongside high-quality human-generated content, together with some undisclosed post-training refinements.
Many AI labs are now focusing on what they can achieve with synthetic data and post-training techniques. Alexandr Wang, CEO of Scale AI, remarked in a tweet on Thursday that the industry has reached a pre-training data wall, echoing several recent reports on the subject.
Notably, Phi-4 is the first model in Microsoft's Phi series to launch after the departure of Sébastien Bubeck, formerly a vice president of AI at Microsoft and a key figure in the development of the Phi models. Bubeck left the company in October and subsequently joined OpenAI.