Microsoft’s most capable new Phi 4 AI model rivals the performance of far larger systems

Microsoft launched several new openly licensed AI models on Wednesday, the most capable of which is competitive with OpenAI's o3-mini on at least one benchmark.
All of the new models (Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus) are "reasoning" models, meaning they are able to spend more time fact-checking solutions to complex problems. They expand Microsoft's Phi "small model" family, which the company launched a year ago to offer a foundation for AI developers building applications at the edge.
Phi 4 mini reasoning was trained on roughly one million synthetic math problems generated by DeepSeek's R1 reasoning model. Around 3.8 billion parameters in size, Phi 4 mini reasoning is designed for educational applications, Microsoft says, such as tutoring on lightweight devices.
Parameters roughly correspond to a model's problem-solving ability, and models with more parameters generally perform better than those with fewer.
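As a rough illustration of what those parameter counts measure (a hypothetical sketch, not code from Microsoft), a model's parameters are its learned weights and biases, and for a simple fully connected network they can be tallied layer by layer:

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases in a toy fully connected network.

    layer_sizes: e.g. [input_dim, hidden_dim, output_dim]
    """
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out  # weight matrix between the two layers
        total += fan_out           # bias vector for the output layer
    return total

# A tiny three-layer network: 4*8 + 8 + 8*2 + 2 = 58 parameters
print(mlp_param_count([4, 8, 2]))  # -> 58
```

Scaling the same idea up to billions of weights is what separates a 3.8-billion-parameter model like Phi 4 mini reasoning from a 671-billion-parameter model like DeepSeek's R1.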
Phi 4 reasoning, a 14-billion-parameter model, was trained using "high-quality" web data as well as "curated demonstrations" from OpenAI's aforementioned o3-mini. It is best suited for math, science, and coding applications, according to Microsoft.
As for Phi 4 reasoning plus, it is Microsoft's previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. Microsoft claims that Phi 4 reasoning plus approaches the performance levels of R1, a model with far more parameters (671 billion). The company's internal benchmarking also has Phi 4 reasoning plus matching o3-mini on OmniMath, a test of math skills.
Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus are available on the AI dev platform Hugging Face, accompanied by detailed technical reports.
"Using distillation, reinforcement learning, and high-quality data, these [new] models are small enough for low-latency environments while maintaining strong reasoning capabilities that rival much larger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently," Microsoft wrote in a blog post.
2025-05-01 03:23:00