Microsoft Charts a New AI Course with Home-Grown Models, Reducing Reliance on OpenAI

Credit: microsoft.ai

Key Points

  • Microsoft introduced its first homegrown AI models, MAI-Voice-1 and MAI-1-preview, to reduce reliance on OpenAI.

  • The MAI-1-preview model was trained in-house on roughly 15,000 Nvidia H100 GPUs and is being publicly tested on LMArena.

  • Mustafa Suleyman, co-founder of DeepMind, leads Microsoft’s consumer AI efforts, intensifying competition with OpenAI.

  • Microsoft aims to orchestrate a range of specialized models, potentially reducing its dependence on OpenAI.

Microsoft’s AI division announced its first homegrown AI models, MAI-Voice-1 and MAI-1-preview, a strategic move designed to lessen its deep reliance on partner OpenAI and integrate its own technology directly into products like its Copilot assistant.

  • Under the hood: The new MAI-1-preview is Microsoft’s first foundation model trained end-to-end in-house. It was developed using a cluster of approximately 15,000 Nvidia H100 GPUs and is already being publicly tested on the AI benchmark site LMArena. The company plans to roll the model out within its Copilot assistant—a product previously powered entirely by OpenAI’s tech.

  • The Suleyman effect: The move follows the high-profile hiring of Mustafa Suleyman, co-founder of DeepMind, to lead the company’s consumer AI efforts. Developing its own foundation models puts Microsoft in more direct competition with OpenAI, a company in which it has invested more than $13 billion but which it also listed as a competitor in its annual report last year.

While Microsoft says it will continue to use a mix of models, its ambitions are clear. The company’s goal is to “orchestrate a range of specialized models,” a strategy that could eventually see it move away from its deep reliance on OpenAI. The effort is backed by serious hardware, including a next-generation Nvidia GB200 chip cluster, cementing Microsoft’s commitment to building AI infrastructure from the silicon up.

  • The wider view: Microsoft is building large-scale models to rival OpenAI’s, but it is also playing a different game. The company recently detailed its Phi-4 family of small language models, designed for high performance on smaller devices, signaling a dual strategy that spans both massive cloud AI and efficient, on-device intelligence.