Microsoft has re-prioritised its AI strategy, increasing demand for accelerated computing capacity and re-engaging with OpenAI. The company is pursuing a multi-faceted approach to secure near-term compute resources, including self-build data centres, leasing, and exploring “Neocloud” options.
This shift follows a reported slowdown in data centre construction and a reassessment of Microsoft's commitments to OpenAI roughly a year ago. During that period, OpenAI reportedly diversified its compute contracts, signing directly with Oracle, CoreWeave, Nscale, SB Energy, Amazon, and Google to secure scalable capacity for its compute-intensive large language models. Microsoft's renewed focus suggests a recognition that substantial, dedicated AI infrastructure is critical to maintaining its competitive position.
The company's current strategy involves securing capacity through several channels: accelerating previously slowed self-build data centre projects, leasing existing infrastructure, pursuing "Neocloud" arrangements (capacity from specialised GPU cloud providers rather than traditional hyperscale builds), and seeking sites in less conventional locations to meet immediate demand. These routes aim to sidestep the bottlenecks of large-scale data centre construction and bring capacity online faster. Specific figures on these expansion efforts are available to subscribers of the SemiAnalysis Datacenter Model.
Microsoft's AI investments are reportedly reaccelerating, driven by strong demand for accelerated computing. The SemiAnalysis Tokenomics model forecasts continued Azure growth in the coming quarters and years, with Microsoft participating across the entire AI Token Economic Stack: from foundational hardware and infrastructure, through the AI models themselves, to the economics of deploying and monetising them at scale.
Accelerated Compute Demand
In hardware, Microsoft has access to the intellectual property for custom AI chips developed by OpenAI, ASICs whose current development trajectory is considered significant. Microsoft may use these chips to serve OpenAI models, mirroring its approach of accessing OpenAI's foundation models while also developing its own. This dual strategy lets Microsoft leverage its partner's cutting-edge work while building internal expertise and proprietary solutions. Custom ASICs are a crucial lever for optimising performance and reducing cost per inference, a key factor in the economics of AI services at scale.
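The cost-per-inference lever mentioned above can be made concrete with a rough back-of-the-envelope sketch. All numbers below are hypothetical placeholders, not Microsoft, OpenAI, or SemiAnalysis figures; the point is only how accelerator hourly cost and sustained throughput combine into a per-token serving cost.

```python
# Back-of-the-envelope serving-cost model. Every input value is a
# hypothetical placeholder; none comes from Microsoft, OpenAI, or SemiAnalysis.

def cost_per_million_tokens(accel_cost_per_hour: float,
                            tokens_per_second: float) -> float:
    """Serving cost (USD) per one million output tokens on one accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return accel_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical scenario A: a merchant GPU at $2.50/hr sustaining 1,500 tok/s.
gpu_cost = cost_per_million_tokens(2.50, 1_500)

# Hypothetical scenario B: a custom ASIC at $1.20/hr sustaining 2,000 tok/s,
# illustrating why in-house silicon can shift the serving economics.
asic_cost = cost_per_million_tokens(1.20, 2_000)

print(f"GPU:  ${gpu_cost:.3f} per million tokens")   # ≈ $0.463
print(f"ASIC: ${asic_cost:.3f} per million tokens")  # ≈ $0.167
```

Under these made-up inputs, the ASIC serves tokens at roughly a third of the GPU's cost, which is the kind of spread that makes custom silicon programmes worth the investment.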
Microsoft AI is building its own foundation models, pursuing vertical integration: reducing reliance on third-party components and delivering AI services at lower cost than competitors. This effort includes the in-house Maia chip, designed for Microsoft's specific workloads and tight integration with Azure, and potentially offering advantages in power efficiency. However, SemiAnalysis currently views OpenAI's ASIC programme as the more promising trajectory, underscoring how dynamic the AI hardware landscape remains.
Microsoft's AI strategy thus spans cloud infrastructure, model development, and hardware, with a focus on securing compute and gaining cost efficiency and control over its AI supply chain. The renewed OpenAI partnership and the aggressive pursuit of capacity signal a push to hold and expand its position amid burgeoning enterprise demand for AI-powered solutions. The timeline for these compute deployments and the full integration of custom silicon will be critical indicators of Microsoft's success in this accelerated AI race.