In January 2026, Apple made one of the most significant strategic decisions in its history: partnering with Google to power the next generation of Siri using Gemini AI models, in a multi-year agreement worth approximately $1 billion annually. The partnership, which will see a 1.2-trillion-parameter Gemini model running on Apple's Private Cloud Compute servers, represents Apple's acknowledgment that it needs external AI expertise to compete in the rapidly evolving AI assistant landscape. After extensive evaluation of OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, Apple determined that "Google's technology provides the most capable foundation for Apple Foundation Models."
The revamped Siri, internally codenamed "Glenwood," will debut in iOS 26.4, entering beta in February 2026 ahead of a public release in March or early April. The new Siri will be available on iPhone 15 Pro and newer models, featuring dramatically improved personalization, context awareness, on-screen understanding, and the ability to take actions across apps. This transformation represents Apple's most significant AI initiative since the launch of Apple Intelligence in 2024, and it signals a fundamental shift in how Apple approaches AI development.
The partnership is particularly notable because Apple has historically preferred to develop technology in-house, controlling every aspect of the user experience. The decision to partner with Google—a company that competes with Apple in multiple areas including mobile operating systems, cloud services, and consumer hardware—demonstrates the urgency Apple feels about catching up in AI. The company's own AI models, reportedly around 1.5 billion parameters, have lagged behind competitors, and Apple Intelligence features have faced delays.
The Evaluation Process: Why Gemini Won Over ChatGPT
Apple's decision to choose Google Gemini over OpenAI's ChatGPT wasn't made lightly. According to Ars Technica's reporting, Apple conducted extensive evaluations of multiple AI models, testing capabilities, performance, privacy features, and long-term development potential. The evaluation process, which began in 2024 and continued through 2025, considered not just current capabilities but also the trajectory of each company's AI development.
Several factors tipped the scales in Gemini's favor. Model capability was a primary consideration, with Gemini models consistently ranking among the most capable on various benchmarks. Google's Gemini 3 models, released in 2025, demonstrated superior performance across multiple tasks including reasoning, multimodal understanding, and long-context processing. This performance advantage became increasingly important as Apple evaluated what capabilities would be necessary for a truly intelligent assistant.
Multimodal capabilities were another key factor. Gemini's ability to seamlessly process text, images, audio, and video in a unified model architecture aligned with Apple's vision for Siri to understand and interact with all types of content. This multimodal foundation enables features like understanding what's on your screen, processing photos and videos, and integrating with Apple's extensive ecosystem of apps and services.
Long-term development trajectory also favored Google. Apple evaluated not just current capabilities but where each company's AI development was heading. Google's massive investment in AI infrastructure, extensive research capabilities, and commitment to continuous model improvement suggested that Gemini would maintain its advantage over time. Google's experience scaling AI features across billions of devices, gained from deployments in Google Search, Gmail, and other services, also demonstrated the scalability Apple would need.
Privacy architecture was crucial. While all three companies offered privacy-focused solutions, Google's approach to running models on customer infrastructure aligned with Apple's Private Cloud Compute architecture. This compatibility enabled Apple to maintain its privacy standards while leveraging Google's AI capabilities, ensuring that user data would remain isolated from Google's infrastructure.
The financial terms—approximately $1 billion annually—represent a significant investment but also reflect the value Apple places on having the best AI foundation. This investment is substantially larger than what Apple might have paid for OpenAI's models, suggesting that Apple viewed Gemini as worth the premium.
The Technical Architecture: 1.2 Trillion Parameters on Private Cloud Compute
The technical architecture of the new Siri represents a dramatic upgrade from Apple's current system. The company's existing cloud-based Siri processing uses a custom model with approximately 1.5 billion parameters, a model that has struggled to keep pace with competitors. The new system will use a custom Gemini model with 1.2 trillion parameters, an 800-fold increase in parameter count.
According to reports from Wccftech, this massive model will run on Apple's Private Cloud Compute (PCC) servers, ensuring that user data remains encrypted and isolated from Google's infrastructure. This architecture enables Apple to leverage Google's AI capabilities while maintaining its privacy standards—a critical requirement for any Apple service.
The three-part architecture includes a request planner that understands user intent and breaks down complex requests into steps, a knowledge retrieval system that accesses relevant information from Apple's services and the web, and a summarizer that presents information in natural, conversational language. Google Gemini powers both the planner and summarizer, while Apple's own Foundation Models handle on-device processing of personal data.
This hybrid approach—using Google's models for general intelligence while keeping personal data processing on-device with Apple's models—enables the best of both worlds: cutting-edge AI capabilities for general queries and tasks, combined with Apple's privacy-focused on-device processing for personal information. This architecture also allows Apple to gradually replace Google's models with its own as they improve, providing a path toward eventual independence.
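To make the reported division of labor concrete, here is a minimal sketch of how such a pipeline could be orchestrated, with steps that touch personal data flagged for on-device handling. Every type and function name below is a hypothetical illustration; Apple has not published this interface.

```swift
import Foundation

// Hypothetical sketch of the reported three-part Siri pipeline.
// None of these types are published Apple APIs.

/// One concrete step produced by the request planner.
struct PlannedStep {
    let description: String        // e.g. "look up nearby restaurants"
    let touchesPersonalData: Bool  // if true, the step stays on-device
}

/// Stage 1: a Gemini-backed planner decomposes the user's request.
protocol RequestPlanner {
    func plan(_ request: String) async throws -> [PlannedStep]
}

/// Stage 2: retrieval pulls facts from Apple services or the web.
protocol KnowledgeRetriever {
    func retrieve(for step: PlannedStep) async throws -> String
}

/// Stage 3: a Gemini-backed summarizer turns results into a reply.
protocol Summarizer {
    func summarize(request: String, findings: [String]) async throws -> String
}

/// Orchestrates the three stages for a single request. Per the
/// reported design, steps where touchesPersonalData is true would be
/// dispatched to Apple's on-device Foundation Models rather than to
/// Gemini on Private Cloud Compute.
func handleSiriRequest(
    _ request: String,
    planner: RequestPlanner,
    retriever: KnowledgeRetriever,
    summarizer: Summarizer
) async throws -> String {
    let steps = try await planner.plan(request)
    var findings: [String] = []
    for step in steps {
        findings.append(try await retriever.retrieve(for: step))
    }
    return try await summarizer.summarize(request: request, findings: findings)
}
```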
The computational requirements are substantial. Running a 1.2-trillion-parameter model requires significant cloud infrastructure, which is why Apple is using its Private Cloud Compute servers. These servers, built with custom Apple silicon and hardened operating systems, provide the computational power necessary for advanced AI processing while maintaining Apple's privacy and security standards.
Privacy and Data Isolation: Maintaining Apple's Standards
One of the most critical aspects of the partnership is how Apple maintains its privacy standards while using Google's AI models. According to Apple's Private Cloud Compute documentation, the system ensures that data processed through PCC "isn't accessible to anyone other than the user—not even to Apple." This architecture is essential for the Gemini partnership, as it enables Apple to use Google's models without compromising user privacy.
When Siri needs to process a request that requires cloud-based AI capabilities, only request-relevant data is sent to Private Cloud Compute servers. This data is processed using Gemini models running on Apple's infrastructure, ensuring that Google never has access to user data. The data is not stored or made accessible to Apple after fulfilling the request, and it's not retained for training or improvement purposes.
Apple collects only limited metadata about requests—such as approximate size, features used, and processing time—but not content details. This metadata is not linked to user accounts, providing an additional layer of privacy protection. Users can also enable transparency logging to monitor how their data is processed by Apple Intelligence, providing visibility into the system's operations.
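Based on that description, the collected record might look like the following sketch. The field names are assumptions; the notable point is what the record omits.

```swift
// A sketch of the limited, anonymous metadata described above.
// Field names are assumptions, not a documented Apple schema.
struct RequestMetadata {
    let approximateRequestSize: Int  // bucketed size, not exact content
    let featuresUsed: [String]       // e.g. ["summarization"]
    let processingTimeMs: Int
    // Deliberately absent: no user ID, no device ID, no request
    // content. Per the article, metadata is not linked to accounts.
}
```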
This privacy architecture addresses one of the primary concerns about the partnership: how Apple can use Google's AI while maintaining its reputation for privacy. By running Gemini models on Apple's own infrastructure with strict data isolation, Apple ensures that user data never reaches Google's servers or is used for Google's purposes.
However, the privacy architecture also adds complexity and cost. Running Google's models on Apple's infrastructure requires significant computational resources and ongoing maintenance. This investment reflects Apple's commitment to privacy, but it also represents a substantial operational cost that wouldn't exist if Apple simply sent data to Google's servers.
The iOS 26.4 Release: A New Era for Siri
The revamped Siri will debut as part of iOS 26.4, which will be available in beta in February 2026 and released to the general public in March or early April. According to MacRumors reporting, Apple plans to unveil the new Siri in the second half of February, with demonstrations of the new functionality. It remains unclear whether Apple will hold a full event or private media briefings, but the unveiling will mark a significant moment in Apple's AI journey.
The new Siri will be available on iPhone 15 Pro and newer models, reflecting the computational requirements of the advanced AI features. This device limitation ensures that the new Siri can leverage both on-device processing with Apple's Foundation Models and cloud-based processing with Gemini, providing the best possible experience while maintaining performance standards.
Key features of the new Siri include dramatically improved personalization, with the assistant understanding your preferences, habits, and context across apps and services. The assistant will be context-aware, understanding what's on your screen, what you're doing, and what you might need next. This context awareness enables Siri to provide relevant suggestions and take actions proactively, rather than waiting for explicit commands.
On-screen understanding is another major capability. Siri will be able to see and understand what's displayed on your iPhone screen, enabling features like asking questions about content, extracting information, and taking actions based on what you're viewing. This capability transforms Siri from a voice assistant that responds to commands into an intelligent assistant that understands your entire digital context.
Multi-step task completion enables Siri to handle complex requests that require multiple steps across different apps. For example, you could ask Siri to "find a restaurant for dinner, check my calendar for a free evening, make a reservation, and add it to my calendar," and the assistant would complete all of these steps automatically. This capability represents a significant advancement over current Siri, which typically handles single-step requests.
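As an illustration of what such a decomposition might look like internally, the sketch below expands the restaurant request into an ordered list of app actions whose results feed forward. The types, app names, and step wording are assumptions for explanation only.

```swift
// Hypothetical decomposition of the restaurant request into app
// actions. The AppAction type and step wording are illustrative.
struct AppAction {
    let app: String
    let operation: String
}

let plannedActions: [AppAction] = [
    AppAction(app: "Maps",      operation: "search for dinner restaurants nearby"),
    AppAction(app: "Calendar",  operation: "find a free evening slot"),
    AppAction(app: "OpenTable", operation: "book a table for the chosen slot"),
    AppAction(app: "Calendar",  operation: "add the reservation as an event"),
]

/// Runs the actions in order, passing earlier results forward so a
/// later step (booking) can use an earlier one (the free slot).
func execute(_ actions: [AppAction],
             perform: (AppAction, [String]) throws -> String) rethrows -> [String] {
    var results: [String] = []
    for action in actions {
        results.append(try perform(action, results))
    }
    return results
}
```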
The new Siri will also feature deeper per-app controls, enabling the assistant to take actions within apps on your behalf. This capability requires app developers to integrate with Siri's new capabilities, but it enables a level of automation and assistance that wasn't previously possible.
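Apple's existing App Intents framework is the most likely surface for this kind of integration, since it is already how apps expose actions to Siri and Shortcuts. Below is a minimal, hypothetical intent; the reservation logic and names are illustrative, and whether the new Siri adopts App Intents unchanged is an assumption.

```swift
import AppIntents

// A hypothetical App Intent exposing an in-app action to Siri.
// App Intents is Apple's existing mechanism for this; the intent
// below is an example, not a shipping API of any real app.
struct MakeReservationIntent: AppIntent {
    static var title: LocalizedStringResource = "Make a Reservation"

    @Parameter(title: "Restaurant")
    var restaurant: String

    @Parameter(title: "Party Size")
    var partySize: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call its booking backend here.
        return .result(dialog: "Booked a table for \(partySize) at \(restaurant).")
    }
}
```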
Apple's AI Struggles: Why the Partnership Was Necessary
The partnership with Google represents Apple's acknowledgment that its own AI development has lagged behind competitors. According to Fortune's analysis, Apple's internal AI models have struggled to match the performance of GPT, Gemini, and other leading models, and Apple Intelligence features have faced delays and limitations.
Apple's approach to AI has historically been more conservative than competitors. The company has prioritized privacy, on-device processing, and user control, which are valuable principles but have also limited the capabilities Apple can offer. While competitors have been deploying large language models and advanced AI features, Apple has been more cautious, developing smaller models optimized for on-device processing.
This conservative approach has created a gap between Apple's AI capabilities and what users expect from modern AI assistants. Siri, in particular, has faced criticism for being less capable than Google Assistant, Amazon Alexa, and other competitors. The partnership with Google represents Apple's recognition that it needs to catch up quickly, and that developing competitive AI models internally would take too long.
The partnership also reflects the reality that AI development has become extremely resource-intensive. Training and maintaining state-of-the-art AI models requires massive computational resources, extensive datasets, and large teams of researchers. While Apple has significant resources, competing directly with Google, Microsoft, and OpenAI in AI development would require a massive investment and years of development time.
By partnering with Google, Apple can immediately access cutting-edge AI capabilities while continuing to develop its own models internally. This approach enables Apple to compete in the short term while building toward long-term independence. However, it also creates a dependency on Google that Apple will need to manage carefully.
The Competitive Landscape: Impact on OpenAI and Others
Apple's decision to choose Gemini over ChatGPT has significant implications for the competitive landscape. According to Fortune's reporting, the partnership represents a major win for Google and could spell trouble for OpenAI, which had been positioning ChatGPT as a potential foundation for Apple's AI features.
The partnership validates Google's AI development and positions Gemini as a leading choice for enterprise and consumer AI applications. With Apple's endorsement, Google can point to one of the world's most valuable companies choosing Gemini over competitors, strengthening its position in the AI market. This validation could influence other companies' decisions about which AI models to use.
For OpenAI, the loss of the Apple partnership represents a significant missed opportunity. While ChatGPT remains available on Apple devices as an optional user-initiated service, it's now positioned as a secondary layer rather than the core foundation for system-level intelligence. This positioning limits OpenAI's ability to reach Apple's massive user base through system-level integration.
However, the competitive landscape is still evolving. OpenAI continues to develop and improve ChatGPT, and the company's models remain highly capable. The loss of the Apple partnership doesn't eliminate OpenAI as a competitor, but it does represent a setback in the company's efforts to become the default AI foundation for consumer devices.
The partnership also highlights the importance of multimodal capabilities in AI development. Google's investment in unified multimodal models—models that can process text, images, audio, and video seamlessly—appears to have been a key differentiator. This suggests that future AI competition will increasingly focus on multimodal capabilities, not just text processing.
The Financial Terms: $1 Billion Annually
The financial terms of the partnership—approximately $1 billion annually—represent one of the largest AI licensing agreements in history. According to CNBC's reporting, this multi-year agreement reflects both the value Apple places on having the best AI foundation and the cost of accessing state-of-the-art AI models at scale.
For Google, the partnership represents a significant revenue stream and validation of its AI capabilities. The $1 billion annual payment, while substantial, is likely just the beginning, as the partnership could expand to include additional Apple Intelligence features beyond Siri. This revenue helps justify Google's massive investment in AI development and infrastructure.
For Apple, the $1 billion annual cost represents a significant investment but also reflects the urgency of competing in AI. The company could have chosen less expensive options, but it prioritized having the best possible AI foundation for Siri and Apple Intelligence. This investment suggests that Apple views AI as critical to its future competitiveness.
The financial terms also reflect the computational costs of running large AI models. The 1.2-trillion-parameter Gemini model requires substantial cloud infrastructure, and Apple's Private Cloud Compute architecture adds additional costs. These costs are passed through in the licensing agreement, but they also represent the price of maintaining Apple's privacy standards.
However, the partnership is structured as a multi-year agreement, suggesting that both companies expect it to be long-term. This long-term commitment enables Google to plan for ongoing development and support, while Apple can build features and services knowing that the AI foundation will be available for years to come.
Apple Foundation Models: The Path to Independence
While Apple is partnering with Google for the immediate future, the company hasn't abandoned its goal of developing competitive AI models internally. According to Ars Technica's reporting, Apple still aims to develop its own language models that can eventually replace third-party models.
The current implementation uses what Apple internally calls Apple Foundation Models v10, a 1.2-trillion-parameter model based on Google's Gemini. More advanced versions, including v11, are planned for iOS 27, suggesting that Apple is already working on the next generation of its AI models. This development timeline indicates that Apple views the Google partnership as a bridge to its own AI capabilities, not a permanent solution.
Apple's approach to developing its own models will likely focus on the same principles that have guided its technology development: privacy, on-device processing, and integration with Apple's ecosystem. The company's custom silicon, including the A-series and M-series chips, provides computational capabilities that could enable more advanced on-device AI processing in the future.
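On-device inference today runs through Core ML, which can target the Neural Engine in those A-series and M-series chips. The sketch below shows the standard loading path as a point of reference; the model file name is a hypothetical placeholder, not a real Apple model.

```swift
import Foundation
import CoreML

/// Loads a compiled Core ML model and allows it to run on the CPU,
/// GPU, or Neural Engine. "SmallLanguageModel.mlmodelc" is a
/// hypothetical placeholder, not a shipping Apple model.
func loadOnDeviceModel() throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "SmallLanguageModel",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all  // CPU, GPU, and Neural Engine
    return try MLModel(contentsOf: url, configuration: configuration)
}
```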
However, developing competitive AI models is extremely challenging. Apple will need to match or exceed the capabilities of models like Gemini, which have been developed with massive resources and extensive research. This challenge is compounded by Apple's commitment to privacy and on-device processing, which limits the data and computational resources available for training.
The path to independence will likely be gradual. Apple may start by replacing specific components of the Gemini-powered system with its own models, gradually increasing the proportion of Apple-developed AI over time. This incremental approach enables Apple to maintain competitive capabilities while building toward independence.
The User Experience: What Changes for iPhone Users
For iPhone users, the partnership will bring significant changes to how Siri works and what it can do. The most immediate change will be dramatically improved capabilities, with Siri able to handle more complex requests, understand context better, and provide more accurate and helpful responses.
Personalization will be a major improvement. The new Siri will understand your preferences, habits, and patterns, enabling it to provide more relevant suggestions and take actions that align with your needs. This personalization will work across Apple's ecosystem, with Siri understanding your relationship with apps, services, and content.
Context awareness will transform how Siri interacts with you. The assistant will understand what you're doing, what's on your screen, and what you might need next. This awareness enables Siri to be proactive rather than reactive, suggesting actions and information before you ask for them.
On-screen understanding enables new types of interactions. You can ask Siri questions about what's displayed on your screen, extract information from images and documents, and take actions based on visual content. This capability is particularly powerful for tasks like reading documents, understanding images, and interacting with apps.
Multi-step task completion means you can ask Siri to handle complex requests that require multiple steps. Instead of asking Siri to perform each step individually, you can describe the entire task and let Siri figure out how to complete it. This capability makes Siri feel more like a true assistant rather than a command interpreter.
However, these improvements come with device limitations. The new Siri will only be available on iPhone 15 Pro and newer models, reflecting the computational requirements of the advanced AI features. This limitation means that many iPhone users won't have access to the new capabilities, at least initially.
The Strategic Implications: Apple's AI Future
The partnership with Google represents a fundamental shift in Apple's approach to AI. Apple has long insisted on building core technology in-house to control every aspect of the user experience, so turning to Google, a direct competitor in multiple markets, demonstrates how critical AI has become to Apple's future.
The partnership also underscores how resource-intensive and competitive AI development has become. Even with Apple's resources, matching Google, Microsoft, and OpenAI head-on would demand enormous investment and years of development time. Partnering lets Apple compete immediately while continuing to build its own capabilities.
However, the partnership also creates dependencies and strategic risks. Apple is now relying on Google for a critical component of its user experience, which creates vulnerability if the partnership encounters issues. The company will need to manage this dependency carefully while building toward independence.
The partnership also has implications for Apple's competitive position. While it enables Apple to compete in AI, it also means that Apple's AI capabilities are dependent on a competitor's technology. This dependency could limit Apple's ability to differentiate its AI features from competitors who also use Google's models.
Looking forward, Apple's success in AI will depend on its ability to develop competitive models internally while leveraging the Google partnership in the short term. The company's path to independence will be challenging, but it's essential for maintaining control over its user experience and competitive differentiation.
Conclusion: A New Chapter for Siri and Apple Intelligence
Apple's partnership with Google to power Siri using Gemini AI models represents one of the most significant strategic decisions in the company's history. The multi-year agreement, worth approximately $1 billion annually, enables Apple to immediately access cutting-edge AI capabilities while continuing to develop its own models internally.
The revamped Siri, powered by a 1.2-trillion-parameter Gemini model running on Apple's Private Cloud Compute servers, will debut in iOS 26.4 in February 2026. The new assistant will feature dramatically improved personalization, context awareness, on-screen understanding, and multi-step task completion, transforming Siri from a voice command system into a true AI assistant.
The partnership reflects Apple's acknowledgment that its own AI development has lagged behind competitors, and that catching up requires external expertise. By choosing Gemini over ChatGPT after extensive evaluation, Apple has validated Google's AI capabilities while positioning itself to compete in the rapidly evolving AI landscape.
However, the partnership also creates dependencies and strategic considerations. Apple will need to manage its relationship with Google carefully while building toward independence with its own AI models. The path forward will be challenging, but it's essential for Apple's future competitiveness.
As iOS 26.4 approaches and the new Siri begins rolling out to iPhone 15 Pro users, we're witnessing a new chapter in Apple's AI journey. The partnership with Google enables Apple to compete today while building for tomorrow, but the ultimate test will be whether Apple can develop competitive AI models internally and achieve the independence it seeks.
One thing is certain: with the Gemini partnership, Apple has taken a major step toward making Siri a true AI assistant that can understand context, take actions, and provide the kind of intelligent assistance that users have come to expect from modern AI systems. The transformation will reshape how millions of iPhone users interact with their devices, and it will determine whether Apple can maintain its position as a leader in consumer technology in the age of AI.