- Apple selects Google’s TPUv5p and TPUv4 for AI infrastructure, diverging from Nvidia’s GPUs.
- The decision involves 2,048 TPUv5p chips to train its on-device (iPhone) AI models and 8,192 TPUv4 chips for its server-based models.
- Apple’s choice highlights a shift in AI chip preferences and the integration of advanced AI features.
Apple has made a surprising move by opting for Google’s tensor processing units (TPUs) over Nvidia’s graphics processing units (GPUs) for its AI needs.
Apple’s research paper reveals that the company used 2,048 TPUv5p chips to train its on-device AI models and 8,192 TPUv4 chips for its server-based models.
Apple’s AI Strategy Shifts: Embracing Google’s TPUs
Nvidia has long held a dominant position in the AI chip market thanks to its advanced GPU technology, which makes Apple’s choice of Google’s tensor processing units (TPUs) over Nvidia’s graphics processing units (GPUs) a notable departure from convention and a sign of shifting preferences.
Apple’s use of 2,048 TPUv5p chips to train its on-device AI models and 8,192 TPUv4 chips for its server-side models reflects a strategic decision to lean on Google’s cloud infrastructure for AI development. That capacity is meant to support more advanced AI models, complementing features such as the planned integration of OpenAI’s ChatGPT into Apple’s software.
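Google’s TPUs are typically programmed through JAX or TensorFlow rather than CUDA, and large training runs are spread across many chips using data parallelism. The snippet below is a minimal, purely illustrative JAX sketch of a data-parallel training step on TPU cores; the model, hyperparameters, and data are hypothetical stand-ins and this is not Apple’s actual training code.

```python
import functools

import jax
import jax.numpy as jnp

# Toy linear model; a production foundation model would be a large transformer.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# pmap replicates the step across every local accelerator core and runs it
# in parallel on per-device data shards.
@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across devices: the core of data-parallel training.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

n_dev = jax.local_device_count()  # e.g. the TPU cores on a Cloud TPU VM
params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
# Replicate parameters and shard a synthetic batch across the devices.
params = jax.tree_util.tree_map(lambda a: jnp.stack([a] * n_dev), params)
x = jnp.ones((n_dev, 32, 8))
y = jnp.ones((n_dev, 32, 1))

params = train_step(params, x, y)
print("updated w on device 0:", params["w"][0].ravel()[:4])
```

On a Cloud TPU VM this same code transparently uses all local TPU cores; scaling to thousands of chips, as the counts cited above imply, typically layers multi-host orchestration on top of this basic pattern.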
Apple’s choice of Google’s TPUs over Nvidia’s GPUs marks a significant shift in AI chip preferences and underscores the growing diversity of hardware available for training advanced AI models.
“Apple’s decision to use Google’s TPUs instead of Nvidia’s GPUs signals a new era in AI chip selection and reflects their commitment to leveraging cutting-edge technology for advanced AI models.”