Google Needs Nvidia; the AWS-OpenAI Relationship Is Hard to Pin Down | Google Cloud Next
On August 30, 2023, The Information reported that Google had shown off its next-generation artificial intelligence chip at Google Cloud Next, its annual enterprise software conference, but that it still felt the need to tout a different kind of win: a partnership with Nvidia to offer that company's state-of-the-art AI chips alongside Google's own hardware. Nvidia CEO Jensen Huang even appeared at the conference, wearing his signature leather jacket of course, to field questions from Google Cloud CEO Thomas Kurian about how Nvidia's chips would benefit Google's customers.
This dynamic reflects a reality for Google. Many AI engineers prefer to use Nvidia's hardware, but it is in short supply; given the industry-wide shortage of such specialized chips, those developers will take computing capacity wherever they can get it, according to several attendees. "There's never enough compute, and it's never at a low enough cost," said Saurabh Baji, senior vice president of engineering at AI startup Cohere, during a panel discussion on Tuesday.
That leaves Google playing a balancing act. It needs Nvidia's hardware to draw customers to its platform, but it also wants to drive usage of the Tensor Processing Units it has invested in heavily in recent years. Google is taking steps to make it easier for developers to use both kinds of chips: the company announced on Tuesday that its software for building large machine-learning models now works on Nvidia hardware in addition to Google's own chips.
(The author, Jon Victor, is a reporter at The Information who covers Alphabet.)
Google unveiled its next-generation artificial intelligence chip at its annual enterprise software conference, Google Cloud Next, on Tuesday. But it still felt the need to tout a different kind of win: a partnership with Nvidia to offer that company’s state-of-the-art AI chips through Google Cloud alongside its own hardware. Nvidia CEO Jensen Huang even appeared on stage at the conference—wearing his signature leather jacket of course—to field questions from Google Cloud CEO Thomas Kurian about how the company’s chips would benefit Google’s customers.
The dynamic reflects a cold reality for Google. Many AI engineers prefer to use Nvidia’s hardware, which is in short supply. But beggars can’t be choosers. Those same developers will take computing capacity wherever they can get it, given the industry-wide shortages for such specialized hardware, according to several attendees. “There’s never enough compute, and it’s never at a low enough cost,” said Saurabh Baji, senior vice president of engineering at AI startup Cohere, at a panel discussion on Tuesday.
That has Google playing a balancing act. It needs Nvidia’s hardware to draw customers to its platform. But it also wants to drive usage of its Tensor Processing Units, the specialized chips in which it has invested heavily in recent years. Google is taking steps to make it easier for developers to use both chips. Google’s software for building large machine-learning models will now work for Nvidia hardware in addition to Google’s chips, the company announced Tuesday.
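The last point, that Google's model-building software now runs on Nvidia hardware as well as its own TPUs, plausibly refers to Google's JAX/XLA tooling, which compiles the same model code to either accelerator. Below is a minimal, hypothetical JAX sketch of what hardware-agnostic model code looks like; the toy model, parameter names, and shapes are illustrative assumptions, not Google's actual stack.

```python
# Hypothetical JAX sketch (not Google's actual tooling): the same jitted
# computation runs unchanged on whichever accelerator backend JAX finds,
# whether that is a Google TPU, an Nvidia GPU, or a plain CPU.
import jax
import jax.numpy as jnp

def forward(params, x):
    # A tiny two-layer network stands in for a "large model" layer.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

# jax.jit compiles through XLA, which targets TPUs and Nvidia GPUs alike,
# so the model definition contains no device-specific code.
forward_jit = jax.jit(forward)

key = jax.random.PRNGKey(0)
params = {
    "w1": jax.random.normal(key, (512, 1024)) * 0.02,
    "b1": jnp.zeros(1024),
    "w2": jax.random.normal(key, (1024, 512)) * 0.02,
    "b2": jnp.zeros(512),
}
x = jnp.ones((8, 512))

print(jax.devices())                 # e.g. TPU or CUDA devices, depending on the machine
print(forward_jit(params, x).shape)  # (8, 512) on any backend
```

The point of the sketch is only that the choice of accelerator is a deployment detail rather than a property of the model code, which is what makes it practical for Google to offer both TPUs and Nvidia GPUs behind the same developer tooling.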