Highlights –

  • According to Google, some of the new C3 series’ early users have noticed a 20% performance boost while working on specific workloads.
  • In addition to its networking functionality, the E2000 has features that can speed up data transfer between servers and storage systems.

The cloud division of Google LLC today unveiled a new series of cloud instances that are powered by a specialized chip known as an infrastructure processing unit.

The new C3 series is available in public preview. It’s an upgraded version of the existing C2 instances on Google Cloud, which are meant to run high-performance workloads such as artificial intelligence (AI) applications. According to Google, some of the new C3 series’ early users have noticed a 20% performance boost on specific workloads.

Specialized chip

The new C3 instances run on servers equipped with infrastructure processing units (IPUs), specialized chips that Google Cloud built in collaboration with Intel Corp. as part of an effort the two companies revealed last June. This May, Intel launched the E2000, the first IPU created through the partnership.

According to Reuters, the C3 instances announced today are based on the E2000. Reportedly, Intel has the option to market the technology to customers other than Google Cloud.

Besides running customer applications and support functions, a cloud provider’s servers carry out auxiliary tasks such as processing network traffic. Typically, a server’s central processing unit handles those auxiliary duties. Google’s custom E2000 chip offloads several such tasks from the CPU, improving server performance and accelerating user applications.
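The offload idea can be sketched in software terms. The following is a loose analogy only, assuming nothing about the E2000’s actual programming model: an auxiliary per-packet task (here, checksumming) is handed to a dedicated worker so the main flow stays free for application work, much as an IPU absorbs such duties from a server CPU.

```python
from concurrent.futures import ThreadPoolExecutor
import zlib

# Illustrative payloads; the names and sizes are made up for this sketch.
payloads = [b"packet-%d" % i for i in range(4)]

def checksum(data: bytes) -> int:
    # An auxiliary, per-packet task of the kind an offload engine would absorb.
    return zlib.crc32(data)

# Hand the auxiliary work to a separate worker so the "main" flow
# (standing in for application code on the CPU) stays free.
with ThreadPoolExecutor(max_workers=1) as offload:
    checksums = list(offload.map(checksum, payloads))
```

The main thread never computes a checksum itself; it only submits the work and collects results, which is the essence of the offload pattern.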

The E2000 is primarily designed to handle networking-related activities. The chip can encrypt the traffic that cloud applications generate and accelerate its journey to its destination. Additionally, Intel has equipped the E2000 with features that let servers transfer data to and from flash storage more quickly.

The E2000 comprises a variety of computing modules, each optimized to carry out a specific computational task.

Intel says that the E2000 can be built with up to 16 CPU cores based on the Neoverse N1 processor design from Arm Ltd. The Neoverse N1 is 30% more energy-efficient than Arm’s previous-generation design and is intended primarily for use in data center servers. The processor design was first introduced in 2019 and has since been adopted by other major cloud providers.

Intel combined the E2000’s CPU cores with several processing modules created for networking tasks. One of the modules, optimized to encrypt network traffic, is based on the QuickAssist technology found in Intel’s top-tier Xeon series of server CPUs. A dual-core processor intended to facilitate infrastructure management tasks is also present.

In addition to its networking functionality, the E2000 has features that can speed up data transfer between servers and storage systems. The chip supports the NVMe-oF protocol for accessing flash storage, which allows certain storage-related calculations to be carried out without involving the host’s operating system or CPU, enhancing performance.

Google Cloud claims that the E2000 can process up to 200 gigabits of network data per second for its new C3 cloud instances. The chip might be used in conjunction with Google Cloud’s Hyperdisk block storage system, unveiled last month. The search engine behemoth claims that Hyperdisk and E2000 chips can enable C3 instances to process 80% more input and output operations per second per vCPU than rivals.

Sapphire Rapids

Alongside the E2000 IPU, the new C3 instances include CPUs from Intel’s fourth-generation Xeon processor series, commonly known as Sapphire Rapids. The series will comprise CPUs with up to dozens of cores and is anticipated to debut early next year.

Recently, Intel detailed that chips in the Sapphire Rapids series will feature various onboard accelerators, circuits designed to execute particular computing tasks. One of the accelerators is designed for data encryption. The remaining modules focus on use cases including artificial intelligence (AI) and load balancing, the practice of evenly dividing tasks among servers to improve performance.
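Load balancing in its simplest form can be illustrated with a round-robin scheme. This is a generic sketch of the concept, with a hypothetical server pool, and says nothing about how the Sapphire Rapids accelerator actually implements it.

```python
from itertools import cycle

# Hypothetical server pool; the names are illustrative only.
servers = ["server-a", "server-b", "server-c"]

# Round-robin is the simplest load-balancing policy: hand each new
# task to the next server in a repeating cycle.
rr = cycle(servers)
assignments = [next(rr) for _ in range(6)]
# After six tasks, every server has received exactly two.
```

Real load balancers weigh factors such as current server load and latency, but the goal is the same: spread work so no single server becomes a bottleneck.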

Experts’ Take

According to Nirav Mehta, senior director of product management for cloud infrastructure solutions at Google Cloud, “And compared with the previous generation C2, C3 VMs with Hyperdisk deliver 4x higher throughput and 10x higher IOPS. Now, you don’t have to choose expensive, larger compute instances just to get the storage performance you need for data workloads such as Hadoop and Microsoft SQL Server.”

According to Nick McKeown, senior vice president and general manager of Intel’s network and edge group, “A first of its kind in any public cloud, C3 VMs will run workloads on 4th Gen Intel Xeon Scalable processors while they free up programmable packet processing to the IPUs securely at line rates of 200Gb/s. This Intel and Google collaboration enables customers through infrastructure that is more secure, flexible, and performant.”