Medical Market Report

Startup Makes Fastest AI Chip in The World, Receives USD 250 Million Funding


Cerebras Systems Inc., maker of a wafer-sized AI processor with 2.6 trillion transistors, has raised USD 250 million to advance its technology and broaden industry adoption. The investment was supplied by Alpha Wave Ventures and the Abu Dhabi Growth Fund. Cerebras Systems announced the round today, stating that it is now valued at more than USD 4 billion, up from USD 2.4 billion in 2019. Most businesses deploy graphics processing units (GPUs) for their AI initiatives, and the most powerful GPU currently on the market has around 54 billion transistors. The WSE-2 processor from Cerebras Systems has 2.6 trillion transistors, which the company says makes it the fastest AI chip in the world.

WSE-2 is an abbreviation for Wafer Scale Engine-2, a reference to the startup's unique wafer-scale design on which the processor is built. The traditional method of chip manufacturing involves etching many processors onto a silicon wafer and then cutting them apart. Cerebras Systems takes a completely different approach: rather than breaking the wafer up into small components, the firm carves a single, huge processor out of the silicon wafer. The WSE-2's 2.6 trillion transistors are grouped into 850,000 cores. According to Cerebras Systems, the chip's cores are tuned for the precise sorts of arithmetic computations that neural networks use to transform raw data into insights.

The WSE-2 uses 40 gigabytes of fast on-chip memory to hold the data being processed by a neural network. According to Cerebras Systems, the WSE-2 has 123 times more cores and 1,000 times more on-chip memory than the nearest competing GPU. The company says these characteristics translate into various benefits for users, most notably greater computing performance: a corporation would need to deploy dozens or hundreds of standard GPU servers to match the performance delivered by a single WSE-2 processor. To coordinate their operations, the GPU servers in an AI cluster must continually exchange data with one another.

This data exchange delays computations: each GPU must wait for the data it needs to arrive from another server before it can proceed. With the WSE-2, data does not have to travel between two separate servers, only from one area of the chip to another, a far shorter distance. The shorter journeys reduce processing times, and the result, according to Cerebras Systems, is an improvement in the rate at which neural networks can run. The WSE-2's 40 gigabytes of on-chip memory provide another speed improvement.
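The trade-off described above can be sketched with a simple latency-plus-bandwidth transfer model. All figures below are hypothetical round numbers chosen only to illustrate the shape of the argument, not Cerebras or GPU vendor specifications:

```python
# Illustrative back-of-envelope model: time to move an activation tensor
# between two GPU servers over a network link versus across a single
# chip's on-die fabric. All numbers are assumptions for illustration.

def transfer_time_s(bytes_moved: float,
                    bandwidth_bytes_per_s: float,
                    latency_s: float) -> float:
    """Simple transfer-time model: fixed latency plus size over bandwidth."""
    return latency_s + bytes_moved / bandwidth_bytes_per_s

tensor_bytes = 100e6  # assume a 100 MB activation tensor

# Assumed figures: a ~100 Gb/s network link (12.5 GB/s) with ~10 us latency,
# versus an on-die fabric at ~1 TB/s with ~100 ns latency.
between_servers = transfer_time_s(tensor_bytes, 12.5e9, 10e-6)
on_chip = transfer_time_s(tensor_bytes, 1e12, 100e-9)

print(f"server-to-server: {between_servers * 1e3:.2f} ms")  # ~8.01 ms
print(f"on-chip:          {on_chip * 1e3:.2f} ms")          # ~0.10 ms
```

Under these assumed numbers the on-chip transfer is roughly eighty times faster, and in a real cluster the gap compounds because each training step may involve many such exchanges.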

Because the chip on which a neural network runs typically cannot hold all of its data, the data is often kept in external memory devices. To be processed, information must flow back and forth between the chip and that external memory. The WSE-2, by contrast, can keep all of a neural network's data in its on-chip memory, avoiding the computational inefficiencies associated with data leaving the chip for an external component. Cerebras Systems sells the WSE-2 as a component of its CS-2 system, which the firm says can substitute for dozens or even hundreds of standard GPU servers.
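Whether a network's weights fit in on-chip memory is a straightforward capacity check. The sketch below assumes 16-bit (2-byte) weights and the WSE-2's stated 40 GB of on-chip memory; the parameter counts are arbitrary examples, not models Cerebras has cited:

```python
# Illustrative capacity check: do a model's weights fit in on-chip memory?
# Assumes 2 bytes per parameter (fp16) and 40 GB of on-chip memory,
# per the figure stated in the article.

def fits_on_chip(num_params: float,
                 bytes_per_param: int = 2,
                 onchip_bytes: float = 40e9) -> bool:
    """Return True if the model's weights fit entirely in on-chip memory."""
    return num_params * bytes_per_param <= onchip_bytes

print(fits_on_chip(1.3e9))   # 1.3B params -> 2.6 GB in fp16: True
print(fits_on_chip(175e9))   # 175B params -> 350 GB in fp16: False
```

When the check fails, weights must be streamed from external memory each step, which is exactly the off-chip traffic the paragraph above describes.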

“The Cerebras team, as well as our outstanding customers, have accomplished remarkable technological advances that are revolutionizing AI, making feasible what was previously considered inconceivable,” stated Andrew Feldman, co-founder and CEO of Cerebras Systems. Following the funding round, Cerebras Systems intends to grow its existing staff of 400 to 600 by the end of next year, prioritizing the hiring of more engineers to support product development.
