
Nvidia claims its new A.I. chip will drop the costs of running LLMs


Nvidia president and CEO Jensen Huang speaks at the COMPUTEX forum in Taiwan. “Everyone is a programmer. Now, you just have to say something to the computer.” (Photo by Walid Berrazeg/SOPA Images/LightRocket via Getty Images)


Nvidia announced a new chip designed to run artificial intelligence models on Tuesday as it seeks to fend off competitors in the AI hardware space, including AMD, Google and Amazon.

Currently, Nvidia dominates the market for AI chips with over 80% market share, according to some estimates. The company’s specialty is graphics processing units, or GPUs, which have become the preferred chips for the large AI models that underpin generative AI software, such as Google’s Bard and OpenAI’s ChatGPT. But Nvidia’s chips are in short supply as tech giants, cloud providers and startups vie for GPU capacity to develop their own AI models.

Nvidia’s new chip, the GH200, has the same GPU as the company’s current highest-end AI chip, the H100. But the GH200 pairs that GPU with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.

“We’re giving this processor a boost,” Nvidia CEO Jensen Huang said in a talk at a conference on Tuesday. He added, “This processor is designed for the scale-out of the world’s data centers.”

The new chip will be available from Nvidia’s distributors in the second quarter of next year, Huang said, and should be available for sampling by the end of the year. Nvidia representatives declined to give a price.

Oftentimes, the process of working with AI models is split into at least two parts: training and inference.


First, a model is trained using large amounts of data, a process that can take months and sometimes requires thousands of GPUs, such as, in Nvidia’s case, its H100 and A100 chips. Then the model is used in software to make predictions or generate content, a process called inference. Like training, inference is computationally expensive, and it requires a lot of processing power every time the software runs, such as each time it generates text or an image. But unlike training, inference takes place near-constantly, while training is only required when the model needs updating.
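A minimal sketch of that train/inference split, using a toy PyTorch model. This is purely illustrative; it is not Nvidia’s software stack, and the model, data and sizes here are invented for the example.

```python
# Toy illustration of the training vs. inference split described above.
import torch
import torch.nn as nn

# A tiny "language model": token embeddings -> linear head over a 100-token vocab.
model = nn.Sequential(nn.Embedding(100, 32), nn.Flatten(), nn.Linear(32 * 8, 100))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- Training: done once (or when the model needs updating), gradient-heavy ---
for step in range(100):
    tokens = torch.randint(0, 100, (16, 8))   # fake input sequences
    targets = torch.randint(0, 100, (16,))    # fake next-token labels
    loss = loss_fn(model(tokens), targets)
    optimizer.zero_grad()
    loss.backward()                           # gradients are only needed here
    optimizer.step()

# --- Inference: runs every time the software is used, no gradients needed ---
model.eval()
with torch.no_grad():
    prompt = torch.randint(0, 100, (1, 8))
    next_token = model(prompt).argmax(dim=-1)  # "generate" one token
    print("predicted token id:", next_token.item())
```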

“You can take pretty much any large language model you want and put it in this and it will inference like crazy,” Huang said. “The inference cost of large language models will drop significantly.”

Nvidia’s new GH200 is designed for inference since it has more memory capacity, allowing larger AI models to fit on a single system, Nvidia VP Ian Buck said on a call with analysts and reporters on Tuesday. Nvidia’s H100 has 80GB of memory, versus 141GB on the new GH200. Nvidia also announced a system that combines two GH200 chips into a single computer for even larger models.

“Having larger memory allows the model to remain resident on a single GPU and not have to require multiple systems or multiple GPUs in order to run,” Buck said.

The announcement comes as Nvidia’s primary GPU rival, AMD, recently announced its own AI-oriented chip, the MI300X, which can support 192GB of memory and is being marketed for its capacity for AI inference. Companies including Google and Amazon are also designing their own custom AI chips for inference.
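A back-of-the-envelope check of why memory capacity matters for fitting a model on one chip. The rule of thumb below, two bytes per parameter for 16-bit weights with activation and cache overhead ignored, is an assumption for illustration, not an official sizing formula from Nvidia or AMD; only the per-chip memory figures come from the article above.

```python
# Rough estimate: can the raw weights of a model fit in one accelerator's memory?
GPU_MEMORY_GB = {"H100": 80, "GH200": 141, "MI300X": 192}

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the model weights (16-bit assumed)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params_b in (30, 70, 100):
    needed = weights_gb(params_b)
    fits_on = [name for name, mem in GPU_MEMORY_GB.items() if needed <= mem]
    print(f"{params_b}B params ≈ {needed:.0f} GB of weights; fits on: {fits_on or 'none'}")
```

Under this crude assumption, a roughly 70-billion-parameter model needs about 140 GB just for its weights, which is why it would spill across multiple 80 GB H100s but could remain resident on a single 141 GB GH200.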

