Next-gen GPUs propel Nvidia forward in the world of simulation and gaming


Nvidia made several announcements at its GTC event this week, highlighted by its Ada-powered GeForce RTX 40-series GPUs. In his keynote, Nvidia CEO Jensen Huang said the new GPUs would deliver a substantial performance boost that would benefit developers of games and other simulated environments.

During the presentation, Huang put the new GPU through its paces in Racer RTX, a fully interactive, fully ray-traced simulation with all the action physically modeled. Ada’s advancements include a new streaming multiprocessor, an RT Core with twice the ray-triangle intersection throughput, and a new Tensor Core with the Hopper FP8 Transformer Engine and 1.4 petaflops of Tensor processing power.

Ada also introduces the latest version of NVIDIA DLSS technology, DLSS 3, which uses AI to generate new frames by comparing them with previous frames to understand how the scene is changing. This capability can improve game performance by up to four times compared with brute-force rendering.
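The neural network behind DLSS 3 is proprietary, but the basic idea of frame generation can be sketched in a few lines: given a previous frame and per-pixel motion vectors, a plausible new frame is predicted by shifting pixels along their motion. The NumPy toy below is only a rough illustration under that assumption; DLSS 3 pairs a trained network with Ada’s hardware optical flow accelerator rather than a simple warp.

```python
# Minimal sketch (not NVIDIA's implementation): synthesize a new frame from a
# previous frame plus per-pixel motion vectors, the rough idea behind AI frame
# generation. Here we just shift pixels with NumPy.
import numpy as np

def generate_frame(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """prev_frame: (H, W, 3) image; motion: (H, W, 2) per-pixel (dy, dx) offsets."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, look back along the motion vector to find its source.
    src_y = np.clip(ys - motion[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - motion[..., 1].round().astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

# Toy usage: a uniform 2-pixel rightward pan.
prev = np.random.rand(4, 6, 3)
flow = np.zeros((4, 6, 2))
flow[..., 1] = 2.0
next_guess = generate_frame(prev, flow)
print(next_guess.shape)  # (4, 6, 3)
```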

The GeForce RTX 40-series will be available in several configurations. The top-of-the-line RTX 4090, aimed at high-performance gaming applications, will sell for $1,599 starting in mid-October. The GeForce RTX 4080 will arrive in November in two configurations. The 16GB GeForce RTX 4080, priced at $1,199, has 9,728 CUDA cores and 16GB of high-speed Micron GDDR6X memory.

Nvidia will also offer the GeForce RTX 4080 in a 12GB configuration with 7,680 CUDA cores, for $899.

Omniverse SaaS cloud

The company also announced new cloud services to support AI workflows, including its first software- and infrastructure-as-a-service offering, NVIDIA Omniverse Cloud, which enables artists, developers, and enterprise teams to build, publish, operate, and experience metaverse applications anywhere. With Omniverse Cloud, individuals and teams can design and collaborate on 3D workflows without the need for local computing power.

Omniverse Cloud services run on the Omniverse Cloud Computer, a computing system comprising NVIDIA OVX for graphics and physics simulation, NVIDIA HGX for advanced AI workloads, and the NVIDIA Graphics Delivery Network (GDN), a globally distributed network of data centers that delivers high-performance, low-latency metaverse graphics to the edge.

Rise of LLMs

Huang also noted during his keynote the growing role of large language models, or LLMs, in AI applications, where they power the processing engines used in social media, digital advertising, e-commerce, and search. He added that large language models based on the Transformer deep-learning model, first introduced in 2017, are now driving cutting-edge AI research because they can learn to understand human language without supervision or labeled data sets.
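As a hedged aside on what “learning language without labeled data sets” means in practice: the training signal comes from the text itself, because the model is asked to predict the next token from the ones before it. The toy Python sketch below illustrates that self-supervised idea with simple bigram counts; real transformer LLMs learn the same kind of objective with attention layers and billions of parameters.

```python
# Minimal sketch of the self-supervised idea behind LLM training: the raw text
# itself supplies the labels, because the model learns to predict the next
# token from the preceding ones. This toy uses bigram counts instead of a
# transformer, purely to show that no human labeling is required.
from collections import Counter, defaultdict

text = "the gpu renders the frame and the gpu renders the scene"
tokens = text.split()

# "Training": count which token follows which, with no human labels involved.
next_counts = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    next_counts[cur][nxt] += 1

# "Inference": predict the most likely next token after "the".
print(next_counts["the"].most_common(1))  # [('gpu', 2)]
```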

To make it easier for researchers to apply this “incredible” technology to their work, Huang announced the NeMo LLM Service, a cloud service managed by NVIDIA for tailoring pretrained LLMs to perform specific tasks. For drug and bioscience researchers, Huang also announced BioNeMo LLM, a service for creating LLMs that understand chemicals, proteins, and DNA and RNA sequences.
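As a rough illustration of steering a pretrained LLM toward a specific task, the sketch below uses the open-source Hugging Face Transformers library with a small placeholder model and prompt. It is an assumption-laden stand-in for the idea, not the NeMo LLM Service itself, which NVIDIA offers as a managed cloud service applying techniques such as prompt learning to much larger models.

```python
# Generic illustration (not the NeMo LLM Service API): nudging a pretrained
# LLM toward a specific task by framing it in the prompt. The model name and
# prompt are placeholders chosen only to keep the example small and runnable.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize for a lab notebook: The assay showed increased binding affinity."
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```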

Huang announced that NVIDIA is working with the Broad Institute, the world’s largest producer of human genomic information, to make NVIDIA Clara libraries, such as NVIDIA Parabricks, the Genome Analysis Toolkit, and BioNeMo, available on Broad’s Terra cloud platform.

To power these AI applications, Nvidia will begin shipping its NVIDIA H100 Tensor Core GPU, featuring Hopper’s next-generation Transformer Engine, in the coming weeks. According to the company, partners building H100-based systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro. Additionally, Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will begin deploying H100-based instances in the cloud starting next year.

Powering AV Systems

For autonomous vehicles, Huang introduced DRIVE Thor, which combines Hopper’s Transformer Engine, Ada’s GPU architecture, and Grace’s CPU.

The Thor superchip delivers 2,000 teraflops of performance, replacing Atlan on the DRIVE roadmap and providing a seamless transition from DRIVE Orin, which has 254 TOPS of performance and is currently in production vehicles. The Thor processor will power advanced robotics, medical instruments, industrial automation and AI systems, according to Huang.

Spencer Chin is an editor at Design News covering the electronics beat. He has many years of experience covering the development of components, semiconductors, subsystems, power, and other facets of electronics, from both a business/supply-chain perspective and a technology perspective. He can be reached at [email protected]
