Highlights:

  • One of the main difficulties in building Omniverse workflows that span numerous tools and programs is processing latency.
  • This digital twin will work with Lockheed’s OpenRosetta3D to store data, apply artificial intelligence (AI), and build connectors to various standardized tools and apps, using the USD format to represent and distribute data throughout the system.

Nvidia has announced several key advancements and partnerships to expand the Omniverse into scientific applications running on top of high-performance computing (HPC) systems. This will help connect the data silos currently spread across various apps, models, instruments, and user experiences to create scientific digital twins.

This work will build on the progress Nvidia has made in developing the Omniverse for entertainment, business, infrastructure, robotics, self-driving cars, and medicine.

The Omniverse platform uses specialized connectors to align and convert 3D data from various formats and applications in real time. Changes made in one tool, application, or sensor feed are dynamically mirrored in other tools and views that present a different angle on the same structure, factory, road, or human body.

Scientists can use it to simulate fusion reactors, cellular interactions, and planetary systems. Today, they spend a lot of time manually adjusting 3D rendering engines, model configurations, and data representations after converting data between tools.

To streamline this procedure, Nvidia intends to use the USD (Universal Scene Description) format as an intermediate data tier.

Dion Harris, the accelerated computing product manager at Nvidia, explained, “The USD format allows us to have a single standard by which you can represent all those different data types in a single complex model. You could go in and somehow build an API specifically for a certain type of data, but that process would not be scalable and extendable to other use cases or other sorts of data regimes.”
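
To make the idea concrete, here is a minimal sketch using USD's open-source Python bindings (the pxr module): a single stage holds geometry that a CAD connector might author, plus a custom attribute carrying sensor data, so simulation and visualization tools can read the same model. The file name, prim paths, and attribute are hypothetical.

```python
from pxr import Usd, UsdGeom, Sdf

# Create a USD stage to act as the shared intermediate representation
stage = Usd.Stage.CreateNew("facility.usda")

# Geometry that a CAD connector might author
rack = UsdGeom.Xform.Define(stage, "/Facility/Rack_01")
chassis = UsdGeom.Cube.Define(stage, "/Facility/Rack_01/Chassis")

# A custom (hypothetical) attribute carrying sensor data alongside the
# geometry, so other tools can read it from the same model
temp = chassis.GetPrim().CreateAttribute(
    "sensor:inletTempC", Sdf.ValueTypeNames.Float
)
temp.Set(24.5)

stage.GetRootLayer().Save()
```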

Here are the major updates:

  • Systems with Nvidia A100 and H100 Tensor Core GPUs can now connect to visualization tools for scientific computing through Nvidia Omniverse.
  • Nvidia OVX and Omniverse Cloud now support large digital twins for science and industry.
  • Holoscan has been improved to enable scientific as well as medical use cases. New APIs for C and Python will make it simpler for researchers to create sensor-data processing workflows; see the sketch after this list.
  • Added connections to NeuralVDB for large-scale sparse volumetric representation, Nvidia IndeX for volumetric rendering, Kitware’s ParaView for visualization, and Nvidia Modulus for physics-ML.
  • MetroX-3 extends the range of the Nvidia Quantum-2 InfiniBand platform by up to 25 kilometers, making it easier to link scientific instruments dispersed across a large institution or university.
  • Nvidia BlueField-3 DPUs will help orchestrate data management at the edge.
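
For the Holoscan item above, the following is a minimal sketch of a sensor-processing pipeline using the Holoscan SDK's Python API, patterned on its basic ping examples; the synthetic source and statistics operators are invented for illustration.

```python
import numpy as np
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class SensorSourceOp(Operator):
    """Emits synthetic sensor frames (a stand-in for a real instrument)."""
    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        frame = np.random.rand(512, 512).astype(np.float32)
        op_output.emit(frame, "out")

class StatsOp(Operator):
    """Consumes frames and reports a simple statistic."""
    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        frame = op_input.receive("in")
        print(f"frame mean: {frame.mean():.4f}")

class SensorApp(Application):
    def compose(self):
        # Stop the pipeline after 10 frames via a count condition
        src = SensorSourceOp(self, CountCondition(self, 10), name="source")
        sink = StatsOp(self, name="stats")
        self.add_flow(src, sink)

if __name__ == "__main__":
    SensorApp().run()
```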

Building bigger twins

One of the main difficulties in building Omniverse workflows that span numerous tools and programs is processing latency.

Translating between a handful of file formats or tools is one thing; creating live connections across many of them demands substantial computing horsepower.

With support for Nvidia OVX and Omniverse Cloud, businesses can expand composable digital twins across additional building blocks, while Nvidia A100 and H100 GPUs can reduce the latency of running larger models.

Nvidia has produced a demo showing how these new capabilities can simulate more aspects of data centers.

Earlier this year, Nvidia reported on work to replicate data center network hardware and software. Designs created in software like Autodesk Revit, PTC Creo, and Trimble SketchUp can be combined and shared across several engineering teams.

In Patch Manager, these can be used in conjunction with port maps to arrange cabling and physical connectivity throughout the data center.

Then, Nvidia Modulus can produce faster surrogate models for running what-if analyses in real time, and Cadence 6SigmaDCX can assist in analyzing heat flows.
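
To illustrate the surrogate-model pattern, here is a minimal sketch in plain PyTorch rather than the Modulus API: a small network learns to map a few design parameters to a predicted hotspot temperature from (here, synthetic) solver data, after which what-if queries are nearly instant. All names and data are invented.

```python
import torch
import torch.nn as nn

# Toy surrogate: map 3 design parameters (e.g., inlet flow, rack load,
# ambient temperature) to a predicted hotspot temperature.
model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for expensive CFD results; real data would come from a solver
x = torch.rand(1024, 3)
y = 30 + 20 * x[:, 1:2] - 5 * x[:, 0:1] + torch.randn(1024, 1) * 0.1

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# What-if query: evaluate a new design point in microseconds, not hours
with torch.no_grad():
    print(model(torch.tensor([[0.5, 0.8, 0.2]])))
```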

Nvidia and Lockheed Martin are collaborating on a project for the National Oceanic and Atmospheric Administration.

They intend to employ the Omniverse as a component of an Earth Observation Digital Twin that monitors the environment and compiles data from ground stations, satellites, and sensors into one model.

This could enhance our knowledge of glacial melting, help simulate climate impacts, evaluate drought risk, and help prevent wildfires.

This digital twin will work with Lockheed’s OpenRosetta3D to store data, apply artificial intelligence (AI), and build connectors to various standardized tools and apps, using the USD format to represent and distribute data throughout the system.

Lockheed’s Agatha 3D viewer, built on Unity, will receive these native data formats, translated into USD, from Nvidia Nucleus to display data from various sensors and models.
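
As a rough sketch of this hand-off, a downstream client could open a USD stage published to Nucleus with the same pxr bindings used earlier; the server URL is hypothetical, and resolving omniverse:// paths assumes Nvidia's Omniverse USD resolver plugin is installed.

```python
from pxr import Usd

# Hypothetical Nucleus location; requires the Omniverse USD resolver
STAGE_URL = "omniverse://nucleus.example.com/Projects/EarthTwin/scene.usd"

stage = Usd.Stage.Open(STAGE_URL)

# Walk the shared scene to see which sensors and models were published
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```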

Digital twins, according to Harris, will enter a new phase because of these improvements, moving from passively reflecting a model of the world to actively influencing it.

A two-way link will combine IoT, AI, and the cloud to operate field-based equipment. For instance, Nvidia and Lockheed Martin are collaborating to use digital twins to focus satellites on regions with a higher risk of forest fires.

Harris said, “We are just scratching the surface of digital twins.”