NVIDIA Omniverse™ is an easily extensible, open platform built for virtual collaboration and real-time physically accurate simulation.

Nvidia has long been a leader in graphics. Now it is tackling the mismatched voice-and-face glitch we have all laughed at over the years. Remember watching films where the audio and the lip movements just weren't in sync? It either made you chuckle or it drove you crazy. People who work in graphics will tell you a project is only as good as the tools you use to build it.

Nvidia has a new tool called Audio2Face. The first reviews come from a gamer who also writes about computer gaming, and so far they are good.

Audio2Face is an impressive-looking auto-rigging process that runs within Nvidia's open real-time simulation platform, Omniverse. It can take an audio file and apply surprisingly well-matched animations to the included Digital Mark 3D character model.

It does this automatically, works well with most languages, and can be adjusted using sliders for more detail.

As an added bonus, NVIDIA On-Demand, the company's tutorial site, features numerous videos on Omniverse and Audio2Face. The first Audio2Face video dates back to March 2020, but more recent tutorials even cover exporting the results to other tools like Unreal Engine 4.

Take a peek at the video below to get a first-hand look at the tool Nvidia calls Audio2Face.

If you are a gamer, you may already understand what this addition means for your setup; if you are not, it can be a bit confusing. The article announcing Audio2Face came from pcgamer.com, where Hope Corrigan reviews the app.

After watching a few videos, the results do seem to depend a bit on the quality of the audio used. And while the tool is impressive, a few more improvements are reportedly on the way that will put the cherry on top of this system.

Corrigan went on to explain:

“To do this, Nvidia has implemented a deep neural network that matches the facial animations to the audio in real-time. When you first start the program it may take a moment to build the Tensor RT Engine which optimizes the neural net for your hardware. From there, you should be able to see changes in real-time or bake them in, as long as your hardware can hack it.”
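That engine-building step isn't unique to Audio2Face; it's how TensorRT works in general. As a rough illustration only (this is generic TensorRT Python API usage, not Audio2Face's actual internals, and the "face_model.onnx" file name is hypothetical), building a TensorRT engine from a trained network looks something like this:

```python
# Minimal sketch of building a TensorRT engine from a trained network.
# Generic TensorRT usage, not Audio2Face's internal code;
# "face_model.onnx" is a hypothetical exported model file.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Parse the trained network from an ONNX file into a TensorRT network.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("face_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

# Building the engine is where TensorRT profiles your specific GPU and
# picks the fastest kernels -- the "moment" of optimization Corrigan mentions.
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
engine_bytes = builder.build_serialized_network(network, config)

# Cache the serialized engine so later launches can skip the rebuild.
with open("face_model.engine", "wb") as f:
    f.write(engine_bytes)
```

The built engine is tuned to the GPU it was created on, which is why the optimization happens on your machine at first launch rather than shipping prebuilt.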

Remember: the better your hardware, the better this system will perform for you and your projects.

Read more at pcgamer.com