Thursday, 23 July 2020

Nvidia: AI-upscaled graphics beat native 4K and are faster and cheaper


Artificial intelligence could revolutionize computer graphics the way the first 3D accelerators did. With the PC game “Death Stranding”, Nvidia shows that the technology is ready for everyday use and brings massive advantages.


For Deep Learning Super Sampling (DLSS), Nvidia trains an artificial neural network on high-resolution reference images of a video game. With DLSS 2.0, Nvidia uses a network that is trained for game upscaling in general and only needs to be fine-tuned for each title and integrated into its rendering pipeline. With DLSS 1.0, every game needed its own AI training.

Compared to the first DLSS experiment, Nvidia improved another crucial point in the training process with DLSS 2.0: the neural network is now trained on images in 16K resolution (15,360 × 8,640) instead of the previous 4K resolution. As a comparison, Nvidia feeds it low-resolution images during training. The training itself runs on a supercomputer.

Through this training, the AI learns the difference between low and high resolution as well as the fine detail of the 16K images. When upscaling, it can fill in that detail while requiring significantly less computing power than rendering natively at high resolution.
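To illustrate the principle, here is a minimal sketch of such a training step in PyTorch, assuming a toy convolutional upscaler and random stand-in data; Nvidia's actual network architecture, loss function, and training material are not public in this form:

```python
# Minimal sketch of the DLSS-style training idea (illustrative only;
# layer sizes, loss, and data are assumptions, not Nvidia's network).
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy convolutional upscaler: low-res frame in, 2x-resolution frame out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),  # rearrange channels into 2x spatial resolution
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for one training pair: a low-res rendered frame and its
# high-res "ideal" reference (16K in Nvidia's case, tiny here).
low_res  = torch.rand(1, 3, 64, 64)
high_res = torch.rand(1, 3, 128, 128)

for step in range(10):
    optimizer.zero_grad()
    prediction = model(low_res)           # upscaled guess
    loss = loss_fn(prediction, high_res)  # difference to the high-res reference
    loss.backward()
    optimizer.step()
```

The network never sees the high-resolution image at runtime; it only learns, across many such pairs, how the missing detail typically looks.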

DLSS 2.0 in Death Stranding: Better than native 4K resolution
According to the international PC trade press, AI upscaling works so well in the new Death Stranding that the image scaled up by the AI even beats native 4K rendering in quality.

Digital Foundry, for example, writes: "If you don't use DLSS, you won't see the game in its most beautiful form." At first glance this seems counterintuitive. But once you keep in mind that the AI fills the image with detail learned from 16K references, it makes sense.

The testers at DSOGaming and Tom's Hardware also praise the high image quality. "In many cases the visual impression with AI upsampling is as good as with the native resolution and often slightly better. In addition, there were no graphical errors with DLSS during gaming," writes Computerbase.
The following video shows the DLSS image compared to native 4K (from 15:40).

https://youtu.be/ggnvhFSrPGE

Even more important than the slightly better image quality is the massive performance gain that Digital Foundry measures: in the higher-quality mode, DLSS 4K runs around 38 percent faster than native 4K rendering. In performance mode, AI upscaling is 70 percent faster.

AI upscaling: The graphics world will change
This increase in speed translates into significantly higher frame rates: the AI modes upscale from an internal resolution of between 50 and 67 percent of the displayed resolution, which takes a massive load off the graphics card. And there is a realistic chance that the efficiency of AI rendering is still in its early days.
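A quick back-of-the-envelope calculation makes that 50 to 67 percent figure concrete; the exact per-mode scale factors below are assumptions, only the range comes from the article:

```python
# Rough arithmetic behind the performance gain (illustrative).
target = (3840, 2160)  # native 4K output

for mode, scale in [("quality (~67%)", 0.67), ("performance (~50%)", 0.50)]:
    w, h = int(target[0] * scale), int(target[1] * scale)
    pixel_share = (w * h) / (target[0] * target[1])
    print(f"{mode}: renders {w}x{h}, i.e. {pixel_share:.0%} of the native pixel count")

# quality mode shades roughly 45 percent of the native pixels,
# performance mode only about 25 percent; the AI network then fills in the rest.
```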

Microsoft's PlayFab team, for example, is experimenting internally with AI upscaling, which is said to work like a "magic compression technology".

"You can't tell the difference between a hand-made high-resolution texture and the machine-enlarged texture based on a low-resolution texture, so you can start with a low-resolution texture and then let the machine do the rest," Microsoft team leader James said Gwertzman.



The comparison shows a texture enlarged by AI. It is clearly visible that the AI does not simply inflate the image with neighboring pixels of similar color, but adds genuinely new detail.

In the future, game designers may only need to craft a small portion of their assets in full detail: as soon as there are enough example images for AI training, the AI could learn the art style and bring the remaining low-resolution textures up to the same quality level. That would save a lot of time and money in the development process.
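A minimal sketch of what such an asset pipeline could look like, assuming an already trained upscaler model and PNG textures on disk; the directory names and the upscale_textures helper are hypothetical:

```python
# Sketch of the asset-pipeline idea: run every low-res texture through a
# trained upscaler and save the enlarged result (names are hypothetical).
from pathlib import Path
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

def upscale_textures(model: torch.nn.Module, src_dir: str, dst_dir: str) -> None:
    """Upscale every PNG texture in src_dir and write the result to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    model.eval()
    with torch.no_grad():
        for path in Path(src_dir).glob("*.png"):
            low_res = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
            high_res = model(low_res).clamp(0, 1).squeeze(0)
            to_pil_image(high_res).save(out / path.name)

# usage (hypothetical):
# upscale_textures(trained_model, "textures/low_res", "textures/high_res")
```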

Training with videos instead of screenshots: Oculus works on AI upscaling for VR
AI upscaling is made for VR and AR: here the rendering load is particularly high because stereoscopic, 360-degree images have to be drawn at high refresh rates. A combination of eye tracking and AI upscaling could deliver photorealistic graphics quality without requiring a high-end PC.

Oculus researchers recently presented a paper on AI upscaling specifically for VR apps. The AI was trained on high-resolution videos instead of still images and is said to deliver better results than DLSS. In addition, the technique runs on any hardware rather than only on selected Nvidia graphics cards.
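To illustrate the video-based idea, here is a minimal sketch that feeds several consecutive low-resolution frames into a small upscaling network so detail can be reused over time; the frame count and network layout are assumptions, not the architecture from the Oculus paper:

```python
# Illustrative temporal upscaling sketch: stack consecutive low-res frames
# as input channels so the network can borrow detail across frames.
import torch
import torch.nn as nn

NUM_FRAMES = 3  # assumed temporal window

temporal_upscaler = nn.Sequential(
    nn.Conv2d(3 * NUM_FRAMES, 32, kernel_size=3, padding=1),  # stacked frames in
    nn.ReLU(),
    nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),  # 2x spatial upscale of the newest frame
)

# three consecutive low-res frames, concatenated along the channel dimension
frames = torch.rand(1, 3 * NUM_FRAMES, 90, 90)
upscaled = temporal_upscaler(frames)   # -> (1, 3, 180, 180)
print(upscaled.shape)
```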

It would also be conceivable to stream only very low-resolution graphics from the cloud and have an AI reconstruct them locally in high image quality. The vision of XR streaming anywhere would come a good deal closer.
