It's time for the PCPer Mailbag, our (mostly) weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show:
00:40 – AMD ray tracing vs. NVIDIA?
03:00 – RTX tensor and RT cores?
04:55 – DirectX Raytracing vs. NVIDIA RTX?
08:18 – Ray tracing vs. Raytracing vs. Ray-tracing?
09:22 – HDMI 2.1 missing from RTX?
10:52 – Isolating drives in a dual-boot PC?
12:47 – Underclocking Ryzen for lower TDP?
14:37 – Cooling M.2 NVMe and Optane?
16:01 – Tariffs and PC hardware prices?
17:18 – RIP optical media?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
It’s nice to mention that the AI-based AA (Anti-Aliasing) is done on Nvidia’s Turing Tensor Cores, and that AI processing will be there for all sorts of image, sound, physics, and other AI workloads as well. But the little documentation that Nvidia has already released makes it known that it’s the Tensor Core based denoising of the ray tracing output, done via a trained denoising AI algorithm running on the Tensor Cores, that actually makes that “Real Time” ray tracing possible, given the limited number of rays that can be cast in the milliseconds of time available for each frame.
So for example at 60 FPS the frame time available for doing the ray tracing calculations on the RT cores and then denoising that output via the trained AI denoising algorithm running on the Tensor Cores is 16.67 ms: (1/60 fps) * 1000 = 16.67, or just 1000/60, since there are 1000 milliseconds in one second.
How many complete ray paths can be calculated per second on those RT cores will differ for the specific RTX 2080 Ti or RTX 2080/2070 SKU. Then look at the number of calculations needed for each complete ray path: that’s a variable number of ray interaction calculations per completed path, depending on the ray refractions, ray reflections, and other interactions needed to trace the ray’s complete path through the objects in the scene.
So the RT cores’ GigaRays-per-second metric needs to be divided by 1000 and then multiplied by the frame time (16.67 ms at 60 FPS, for example) to give the number of ray calculations available in that 16.67 ms period. And each ray interaction (reflection, refraction, or other) requires some number of calculations, with each completed ray-traced path being a variably sized task depending on the ray’s reflections/refractions and other interactions with the meshes/materials in a scene.
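Here’s a minimal sketch of that frame-time and ray-budget arithmetic in Python. The 10 GigaRays/s figure is Nvidia’s quoted peak for the RTX 2080 Ti and is used purely as an illustrative assumption, not a measured number:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds available per frame: 1000 ms divided by FPS."""
    return 1000.0 / fps

def rays_per_frame(gigarays_per_second: float, fps: float) -> float:
    """Peak number of rays that fit into one frame's time budget."""
    rays_per_ms = gigarays_per_second * 1e9 / 1000.0  # GigaRays/s -> rays/ms
    return rays_per_ms * frame_time_ms(fps)

print(frame_time_ms(60))        # 16.67 ms per frame at 60 FPS
print(rays_per_frame(10, 60))   # ~166.7 million rays per frame, at peak
```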
So each completed ray path that is traced requires more or fewer calculations depending on the ray’s overall path and on how many objects, and objects’ materials, that ray interacts with as it completes its path through the scene.
With that limited number of ray paths traced, the output is going to be very grainy and in need of that Tensor Core hosted AI doing its PDQ (Pretty Damn Quick) denoising work on the RT cores’ ray tracing output, within the (at 60 FPS, for example) only 16.67 ms of frame time allotted to get all that work done on Nvidia’s RTX GPUs.
In reality the frame times will be variable, along with the number of fully ray-traced paths made up of simulated photons (rays) that each have to have every interaction calculated (a variable number of ray interaction calculations per completed ray-traced path). There are ways to bound those numbers by assigning/specifying a more limited subset of the scene’s geometry for ray tracing and using the standard raster pipeline for the remainder, and that’s what Nvidia is forced to do with its first RTX series of GPUs.
Imagination Technologies does similar things with the “real time” ray tracing functionality of its PowerVR Wizard on-GPU ray tracing hardware.
If ray tracing becomes the bottleneck in FPS, is there a way to lower the fidelity of the reflections/refractions to improve FPS?
I’m also curious if DLSS and the ray tracing noise reduction algorithm would compete for the same resources; that is, by running both, is there a performance hit to one feature or the other?
Last question! Any chance we will get to see Ryan or Josh sit down with any prominent game developers like Tim Sweeney or John Carmack to get their opinions on the current state of hardware trends like ray tracing, deep learning, core wars, VR, etc. and software trends like Vulkan or DX12 API implementation?
The ray quality is going to lower on its own just by there being fewer rays available when the frame rates run higher, since there are fewer milliseconds available in which to generate and cast rays on a scene and post-process that output. The denoising AI running on the Tensor Cores takes whatever grainy output comes out of the RT cores and tries to denoise it all, and then that output is mixed down with the raster pipeline’s product before the final output to the frame buffer.
So what needs to be asked of Tim Sweeney or John Carmack, if they are allowed under whatever NDA agreement they have with Nvidia, is to flowchart the entire process: the input into the RT cores, whatever output is produced there going into the AI denoising process running on the Tensor Cores, and then out into the mix-down phase that blends that denoised output into the raster operations, where everything is then sent to the frame buffer.
So I’d imagine that there is some form of ray tracing pipeline implemented that has its output fed into the denoising stage on the Tensor Cores, with that result then mixed down via the raster pipeline with the regular raster operations’ production. There may even be some post-processing options available between each stage. It’s probably all going to be tile-based rendering with some deferred rendering options as well.
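A purely conceptual sketch of that speculated hybrid pipeline, with stub functions that only label the stages (none of these names correspond to any real Nvidia API):

```python
def rasterize(scene):
    return f"raster({scene})"                  # conventional raster pipeline output

def trace_rays(scene, ray_budget):
    return f"noisy_rt({scene},{ray_budget})"   # sparse, grainy RT core output

def ai_denoise(noisy):
    return f"denoised({noisy})"                # trained denoiser on the Tensor Cores

def composite(raster, rt):
    return f"mix({raster},{rt})"               # mix-down before the frame buffer

def render_frame(scene, ray_budget):
    raster_image = rasterize(scene)
    clean_rt = ai_denoise(trace_rays(scene, ray_budget))
    return composite(raster_image, clean_rt)

print(render_frame("scene0", ray_budget=166_700_000))
```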
Also consider this: if the frame rates are that high, how much quality loss will your visual cortex even notice above 60 FPS? Human vision is tuned for edge detection so it can track movement, so at higher rates, where dynamic action is rapidly changing the screen’s viewport, things are not going to be in focus anyway, and that’s the time to limit the ray tracing output. There are also already methods to selectively exclude an object, or parts of an object, from receiving any ray tracing processing at all, while other objects/object parts in the scene get more rays allotted.
If you go and watch the Imagination Technologies PowerVR Wizard videos, you can see that even there they limit the ray tracing to only certain objects/object parts in the scene, so that they can make the most judicious use of the limited ray casting ability of the in-GPU-hardware PowerVR ray tracing units.
All those texture/material/mesh assets are defined by data/code objects in any rendering-oriented game engine or even 3D graphics software. There are methods there to assign light generated by a light object and have that light only affect a single object, or a few objects, in a scene. You can even tell an object/material not to generate or receive any shadows, or have that object only take diffuse light and not specular light from any or all of the lights in a scene. There are all sorts of data points and functions in a 3D object’s object-oriented code that let the object do all sorts of weird things that cannot happen in nature.
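Purely as an illustration, here is a minimal Blender Python sketch of those per-object ray-visibility flags. It assumes Blender 3.x with the Cycles engine, an object named “Sphere”, and the default Principled BSDF node setup (run it inside Blender):

```python
import bpy

# Assumes a scene object named "Sphere" exists
obj = bpy.data.objects["Sphere"]

# Per-ray-type visibility flags (Blender 3.x Object properties):
obj.visible_shadow = False    # the object casts no shadows
obj.visible_glossy = False    # it does not show up in reflections
obj.visible_camera = True     # but it is still directly visible

# A material can likewise be told to ignore specular response by
# zeroing the Principled BSDF's specular input (default node setup):
bsdf = obj.active_material.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Specular"].default_value = 0.0
```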
Go take a look at Blender 3D and the thousands of different things and settings you can restrict, down to the vertex or polygon level. All those effect settings can be scripted to happen at various times in an animated scene, and a game is just a scripted 3D animated world where objects do not necessarily have to cast shadows, or can have other behaviors limited. Hell, an object can be set to be totally invisible and still cast shadows, or still affect the background’s reflections or refractions, if that’s the effect you are looking for.
In Blender, and in game engines too, you can take the compositor output and split it up into all the different passes (AO, diffuse, specular, shadow, and others) and remix all of that via compositor nodes added in the compositing node editor: a sort of virtual plug-board where you rewire everything through loads of specialized compositor node doohickeys and perform all sorts of operations before the final output reaches the frame buffer.
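And a similarly hedged Blender Python sketch of that pass splitting and remixing; the pass socket names (“DiffDir”, “GlossDir”) are Cycles’ names and can vary by Blender version:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                 # turn on the compositing node tree
tree = scene.node_tree
tree.nodes.clear()

# Enable two passes on the active view layer so they appear as outputs
vl = bpy.context.view_layer
vl.use_pass_diffuse_direct = True
vl.use_pass_glossy_direct = True

# The Render Layers node exposes the individual passes
rl = tree.nodes.new(type="CompositorNodeRLayers")

# Remix: add the diffuse and glossy passes back together
mix = tree.nodes.new(type="CompositorNodeMixRGB")
mix.blend_type = "ADD"
tree.links.new(rl.outputs["DiffDir"], mix.inputs[1])
tree.links.new(rl.outputs["GlossDir"], mix.inputs[2])

# Send the remixed result to the final composite output
comp = tree.nodes.new(type="CompositorNodeComposite")
tree.links.new(mix.outputs["Image"], comp.inputs["Image"])
```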
“I’m also curious if DLSS and the ray tracing noise reduction algorithm would compete for the same resources”
If they need a Tensor Core hosted AI, then they will need Tensor Cores, so DLSS and the ray tracing AI noise reduction algorithms will compete, unless they do not need the entire Tensor Core allotment to do their jobs. If Nvidia breaks everything down into tiled rendering pipelines, then maybe there is room for several concurrently running AI algorithms doing denoising, DLSS, and other AI-related processing. It all depends on the number of RT/Tensor Cores per dollar spent on Nvidia’s various RTX SKUs.
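A toy back-of-the-envelope model of that contention question; the per-feature millisecond costs below are invented purely for illustration, since the real costs are unpublished and workload dependent:

```python
# Per-frame budget, with both AI features assumed to share (and thus
# serialize on) the same Tensor Core allotment.
FRAME_BUDGET_MS = 1000.0 / 60.0  # 16.67 ms per frame at 60 FPS
DENOISE_MS = 3.0                 # assumed Tensor Core denoising cost
DLSS_MS = 2.5                    # assumed DLSS upscaling cost

tensor_ms = DENOISE_MS + DLSS_MS
remaining_ms = FRAME_BUDGET_MS - tensor_ms
print(f"Tensor Core work: {tensor_ms:.2f} ms")
print(f"Left for raster + ray tracing: {remaining_ms:.2f} ms")
```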
Can CPU cores also run AI tasks? Yes. Can a GPU’s regular shader cores run AI tasks? Yes. The dedicated Tensor Cores are going to do the job more quickly and efficiently than any CPU core, but the CPU cores are calling the shots.
AMD has so damn many extra compute/shader cores on its Vega SKUs that there are spare cores available to run TensorFlow (AI) related work and some form of AI denoising or AI super-sampling. Ray tracing is a compute-oriented task that can be done on Nvidia’s older GPU shader cores and on AMD’s current/older GPU shader cores as well. Ray tracing can also be done on CPUs, where it was traditionally done for decades.
Nvidia’s RTX GPUs are still going to be more efficient than any non-RTX Nvidia or AMD GPU, because Nvidia’s RTX GPUs were engineered with that ray tracing, and the AI denoising of the ray-traced output, in mind, along with the only few to tens of milliseconds available per frame to get the job done. But true real-time ray tracing, where everything is done via the ray tracing pipeline, is going to require petaflops of compute power.
DLSS – some people are saying that when you turn it on, Nvidia reduces the resolution it is rendering at and then upscales, basically like checkerboarding. Is that true, or is it just replacing slow SSAA with something faster?
Are the AI functions of the RTX tensor cores tied directly to the drivers for the card, or loaded (and updated) independently online? Could we see a situation where a game plays drastically differently — or breaks — due to an update on the AI side?
These are great. Can you all start putting them in the podcast feed?
Enabling RTX seems to be about 4 to 5 times faster on the same hardware for tasks that are predominantly ray tracing (depending on workload), compared to the old compute-based solution.
What do you mean, “old compute-based solution”?!
Ray tracing is a compute-intensive workload regardless of whether it’s done on a “compute” shader core, on Nvidia’s RT cores, or even on Imagination Technologies’ PowerVR in-hardware GPU ray tracing cores.
For GPUs it’s all ones and zeros and math units, the same as for CPUs, DSPs, and FPGAs, and the Tensor Cores themselves are just accelerated matrix math units.
It’s all compute; it’s just that the Tensor Cores have more specific in-hardware compute that’s optimized for matrix math, or there’s specific hardware for ray tracing interaction calculations. The ray tracing units may have some hardware-based acceleration for ray tree building or for refraction/reflection calculation instructions. It’s ALL still math based.
Really, shaders are just FP, Int, and a few other units, and the GPU has no idea what’s being done on the “shader” cores! It could be image-related or non-image-related workloads running on any GPU, and ditto for CPUs doing shader workloads or other image processing workloads. GPUs only have an advantage over CPUs for graphics workloads because GPUs have thousands of shader cores (FP, INT, whatever units) compared to CPUs, which have relatively few FP, Int, whatever units.
Nvidia is going to have to make with the detailed whitepapers, and there will also need to be independent benchmarks. Everyone expects that Nvidia’s RT cores will be better at ray tracing calculations; just look at the PowerVR Wizard video(1) of that GPU’s on-GPU ray tracing acceleration pitted against Nvidia’s GTX 980 Ti.
Nvidia is not going to be able to, Apple style, magically black-box hide its ray tracing cores from more detailed scrutiny. Nvidia does not have as large a cargo-cult following of crazed folks willing to pay whatever the cost for cellphone bling like Apple does.
Nvidia has already detailed its Tensor Core IP with Volta, so there are plenty of whitepapers to read up on that IP. It’s also already known that Nvidia’s ray tracing output is not able to produce enough completed ray paths in the single- to double-digit millisecond time frames necessary for high-FPS gaming. So the special sauce for Nvidia’s “Real Time” ray tracing is that Tensor Core hosted AI doing the rapid, milliseconds-scale denoising of that very grainy RT core ray tracing output.
(1)
“Imagination PowerVR 6XT GR6500 mobile GPU – Ray Tracing demos vs Nvidia Geforce GTX 980 Ti”
https://www.youtube.com/watch?v=ND96G9UZxxA
Your response indicates you don’t work in the industry and don’t really understand how RTX hardware works. You might as well say “fixed hardware functions in pre-DX8 were also compute because they’re all math in the end”.