Iray and GPU Stress

ebergerly Posts: 3,255

So I was monkeying around this morning with my Raspberry Pi computer, and I had my multimeter out measuring stuff, and it occurred to me that maybe I'd put some real-world numbers to the question of how much Iray stresses your GPU, your computer, and even your house electrical wiring, to get a broader perspective. 

So I decided to download "Furmark", which is pretty much the standard software for stressing a GPU and seeing how it responds, and then compare that against how the GPU behaves during an Iray render. 

I guess many of us (including myself) pretty much assumed that Iray would crank the GPU up to maximum (i.e., max temps, watts, fan speed, etc.). Of course this is complicated by fan curves, drivers, and all that stuff, but at least it might give a broad perspective.

So I ran Furmark for around 8 minutes and plotted some GPU stuff (from GPU-Z), like temps, watts, and fan speed. I then did an Iray render of one of my larger scenes, and ran that for around 7 minutes and plotted the same data in GPU-Z. The results are in the 3 graphs below:

  • Furmark brought the GPU temps up to around 87C, while Iray brought them up to only around 74C.
  • In Furmark the GPU power draw bounced around a LOT, with peak power (watts) used by the GPU at around 280 watts (the 1080ti is rated 250 watts); at around the 3-minute mark the average power dropped (presumably due to throttling), with peaks still up around 280 watts. Iray, by contrast, held an almost constant power draw of only around 180 watts. 
  • The fan speed in Furmark went to 100%, while during an Iray render it maxed at only around 65-70%.
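
(By the way, if anyone wants to log the same counters without GPU-Z, here's a rough sketch using the pynvml Python bindings for NVIDIA's NVML library. It's untested as written and assumes you have the nvidia-ml-py package and an NVIDIA driver installed; the 1-second poll interval and GPU index 0 are just arbitrary choices.)

    import time
    import pynvml  # from the nvidia-ml-py package (assumed installed)

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    try:
        while True:
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # deg C
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
            fan = pynvml.nvmlDeviceGetFanSpeed(handle)               # percent of max fan speed
            print(f"{time.strftime('%H:%M:%S')}  {temp} C  {watts:.0f} W  fan {fan}%")
            time.sleep(1)
    except KeyboardInterrupt:
        pass  # stop logging with Ctrl+C
    finally:
        pynvml.nvmlShutdown()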

Honestly I was a bit surprised that Iray didn't push the GPU nearly as much as Furmark. Of course, Furmark is DESIGNED to stress the GPU, and I'm sure it's not practical for Iray to always max out the GPU. I assume there are some types of scenes where CUDA can take more advantage of the GPU and stress it more. But generally, whenever I run an Iray scene in Studio it shows pretty much around 99% utilization consistently. Interesting...

So I guess at this point my conclusion is that Iray may not necessarily max out the stress on your GPU; in fact, the GPU may operate significantly below its max ratings.

If anyone knows of some key things that cause Iray to max out GPU temps, etc., I'd be interested to hear them. Especially if it's backed by actual test data and facts... laugh

 

Comments

  • ebergerly Posts: 3,255
    edited March 2019

    Also, I decided to measure the voltage at the wall outlet as I rendered to see how much effect it had. My normal voltage at the outlet was around 120 volts this morning, and, as expected, adding a 200-300 watt load to the house wiring causes only maybe a 1 volt drop. However, coincidentally, while I was measuring, my 1200 watt microwave came on, and the outlet voltage dropped to 113-114 volts, which is about a 6 volt drop (see images below). So clearly, unless you have a bunch of GPUs in parallel, your machine isn't going to affect the voltage anywhere near as much as your microwave, air conditioner, etc., will. 
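
    Just to sanity-check that with a bit of arithmetic, here's a quick Python sketch of the expected drop. The 0.5 ohm round-trip wiring resistance is a made-up assumption; the real number depends on your wire gauge and run length.

        NOMINAL_V = 120.0   # nominal outlet voltage
        R_BRANCH = 0.5      # ohms, ASSUMED round-trip resistance of the house wiring

        def outlet_drop(load_watts):
            """Approximate voltage drop seen at the outlet from a load sharing the same wiring."""
            amps = load_watts / NOMINAL_V   # current drawn by the load
            return amps * R_BRANCH          # volts lost across the wiring

        for load in (250, 1200):            # roughly a rendering PC vs. the microwave
            print(f"{load:>5} W  ->  about {outlet_drop(load):.1f} V drop")

    With that assumed resistance you get roughly a 1 volt drop for the render-sized load and about 5 volts for the microwave, which is in the same ballpark as what I measured.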

    Also, keep in mind that your power company is only required to maintain the voltage at the meter on the outside of your house between 114 and 126 volts under normal conditions. That means the voltage at your outlet can vary anywhere between maybe 109 and 126 volts under normal conditions once you add the voltage drop from your microwave, aircon, or toaster oven. Of course, if the power company is having a problem it can be just about anything. 

    So for me the bottom line at this point seems to be that yeah, your GPU does require some power, but in real life it might not be as stressful on your PC and your electrical circuit as you might think. Especially compared to all the other high power stuff most of us run. 

    [Attached images: Low.jpg and High.jpg - multimeter readings of the outlet voltage]
  • jmtbank Posts: 187
    ebergerly said:

    Furmark is DESIGNED to stress the GPU

     

    Both Nvidia and AMD built detection of Furmark into their drivers around 10 years ago. Furmark now doesn't stress the GPU any more than top-end games do. The 280W you detected sounds about right as the default for most manufacturers' 1080tis. The 250W rating will be for the early blower cards.

    Most 3D rendering and scientific software won't max out a card's design power limit the way games do, in my experience. But it does vary.

  • ebergerly Posts: 3,255
    jmtbank said:

     

    Both Nvidia and AMD built detection of Furmark into their drivers around 10 years ago. Furmark now doesn't stress the GPU any more than top-end games do. The 280W you detected sounds about right as the default for most manufacturers' 1080tis. The 250W rating will be for the early blower cards.

    Most 3D rendering and scientific software won't max out a card's design power limit the way games do, in my experience. But it does vary.

    No kidding... a Furmark detector? laugh

    Thanks. Any idea why Iray stress is so much lower than Furmark? 

  • jmtbank Posts: 187
    edited March 2019

    For a year or so after, you could just rename the Furmark exe to avoid it.  

    The reason they did it was that both ATI and Nvidia went through a period where their chips were being accused of running too hot - remember the Nvidia 480 debacle? Furmark was hitting 400W on cards rated at 250.

     

    I don't know what it is in the programming of games/Furmark that wrings every last bit out of these chips but that Iray and other business software doesn't use. I wouldn't mind seeing the figures for Quadro cards with equivalent GPUs.

  • ebergerly Posts: 3,255

    Yeah, I was getting kinda nervous when, during the Furmark run, I saw the fans max out at 100%...

    I was quick on the trigger finger to shut it down after that happened. laugh

  • ebergerly Posts: 3,255
    jmtbank said:

    I don't know what it is in the programming of games/Furmark that wrings every last bit out of these chips but that Iray and other business software doesn't use. I wouldn't mind seeing the figures for Quadro cards with equivalent GPUs.

    My guess is that Iray and other software that uses the GPU is rarely, if ever, optimized to keep the GPU perfectly busy at all times. For example, a core might be doing a render task, but then it has to stop, save the answer in a register, load another task, and continue. Those moments where it takes a breather mean the GPU isn't perfectly busy. Maybe Furmark forces everything to be busy all the time in a way that isn't really practical or possible with real software. So then it comes down to whether certain scenes stress the GPU more fully than others.

    Oddly, though, the GPU shows 99% utilization during Iray, but only around 93% during Furmark...

    Heck, I dunno...

  • Kitsumo Posts: 1,221

    I didn't know about the Furmark detector, but GPU drivers have had special optimizations for popular benchmarks like 3dmark for years, dating all the way back to Quake. Anyway, I think Furmark is designed to throw as much data at the GPUs as possible to keep them occupied, plus it's just fur repeated over and over. I don't even know if there are any textures involved. During an Iray render, the GPU has to deal with different materials, opacities, refractions, etc plus no matter how fast VRAM is, it's not instant - it must take some time for a GPU to access textures in VRAM. It's like comparing a hard drive benchmark result to the actual speed you get when transferring a file. That's my theory anyway.

    I'd say if you want an Iray scene to stress your GPU more, use something with fewer textures and mostly geometry, reflections, refractions, SSS, etc., like a Cornell box.

    [Image: Cornell box Iray render]

    Great. Not even noon and already a GPU performance thread. There goes my Saturday. cheeky

  • ebergerly Posts: 3,255

    This is one reason I'm kinda skeptical about RTX in some ways. Basically it takes the render tasks such as raytracing, materials, physics, etc., and separates them into specific hardware architecture components, each with its associated software. So the question becomes whether the specific scene and software can actually take full advantage of all of that. Of course, if you have an ideal scene that takes full advantage then you're golden. Otherwise, who knows? Which is why I'm gonna wait 6 months or so to see how it all unfolds. 

  • Kitsumo Posts: 1,221

    Yeah, all I've heard is RTX cores are amazing, but you won't get any benefit until developers write code that takes advantage of them. So, kind of like Kinect or Google Glass? Both amazing pieces of technology that never became really useful. I guess time will tell. Right now I can't see myself shelling out the money for another Nvidia card.

  • hjake Posts: 1,273

    Thank you ebergerly for the info  :-)

     

  • ebergerly Posts: 3,255
    edited March 2019

    Another interesting data point:

    I fired up Blender and loaded a couple of the standard Cycles benchmark scenes (BMW and classroom, the GPU versions) and monitored my GPUs as they rendered, and surprisingly they "stressed" my system almost identically to how Studio/Iray does, though slightly less.

    I plotted my 1080ti performance in Blender vs. Iray in the charts below. Keep in mind that the Blender scene rendered in only like 5 minutes so the chart of Blender stuff drops off before the Iray stuff.

    The GPU temps again flattened out in the mid 70s, and power usage was the same as with Iray (e.g., my 250 watt 1080ti flattened out at around 180 watts). And total system power consumption measured at the wall outlet was almost identical to rendering in Iray (i.e., 380 watts). 

    So I guess the bottom line is that if someone tries to convince you that 3D rendering with GPUs will bring your computer to its knees, you have good reason to be a bit skeptical. Furmark? Certainly. 3D rendering? Well, maybe not so much.

    [Attached charts: BlenderIray1080tiOnlyTemps.JPG, BlenderIray1080tiOnlyWatts.JPG, BlenderIray1080tiOnlyFan.JPG - 1080ti temps, watts, and fan speed for Blender vs. Iray]
  • outrider42 Posts: 3,679

    I've explained why Iray does not stress the GPU as much as people assume it does in the benchmark thread. Actually two or three times at this point. Iray is not the most stressful thing you can run. As you now know, it's not even close.

    The short of the story is that not all 100% loads are the same. When your GPU reports 100%, it only takes one area of the GPU hitting 100% for it to report that. It only means something is bottlenecking your card, and that could be one single component or several. Modern GPUs are way more than just a chip on a board. There are many components, and in a lot of ways a GPU is its very own computer on that one board.

    Iray does not stress GPU VRAM. Like at all. This is because the data is all loaded one time, and one time only, for any render scene. Once the data is loaded, it stays isolated in VRAM for the full duration of the render. A video game will not do this. A video game will make a lot more draw calls for new data to get swapped in and out of VRAM. Doing this increases the heat over this area. It depends on the game, though, just like any software. Major GPU benchmarks will operate in similar, though perhaps more extreme, fashions.

    Try running gaming benchmarks like 3DMark sometime. I suspect you would see similar results. My 1080tis seem to have similar temps to yours, at least the first one does. 84C is the very highest I have seen, but it very rarely hits that temp. Actually, I haven't played a game that pushes it that high. 79-80C is the highest I have seen from a game.

  • ebergerly Posts: 3,255

    I've explained why Iray does not stress the GPU as much as people assume it does in the benchmark thread. Actually two or three times at this point.

    Oh really? Cool. I must have missed it. Did you post actual test data like I showed, or just a general statement? Cuz I tend to ignore general statements that don't have actual data to support them. 

  • outrider42 Posts: 3,679

    My statements were confirmed by a GamersNexus video, which I believe I linked. I will not look up that video. But congratulations on proving something I already knew. Good job! :)

  • ebergerly Posts: 3,255
    edited March 2019

    But congratulations for proving something I already knew. Good job! :)

    Hey, is there some sort of "I knew something before you did" contest going on that nobody told me about? laugh If there is, I'll gladly admit I don't know nuthin'. I'm always learning. 

    In any case, for those who don't already know everything, I did a bit of research on "utilization" in the performance graphs for GPUs, and honestly there's not much out there that specifically defines it (that I can find, at least). However, I did find some NVIDIA documentation that might apply, describing GPU Utilization and Memory Utilization:

    unsigned int gpu
    Percent of time over the past second during which one or more kernels was executing on the GPU.
    unsigned int memory
    Percent of time over the past second during which global (device) memory was being read or written.

     

    So I think "utilization" may not mean precisely what we might think. We'd think that it should be a measure of how much utilized or busy the entire GPU is simultaneously, but since there are so many components in something like a 1080ti (cores, SM's, registers, memory controllers, etc.), it may be that it's more of a general indication that "something (anything) was going on in the GPU in the last second", without a clear indication of how busy all the components of the entire GPU were simultaneously. And I think in Task Manger the utilization numbers are for the busiest "engines", which are groups of GPU components (cores, SM's, registers, etc.) brought together to perform certain tasks (like 3D, copy, video, etc.).

    And y'know when you think about it, with all the thousands of cores and SM's and registers and the VRAM and memory controllers and so on, it's probably not a small task to monitor all of that to really determine how busy the entire GPU is. I'm guessing that they just do some more simplified checking to give a general idea, otherwise they'd need to build a big monitoring network in the GPU. 
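
    For what it's worth, those two counters can be read directly from NVML. Here's a minimal sketch using the pynvml bindings (again assuming the nvidia-ml-py package is installed); as far as I can tell, nvmlDeviceGetUtilizationRates() returns exactly the gpu/memory pair described in the documentation quoted above.

        import pynvml  # nvidia-ml-py package (assumed installed)

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

        # The two "utilization" percentages from the NVML docs:
        # .gpu    - % of the sample period with at least one kernel executing
        # .memory - % of the sample period with device memory being read or written
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU busy: {util.gpu}%   VRAM busy: {util.memory}%")

        pynvml.nvmlShutdown()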

    So when I run GPU-Z during an Iray render I get the image below, showing the following:

    • GPU Load (aka,  "GPU rendering core load percentage"; presumably the most loaded engine "compute_0" in Task Manager): 98%
    • Memory Controller Load (presumably transfers to and from memory and the SM's, registers, etc.): 56% 
    • Video Engine Load (presumably an engine that's not being used by Iray): 0% 

    So I'm thinking it may be like a factory with a bunch of production lines, where only a few of them are running at full capacity while others are dead. Utilization would still be 100% in that case, because some process is real busy, even though the entire factory, on average, isn't. In our case, the compute_0 engine that Iray uses was real busy each time we checked, and the memory controller for that engine was half busy, but the video engine was on lunch break.  

    Or something like that. 

    [Attached image: GPUZ.JPG - GPU-Z sensor readings during an Iray render]