Daz3D capped GPU clock speed at 405 MHz while rendering.

kunsky54 Posts: 2

[Daz3D capped GPU clock speed at 405 MHz while rendering.] I don't know what to do anymore. The render time has also increased 2x, and it's killing me. Sorry for my English.

My laptop specs: Core i7-6700HQ, GTX 960M, 8GB RAM


Comments

  • SimonJM Posts: 5,945

    Are you sure it's not being thermally throttled? Look at something like GPU-Z or MSI Afterburner to check and/or correct.
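    If you'd rather script the check than watch a GUI, here's a minimal sketch using NVML's Python bindings (pynvml, installed via the nvidia-ml-py package). This is purely illustrative monitoring code, not anything Daz ships:

    ```python
    # Minimal GPU clock/temperature check via NVML (illustrative sketch).
    # Run it while a render is in progress to see where the clocks sit.
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first (and here, only) GPU

    core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)  # MHz
    mem = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)        # MHz
    temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    util = pynvml.nvmlDeviceGetUtilizationRates(gpu)

    print(f"core {core} MHz, mem {mem} MHz, {temp}C, {util.gpu}% load")
    pynvml.nvmlShutdown()
    ```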

  • SimonJM said:

    Are you sure it's not being thermally throttled? Look at something like GPU-Z or MSI Afterburner to check and/or correct.

    GPU ran normally while rendering with Blender 3D.

  • kunsky54 said:
    SimonJM said:

    Are you sure it's not being thermally throttled? Look at something like GPU-Z or MSI Afterburner to check and/or correct.

    GPU ran normally while rendering with Blender 3D.

    Using the GPU-assisted renderer?

  • fastbike1 Posts: 4,074

    The Nvidia driver will throttle the clock speed to stay below the thermal limit. Nothing to do with Studio or Windows.
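    For what it's worth, the driver will also tell you why it is holding the clocks down. Here is a hedged sketch reading NVML's throttle-reason bitmask, again via pynvml (the constant names are from those bindings; treat this as an illustration, not a recipe):

    ```python
    # Decode NVML's "clocks throttle reasons" bitmask (illustrative sketch).
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    mask = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(gpu)

    reasons = {
        pynvml.nvmlClocksThrottleReasonGpuIdle: "GPU is idle",
        pynvml.nvmlClocksThrottleReasonSwPowerCap: "software power cap",
        pynvml.nvmlClocksThrottleReasonHwSlowdown: "hardware slowdown (thermal or power brake)",
    }
    active = [label for bit, label in reasons.items() if mask & bit]
    print("throttle reasons:", ", ".join(active) or "none")
    pynvml.nvmlShutdown()
    ```

    If this prints "GPU is idle" during a render, the card isn't throttling; it simply never ramped up.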

  • Thanks guys, but I finally fixed it by unchecking OptiX Prime Acceleration.

  • ebergerly Posts: 3,255
    kunsky54 said:

    Thanks guys, but I finally fixed it by unchecking OptiX Prime Acceleration.

    I think that's a bit like saying "My car engine is idling rough, so I turned off the car radio and that fixed it". Optix is a render engine designed to accelerate ray tracing. Your GPU clock speed is a function of BIOS and driver configurations, based on design parameters, thermal requirements, and user clock settings.
  • prixat Posts: 1,585

    OptiX runs on CUDA; it could be the cause of GPU overheating, leading to the throttling described.

    I think Blender uses Embree (Intel's equivalent to OptiX), which does not use the GPU.

  • ebergerly Posts: 3,255
    CUDA does not cause overheating. Cooling system problems, overclocking, or BIOS/driver problems can cause overheating.
  • ebergerly said:
    CUDA does not cause overheating. Cooling system problems, overclocking, or BIOS/driver problems can cause overheating.

    I understand your meaning, but "CUDA" does literally cause overheating. Cooling systems remove heat from the GPU; they don't add it. A program requests the GPU to do things, and doing things creates heat, so the actual cause is the program (the source of the heat). I'm not arguing with you. The correct fix is usually to lower the clock speed or improve the cooling, but only because those measures prevent overheating, not because they cause it.

    "My car engine is idling rough, so I turned off the car radio and that fixed it"

    Believe it or not, I had that happen. The roughness was mild, but by decreasing the electrical load the roughness smoothed out. The real fix was to replace the alternator (maybe the battery too). So yes, turning off OptiX is probably only addressing a symptom and not the real problem.

  • By the way, you will probably experience thermal throttling around 75 degrees Celsius. This seems to be done by the manufacturer. Search for "GTX 960M thermal throttling" on Google. I didn't read too much about it, but with laptops there are limits to how much heat a person's lap can handle. Either they don't feel confident that the chip can handle 100C, or it's due to design constraints. So if this is your problem, then you may have found your best solution. Optix is probably using up any spare cycles that Iray isn't using, causing the GPU to work hard enough to put out more heat than your laptop's cooling can dissipate.
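    Rather than guessing at the 75C figure, you can ask the driver for the card's own slowdown and shutdown thresholds. A small pynvml sketch, assuming the driver exposes these thresholds for your particular GPU:

    ```python
    # Ask the driver for the card's thermal thresholds (illustrative sketch;
    # may raise NVML_ERROR_NOT_SUPPORTED on some mobile GPUs).
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    slow = pynvml.nvmlDeviceGetTemperatureThreshold(
        gpu, pynvml.NVML_TEMPERATURE_THRESHOLD_SLOWDOWN)
    shut = pynvml.nvmlDeviceGetTemperatureThreshold(
        gpu, pynvml.NVML_TEMPERATURE_THRESHOLD_SHUTDOWN)
    print(f"slowdown threshold: {slow}C, shutdown threshold: {shut}C")
    pynvml.nvmlShutdown()
    ```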

  • ebergerly Posts: 3,255

    GPUs and other electronics are designed to generate a certain amount of heat at 100% utilization. It's reflected in the TDP, or Thermal Design Power. Most electronic equipment is designed to operate at its "continuous" rating continuously, and it will generate a fixed amount of heat. And a non-junk design will include sufficient cooling to keep the electronics far below damage level. Damage typically occurs above roughly 100C, and equipment will generally operate closer to 80C as long as the cooling system is working.

    Either the equipment is running at 100% utilization or it's not. It's not like Optix will magically make the GPU run super hot, way above the design temps. The BIOS and drivers will crank up the fans to prevent it reaching 100+C. And if the cooling system is messed up, there is a second layer of protection that shuts down the device before damage occurs.

    Contrary to popular belief, rendering does not push equipment into the danger zone, or somehow stress it beyond normal. As long as the equipment is designed to operate at its continuous temperature, and the cooling is working as designed, it should operate for its design life, which, according to AMD and others, is around 6 or 7 years. 

    I even tested a piece-of-junk laptop rendering continuously for 24+ hours, and CPU temps never got even close to any danger zone. In fact they barely made it to 65C the entire time. They flattened after maybe 20 minutes and stayed that way for the entire run. And that was a throwaway laptop.

    What causes overheating is operating the equipment outside its design specifications, whether through overclocking, blocked fan vents, damaged fans, or users messing with the BIOS.

  • ebergerly Posts: 3,255

    By the way, you're assuming that "Optix using extra cycles", even if it happened, would be beyond the capability of the BIOS fan controller and fan speed to compensate. Rarely are your fans running flat out, or even near that, during renders. Even if some software pushed the GPU over the edge or whatever, the fans could probably crank up some more and compensate. 

  • ebergerly Posts: 3,255

    And to inject some data into the discussion:

    I just rendered a large scene, with and without Optix, and monitored GPU utilization, temps, and fan speeds. This scene consumed over 30GB of system RAM and about 9.5GB of my 1080ti's 11GB. Here are the results:

    • Without Optix:
      • Render time 15 minutes
      • GPU utilization between 98-100%
      • Max GPU temp: 75C
      • Max fan speed: 70%
    • With Optix: 
      • Render time 14 minutes
      • GPU utilization between 98-100%
      • Max GPU temp: 75C
      • Max fan speed: 69%

    In fact, Optix shortened the render time as it should, but temps, utilization, and fan speeds were no different. There wasn't even a need to crank up fan speed when Optix was enabled.
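    (If anyone wants to reproduce this kind of comparison, a simple polling loop is enough. A sketch, again with pynvml; note that fan speed reporting is not supported on every machine, particularly laptops:)

    ```python
    # Poll utilization, temperature, and fan speed once a second while a
    # render runs (illustrative sketch; stop with Ctrl-C).
    import time
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        while True:
            util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
            temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
            try:
                fan = f"{pynvml.nvmlDeviceGetFanSpeed(gpu)}%"
            except pynvml.NVMLError:
                fan = "n/a"  # many laptops don't report fan speed via NVML
            print(f"util {util:3d}%  temp {temp}C  fan {fan}")
            time.sleep(1)
    finally:
        pynvml.nvmlShutdown()
    ```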

  • prixat Posts: 1,585

    On investigating further, 405MHz is the idle frequency of the GPU memory.

    This may not be a matter of throttling down; it's a failure to throttle up.

    In a similar case, a Lenovo laptop was also stuck at 405MHz during gameplay (not all games were affected).

    The answer in that case was to reinstall the chipset drivers and then the NVIDIA drivers. The thing that messed up the motherboard drivers was the usual suspect: a Windows update!

    That should allow you to use OptiX again.
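    (One quick way to confirm a "stuck at idle" diagnosis, before and after reinstalling drivers: NVML reports the current performance state. A sketch; P0 is full performance, P8 and above are idle states:)

    ```python
    # Read the GPU's current performance state (illustrative sketch).
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)
    # P0 = maximum performance; P8 and higher are idle/low-power states.
    # Seeing P8 while a render is supposedly running on the GPU supports
    # the "failure to throttle up" theory.
    print(f"current performance state: P{pstate}")
    pynvml.nvmlShutdown()
    ```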

  • ebergerly Posts: 3,255
    While drivers certainly could be an issue, I don't think the OP has provided nearly enough info to describe the problem yet. And you can't solve a problem if you don't know what the problem is. Many things could affect apparent GPU clock speed (assuming that's what's really going on): BIOS or power settings, incorrect monitoring, a misunderstanding, something about Studio/Iray, and so on. Just because driver updates seemed to work for another user (how many years and driver updates ago?) doesn't mean it applies here.
  • Ebergerly, I don't think you understood the point of what I said. We just aren't on the same page. Your perception of cause and effect is different from mine; it's not a big deal.

     

    By the way, you're assuming that "Optix using extra cycles", even if it happened, would be beyond the capability of the BIOS fan controller and fan speed to compensate. Rarely are your fans running flat out, or even near that, during renders. Even if some software pushed the GPU over the edge or whatever, the fans could probably crank up some more and compensate. 

    I didn't assume anything. Also, you misquoted me. I said "spare cycles", which is a phrase with a specific meaning in computing (although it has changed over the years).

  • ebergerly Posts: 3,255

     

    I didn't assume anything. Also, you misquoted me. I said "spare cycles", which is a phrase with a specific meaning in computing (although it has changed over the years).

    Well, yeah, you did assume something when you said "Optix is probably using up any spare cycles that Iray isn't using, causing the GPU to work hard enough to put out more heat than your laptop's cooling can dissipate." I merely replied that no, it's probably not causing the GPU to work hard enough to put out more heat than your laptop's cooling can dissipate, because even if it did put out more heat (which I questioned), the cooling system would respond by increasing fan speed. And I gave actual test results showing that Optix appears to be irrelevant to the GPU workload or cooling of the GPU, since neither temps nor fan speed change when it's enabled.

    But yes, I did misquote you and replaced "spare" with "extra" for which I apologize.  

     

  • I specifically said "probably", indicating that in my opinion there is a high probability that Optix is using spare cycles. Had I assumed, I would have said Optix is using spare cycles. Can you tell the difference?

    And I gave actual test results showing that Optix appears to be irrelevant to the GPU workload or cooling of the GPU, since neither temps nor fan speed change when it's enabled.

    Your test results are irrelevant to the original poster's situation. Your graphics card isn't even the same chipset, let alone used in the same environment as a laptop. In your results, your GPU wasn't even maxed out. Do you really think a GTX 960M can float at 98-100 percent load doing the same job as a 1080ti with 11GB? Your own test results show a max of 75 degrees. I already stated that is about the point where the laptop manufacturer seems to have forced thermal throttling for that product.

    I agree with 98% of what you are saying. I was just trying to clear up a few things, but I don't want to upset anyone. Since the OP is absent, I'm going to move on.

  • ebergerly Posts: 3,255

    I would encourage folks not to equate general, unsupported hunches with facts. What might seem intuitively obvious might be totally incorrect.

    Curious about the 405MHz GPU "cap", I fired up my old desktop with a very old GTX-745. Surprisingly, at idle (GPU doing nothing), the GPU memory clock ran at 405MHz, exactly the same as the OP's GTX-960M. And during rendering, the only times it stayed at 405MHz were:

    • It wasn't selected to participate in the render, or
    • It ran out of VRAM (4GB) on a big scene, and didn't participate in the render (rendering crashed to CPU)

    Also during rendering (Optix ON) when the GTX-745 was flat at 100% utilization, the memory clock increased and flattened at 900MHz, while GPU temps flattened at 82C, and fan speed flattened at only 38%, which means the fans had a lot of room to spare. And there was no evidence of throttling, since the clock speeds never changed as temps increased.   

    So again, there's no evidence whatsoever that Optix has any relevance to GPU load or heating, and even if it did, there's no indication that the fans can't crank up and compensate to prevent thermal throttling. Nor is there any evidence the OP's problem was related to overheating, especially since the "cap" turns out to be the clock rate at idle.

    My guess, as has been stated before, is that the GPU stayed at idle during the render, because the OP either didn't have the GPU correctly selected for rendering, or (more likely) the scene overloaded the GPU's VRAM, causing it to drop out of the render. And that might explain why it worked okay in Blender, which, presumably, was rendering a scene with much less demand on GPU VRAM than a Studio scene.
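    (The VRAM theory is easy to check directly: watch memory use as the render starts. One more pynvml sketch:)

    ```python
    # Check whether a scene fits in VRAM (illustrative sketch). Run while
    # the render is loading; if "used" climbs to near "total" and the GPU
    # then goes idle, Iray has likely dropped the card and fallen back to CPU.
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)  # values are in bytes
    gib = 1024 ** 3
    print(f"VRAM used {mem.used / gib:.2f} GiB of {mem.total / gib:.2f} GiB")
    pynvml.nvmlShutdown()
    ```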

    Now we're free to imagine that the OP's case was special for some reason, but without actual data to support that it becomes just an unsupported hunch. Though I'd be happy to change my mind if other, compelling facts were presented. 

    BTW, attached are screenshots of GPU-Z with the GTX-745 both at idle and loaded to 100% during a render.

    [Attachments: gtx745IDLE.png (GPU-Z at idle), gtx745LOADED.png (GPU-Z under load)]