I finally captured a Performance Monitor trace during one of the renders of my scene (BTW, I’ll definitely check out that benchmark render scene and post results here as soon as I can).
I took the advice from RoguePilot and experimented with dialing back some of the render settings. It made quite a difference in render time and very little difference in final image quality. That was a useful learning experience.
So that first “all-out” render took about 3.5 hours. Dialing it back a bit (Antialiasing “Good”, Object Accuracy 1 pixel, Shadow Accuracy 2 pixels, Lighting Quality Excellent, Lighting Accuracy 1 pixel) brought it down to 3 hours 4 minutes. Dialing back even further (Antialiasing “Good”, Object Accuracy 2 pixels, Shadow Accuracy 2 pixels, Lighting Quality Good, Lighting Accuracy 4 pixels) dropped the time to 1 hour 24 minutes.
What surprised me was the PerfMon results of that last render:
You can clearly see that some cores (specifically cores 6 through 11) hover around the 70% mark, while the rest hover around 15%. The highlighted black line is the average across all cores: 21.14%.
I suspect this difference stems from how the multi-threaded rendering code is implemented. For some reason it is not able to (or is designed not to) use the cores evenly. The workloads are not created equal…
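That lopsided core usage is the pattern you'd expect if the renderer carves the frame into fixed regions per thread up front, so threads that land on the complex parts of the scene stay busy while the others finish early and idle. Here's a toy sketch of that idea (Python, with made-up per-bucket costs; this is just an illustration, not anything from the actual render engine) comparing static up-front splitting against a scheme where work goes to whichever worker is least loaded:

```python
# Toy illustration (NOT the renderer's actual code): why splitting the
# frame into fixed per-thread chunks can leave some cores underused.
# Buckets near complex geometry cost more; these costs are invented.
bucket_costs = [9, 8, 9, 7, 1, 1, 2, 1]  # hypothetical per-bucket render cost

def static_split(costs, workers):
    """Assign contiguous chunks of buckets to each worker up front."""
    chunk = len(costs) // workers
    return [sum(costs[i * chunk:(i + 1) * chunk]) for i in range(workers)]

def dynamic_assign(costs, workers):
    """Hand each bucket (largest first) to the least-loaded worker."""
    loads = [0] * workers
    for c in sorted(costs, reverse=True):
        loads[loads.index(min(loads))] += c
    return loads

print(static_split(bucket_costs, 2))    # -> [33, 5]  (very uneven)
print(dynamic_assign(bucket_costs, 2))  # -> [19, 19] (balanced)
```

With static splitting, the worker stuck with the expensive half does almost all the work, which shows up in a monitor exactly like a few cores pinned high and the rest loafing.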
Edit: Trying to get the PerfMon capture to display…dammit…I give up…blasted forum software…