Recommended Hardware Guide for Daz3D iRay dForce

Hi all,

As I cannot find a recent post which answers my question, I'm creating a new thread. If I've missed an existing post, please point me in the right direction. :-)

I'm looking for a buying guide which recommends the best hardware per budget, and which best supports the main features of Daz3D (Iray, dForce). I've seen buying guides for gaming setups that give you a few categories to choose from: e.g. budget (under $800), mid-range ($800-$1,500), upper-range ($1,500-$2,500), expert-range ($2,500 and up).
It would be great if something like that could be maintained for Daz3D, or at the very least a buying guide which tells you what to look out for in terms of GPU, CPU, type and amount of RAM, etc.

Thanks in advance!

Comments

  • JamesJAB Posts: 1,299

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower depending on laptop cooling)
    GTX 1070 8GB
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower depending on laptop cooling)
    GTX 1080 8GB
    Quadro P5000 16GB
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2)
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Nvidia Titan V 12GB (HBM2)(Geforce 2000? Series,  Volta GPU)

  • To save you some time, anything under the Quadro banner is going to be very expensive, and you're likely to find a GTX-series GPU with more CUDA cores (required for Iray) for either the same price or less.

    For example, a GTX 1080 Ti is $700 direct from Nvidia and has 3584 CUDA cores and 11GB of VRAM. The Quadro P4000 has only 8GB of VRAM and only 1792 CUDA cores, and can be purchased directly from Nvidia for $900. The Quadro P5000 has 16GB of VRAM but only 2560 CUDA cores, and sells direct from Nvidia for $2000.

    These are important numbers to know, understand, and consider, especially if you aren't getting paid for this stuff.
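    To make the comparison concrete, those specs can be boiled down to a rough CUDA-cores-per-dollar figure. This is just a back-of-the-envelope sketch using the list prices quoted above (prices change, so treat the constants as assumptions; the P4000 appears with 8GB in the list further up):

```python
# Back-of-the-envelope price/performance sketch.
# Prices and specs are point-in-time assumptions taken from this thread;
# check Nvidia's site for current figures before buying.
cards = {
    "GTX 1080 Ti":  {"cuda_cores": 3584, "vram_gb": 11, "price_usd": 700},
    "Quadro P4000": {"cuda_cores": 1792, "vram_gb": 8,  "price_usd": 900},
    "Quadro P5000": {"cuda_cores": 2560, "vram_gb": 16, "price_usd": 2000},
}

def cores_per_dollar(card):
    """Crude value metric: more CUDA cores per dollar = faster Iray per dollar."""
    return card["cuda_cores"] / card["price_usd"]

# Print the cards from best to worst value by this metric.
for name, card in sorted(cards.items(), key=lambda kv: -cores_per_dollar(kv[1])):
    print(f"{name}: {cores_per_dollar(card):.2f} CUDA cores per dollar")
```

    By this crude metric the GTX card is several times the value of either Quadro, which is the point being made above; VRAM capacity is the one place the P5000 pulls ahead.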

    Quadro cards are geared toward this sort of thing, however, whereas GTX cards are geared towards gaming. The main difference is core clock speed, which affects how quickly the cards can render. A Quadro GP100 with 3584 CUDA cores and 16GB of VRAM will make the latest game look stunning, at about 12 FPS and for $7,000. A 1080 Ti can do it at 60 FPS for 1/10th the cost.
    Quadros run slower so they can fully process the render data and draw the intended image as accurately as possible.

    However, using third-party software such as EVGA's PrecisionX or MSI's Afterburner, you can reduce the clock speed of a GTX-series GPU to some degree so that it remains stable during long renders.

    Quadros are marketed to industry professionals (engineers in automotive, architectural, medical imaging, and other six-figure jobs, not the guy who quits his day job to be a freelance digital artist taking random gigs for book covers or indie video games) who may need custom drivers that do only what they need the card to do, without a lot of extra stuff, or who need the driver instruction set arranged in a more efficient order; that level of customer care is part of the price.

     

  • OZ-84 Posts: 111
    JamesJAB said:

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower depending on laptop cooling)
    GTX 1070 8GB
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower depending on laptop cooling)
    GTX 1080 8GB
    Quadro P5000 16GB
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2)
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Nvidia Titan V 12GB (HBM2)(Geforce 2000? Series,  Volta GPU)

    Well, I doubt the Titan V is supported yet :-/

     

  • OZ-84 Posts: 111

    To save you some time, anything under the Quadro banner is going to be very expensive, and you're likely to find a GTX-series GPU with more CUDA cores (required for Iray) for either the same price or less.

    For example, a GTX 1080 Ti is $700 direct from Nvidia and has 3584 CUDA cores and 11GB of VRAM. The Quadro P4000 has only 8GB of VRAM and only 1792 CUDA cores, and can be purchased directly from Nvidia for $900. The Quadro P5000 has 16GB of VRAM but only 2560 CUDA cores, and sells direct from Nvidia for $2000.

    These are important numbers to know, understand, and consider, especially if you aren't getting paid for this stuff.

    Quadro cards are geared toward this sort of thing, however, whereas GTX cards are geared towards gaming. The main difference is core clock speed, which affects how quickly the cards can render. A Quadro GP100 with 3584 CUDA cores and 16GB of VRAM will make the latest game look stunning, at about 12 FPS and for $7,000. A 1080 Ti can do it at 60 FPS for 1/10th the cost.
    Quadros run slower so they can fully process the render data and draw the intended image as accurately as possible.

    However, using third-party software such as EVGA's PrecisionX or MSI's Afterburner, you can reduce the clock speed of a GTX-series GPU to some degree so that it remains stable during long renders.

    Quadros are marketed to industry professionals (engineers in automotive, architectural, medical imaging, and other six-figure jobs, not the guy who quits his day job to be a freelance digital artist taking random gigs for book covers or indie video games) who may need custom drivers that do only what they need the card to do, without a lot of extra stuff, or who need the driver instruction set arranged in a more efficient order; that level of customer care is part of the price.

     

    Sorry, but there is so much wrong here that I have to reply to this.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason why Quadros are clocked lower is not because a GTX of the same class renders less accurately or works faultily if used 24/7. The main reason is that those chips are way bigger (mostly because of more CUDA cores) and they would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same as the ones for the TIs, with some feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason why Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM / without fans available, and some Quadro variants do differ in hardware, e.g. in double precision. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

  • wteening Posts: 20

    So, if I read correctly, it's best to buy a GTX card? Which card delivers the best performance for the price?
    Which CPU is the best for the price?

  • OZ-84 Posts: 111
    wteening said:

    So, if I read correctly, it's best to buy a GTX card? Which card delivers the best performance for the price?
    Which CPU is the best for the price?

    If you render with your graphics card, the CPU is not so important. Any Intel i5 and above will do fine. If you have the money, buy a 1080 Ti; because of the 11GB of memory it's really worth it. At least in my opinion... if you don't want to spend so much, a 1070 will do OK. The minimum, in my opinion, is a 6GB 1060.

  • OZ-84 said:

    Sorry, but there is so much wrong here that I have to reply to this.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason why Quadros are clocked lower is not because a GTX of the same class renders less accurately or works faultily if used 24/7. The main reason is that those chips are way bigger (mostly because of more CUDA cores) and they would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same as the ones for the TIs, with some feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason why Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM / without fans available, and some Quadro variants do differ in hardware, e.g. in double precision. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102 chip) and the Quadro GP100 (GP100 Tesla chip on a PCIe platform) both have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00 depending on the source is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5 memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid, as millions of dollars may depend on the project, so Nvidia puts much more into Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards but has to make perfect drivers for content creation at the professional level.

    The only Pascal chip that has all 3840 CUDA cores active that I know of is the Quadro P6000 with 24GB of GDDR5 memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly when using business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

  • wteening Posts: 20

    What kind of speeds can I expect if I purchase a 1080ti card? If I would like to improve dForce performance, is this the way to go, as well? I'd love to see a video of someone just playing around in Daz using a 1080ti, to see if it's worth my investment.

  • JamesJAB Posts: 1,299
    OZ-84 said:

    Sorry, but there is so much wrong here that I have to reply to this.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason why Quadros are clocked lower is not because a GTX of the same class renders less accurately or works faultily if used 24/7. The main reason is that those chips are way bigger (mostly because of more CUDA cores) and they would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same as the ones for the TIs, with some feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason why Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM / without fans available, and some Quadro variants do differ in hardware, e.g. in double precision. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102 chip) and the Quadro GP100 (GP100 Tesla chip on a PCIe platform) both have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00 depending on the source is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5 memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid, as millions of dollars may depend on the project, so Nvidia puts much more into Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards but has to make perfect drivers for content creation at the professional level.

    The only Pascal chip that has all 3840 CUDA cores active that I know of is the Quadro P6000 with 24GB of GDDR5 memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly when using business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

    The Titan X is the old one from when Pascal was all new; the GTX 1080 Ti outperforms it in most tasks. This year it was replaced by the Titan XP.
    https://www.nvidia.com/en-us/titan/titan-xp/
    The Titan XP has the same core configuration as the Quadro P6000.

    There is one hardware difference between the GeForce and top-range Quadro cards: the Quadro K6000, M5000, M6000, P5000, P6000, and GP100 all have ECC VRAM.
    All Quadro cards are pure Nvidia reference boards with no customization allowed by the OEM (not even aftermarket coolers).
    Quadro cards come with a higher level of support, including things like on-site support.
    Quadro drivers do not have game-specific optimizations. (This does not mean they will suck at gaming; you can expect a comparable Quadro to score a few percent lower FPS in games.)
    Quadro drivers come in two flavors: QNF (Quadro New Feature driver) and ODE (Optimal Driver for Enterprise).

  • OZ-84 Posts: 111
    OZ-84 said:

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102 chip) and the Quadro GP100 (GP100 Tesla chip on a PCIe platform) both have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00 depending on the source is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5 memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid, as millions of dollars may depend on the project, so Nvidia puts much more into Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards but has to make perfect drivers for content creation at the professional level.

    The only Pascal chip that has all 3840 CUDA cores active that I know of is the Quadro P6000 with 24GB of GDDR5 memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly when using business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

    Yeah ... well... no problem billyben ;-)  

    - I can't see how the P6000 is blown away by the Titan X in gaming.

    https://wccftech.com/nvidia-pascal-quadro-p6000-gaming-benchmarks/

    All benchmarks I found show similar results...

     

    - And I really can't see where you get the idea that Nvidia uses some special magical professional driver designed only for Quadro cards.

    In fact, GTX cards are running on cut-down Quadro drivers:

    https://wccftech.com/nvidia-titan-xp-titan-x-385-12-driver-update-massive-performance/

    - 90% of the reason why Quadro cards cost so much more is that Nvidia likes it that way; if the GTX drivers weren't crippled, there would be no good reason to buy Quadros.

    https://www.pcgamesn.com/nvidia-geforce-server

    - In terms of card design I can also see no significant difference :-)

    http://home.coolpc.com.tw/aes/open/nv_quadro-p6000/coolpc_p6000-37.jpg

    https://abload.de/img/nvidia-geforce-gtx-103iuix.png

     

    So, if the drivers are the same, the GPU used on the cards is the same, and even the PCB looks similar... why do you think that Quadros are "designed" for professional use and GTX cards are not?

    Is it because Nvidia says so? 

     

  • Raymand Posts: 51

    This is a very informative discussion. Can I add a couple of follow-on questions? First, will installing two graphics cards (let's assume they have identical memory and cores) double the rendering performance?

    Second, assuming all the iRay work is passed off to the graphics card (or cards), will having more CPU cores enhance the performance of other parts of the program?

     

  • JamesJAB Posts: 1,299
    Raymand said:

    This is a very informative discussion. Can I add a couple of follow-on questions? First, will installing two graphics cards (let's assume they have identical memory and cores) double the rendering performance?

    Second, assuming all the iRay work is passed off to the graphics card (or cards), will having more CPU cores enhance the performance of other parts of the program?

     

    Using two identical GPUs will cut the total render time roughly in half (give or take a few seconds), because the iterations are split between the GPUs while each GPU still takes the same amount of time per iteration.
    Keep in mind, though, that the two GPUs do not need to have the same amount of VRAM, or even be from the same GPU generation.
    As long as the scene can physically fit into a GPU's VRAM, that video card will participate in the render job.

    With identical GPUs the render time scaling is very linear. Let's take the GeForce GTX 1080 Ti for example, using the Iray benchmark scene that's on the forums here.
    1 GTX 1080 Ti card will complete the benchmark in roughly 2 minutes, 2 cards will complete it in roughly 1 minute, 3 cards will complete it in roughly 40 seconds, and 4 cards will complete it in roughly 30 seconds.
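    That near-linear scaling can be sketched as a one-line estimate (assuming iterations split evenly across identical cards, which the timings above roughly bear out; real runs add a few seconds of fixed overhead for scene upload):

```python
# Estimate multi-GPU Iray render time, assuming the iterations are split
# evenly across identical cards (the roughly linear scaling described above).
def estimated_render_seconds(single_gpu_seconds, n_gpus):
    return single_gpu_seconds / n_gpus

# One GTX 1080 Ti finishes the benchmark scene in ~120 seconds:
for n in (1, 2, 3, 4):
    print(f"{n} card(s): ~{estimated_render_seconds(120, n):.0f} s")
```

    Treat the result as a best case: it ignores per-iteration variance and the fixed scene-loading time, so the real speedup is slightly less than linear.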

    More CPU cores can help if you are using a slower GPU. If you are running a fast GPU like the GTX 1080 Ti, adding the CPU will only knock 2-10 seconds off, depending on how many threads the CPU has. When running multiple fast GPUs, adding the CPU into the render job can actually slow the render down, because the GPUs end up waiting for the CPU to complete its tasks.

    Under regular Daz Studio operation your CPU will use up to two threads regardless of how many cores it has. Functions inside Daz Studio can be coded to utilize more CPU resources: 3Delight rendering uses your whole CPU (unless there is some maximum number of threads allowed based on licensing), the old Optitex dynamic cloth system will use all of your CPU threads, and dForce will use all of your threads if it's set as the simulation device.

  • prixat Posts: 1,083

    Here are some timings for dForce simulations.

    Running the included 'bedsheet on figure' simulation:
    i5-2500 ------ 3m40s
    GTX 750 Ti ----- 1m10s

     

  • JamesJAB Posts: 1,299
    edited January 8
    prixat said:

    Here are some timings for dForce simulations.

    Running the included 'bedsheet on figure' simulation:
    i5-2500 ------ 3m40s
    GTX 750 Ti ----- 1m10s

     

    My turn: Notebook - Dell Precision M6700
    Geforce GTX 980M ----- 59 seconds
    Core i7-3840QM ----- 2m 11s

  • JD_Mortal Posts: 489
    edited January 8

    dForce uses OpenCL, not CUDA cores...

    However... Nvidia is doing major updates to OpenCL, taking advantage of CUDA and the new Volta-series cores (only found in the Titan V and the new Tesla cards).

    If it were possible, I would say use one Nvidia card for CUDA and one Radeon for dForce (OpenCL)... However, if the Nvidia driver detects another video card, other than the one made by Intel, it will disable all CUDA cores on the card. (There is no way to trick it... However, using two computers is possible: one with the Radeon card for doing dForce, then rendering remotely to another computer with the Nvidia card in it, for the CUDA cores.)

    https://browser.geekbench.com/opencl-benchmarks

    https://browser.geekbench.com/cuda-benchmarks

  • prixat Posts: 1,083
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    That doesn't really make sense; you can see from your own link that Nvidia cards have no problems with OpenCL.

  • JD_Mortal Posts: 489
    prixat said:
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    That doesn't really make sense, you can see from your own link that nVidia cards have no problems with OpenCL. 

    Being "compatible" and being "optimized for speed" are not the same.

    My shoes are compatible with diving, but flippers are optimized for speed when diving. They are turning the shoes into flippers with the use of CUDA cores (to make up for the shortfalls of the slower OpenCL code).
