Recommended Hardware Guide for Daz3D iRay dForce

Hi all,

As I cannot find a recent post which answers my question, I'm creating a new thread. If I've missed an existing post, please point me in the right direction. :-)

I'm looking for a buying guide that recommends the best hardware per budget for the main features of Daz3D (iRay, dForce). I've seen buying guides for gaming setups that give you a few categories to choose from: e.g. budget (under $800), mid-range ($800-$1,500), upper-range ($1,500-$2,500), expert-range ($2,500 and up).
It would be great if something like that could be maintained for Daz3D, or at the very least a buying guide that tells you what to look out for in terms of GPU, CPU, type and amount of RAM, etc.

Thanks in advance!


Comments

  • JamesJAB Posts: 1,760

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1070 8GB
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1080 8GB
    Quadro P5000 16GB
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2)
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Nvidia Titan V 12GB (HBM2)(Geforce 2000? Series,  Volta GPU)

  • To save you some time, anything under the Quadro banner is going to be very expensive, and you're likely to find a GTX-series GPU with more CUDA cores (required for Iray) for either the same price or less.

    For example, a GTX 1080 Ti is $700 direct from Nvidia and has 3584 CUDA cores and 11GB of VRAM. The Quadro P4000 has only 8GB of VRAM and only 1792 CUDA cores, and can be purchased directly from Nvidia for $900. The Quadro P5000 has 16GB of VRAM but only 2560 CUDA cores, and sells direct from Nvidia for $2000.

    These are important numbers to know, understand, and consider, especially if you aren't getting paid for this stuff.
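
    To make that concrete, here is a quick back-of-the-envelope sketch (plain Python; the prices and core counts are the ones quoted above, so current figures will differ) of what you pay per CUDA core:

    ```python
    # Dollars per CUDA core, using the list prices and core counts quoted above.
    cards = {
        "GTX 1080 Ti": (700, 3584),   # (price in USD, CUDA cores)
        "Quadro P4000": (900, 1792),
        "Quadro P5000": (2000, 2560),
    }

    for name, (price, cores) in cards.items():
        print(f"{name}: ${price / cores:.2f} per CUDA core")
    # GTX 1080 Ti: $0.20, Quadro P4000: $0.50, Quadro P5000: $0.78
    ```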

    Quadro cards are geared toward this sort of thing, however, whereas GTX cards are geared toward gaming. The main difference is core clock speed, which affects how quickly the cards can render. A Quadro GP100 with 3584 CUDA cores and 16GB of VRAM will make the latest game look stunning, at about 12 FPS and for $7,000. A 1080 Ti can do it at 60 FPS for 1/10th the cost.
    Quadros run slower so they can fully process the render data and draw the intended image as accurately as possible.

    However, using third-party software such as EVGA's PrecisionX or MSI's Afterburner, you can reduce the clock speed of a GTX-series GPU to some degree so that it remains stable during long renders.

    Quadros are marketed to industry professionals (engineers in automotive, architectural, medical imaging, and other six-figure jobs, not the guy who quits his day job to be a freelance digital artist taking random gigs for book covers or indie video games) who may need custom drivers that do only what they need the card to do, without a lot of extra stuff, or who need the driver instruction set arranged in a more efficient order. That level of customer care is part of the price.

     

  • OZ-84 Posts: 128
    JamesJAB said:

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1070 8GB
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1080 8GB
    Quadro P5000 16GB
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2)
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Nvidia Titan V 12GB (HBM2)(Geforce 2000? Series,  Volta GPU)

    Well, I doubt the Titan V is supported yet :-/

     

  • OZ-84 Posts: 128

    To save you some time, anything under the Quadro banner is going to be very expensive, and you're likely to find a GTX-series GPU with more CUDA cores (required for Iray) for either the same price or less.

    For example, a GTX 1080 Ti is $700 direct from Nvidia and has 3584 CUDA cores and 11GB of VRAM. The Quadro P4000 has only 8GB of VRAM and only 1792 CUDA cores, and can be purchased directly from Nvidia for $900. The Quadro P5000 has 16GB of VRAM but only 2560 CUDA cores, and sells direct from Nvidia for $2000.

    These are important numbers to know, understand, and consider, especially if you aren't getting paid for this stuff.

    Quadro cards are geared toward this sort of thing, however, whereas GTX cards are geared toward gaming. The main difference is core clock speed, which affects how quickly the cards can render. A Quadro GP100 with 3584 CUDA cores and 16GB of VRAM will make the latest game look stunning, at about 12 FPS and for $7,000. A 1080 Ti can do it at 60 FPS for 1/10th the cost.
    Quadros run slower so they can fully process the render data and draw the intended image as accurately as possible.

    However, using third-party software such as EVGA's PrecisionX or MSI's Afterburner, you can reduce the clock speed of a GTX-series GPU to some degree so that it remains stable during long renders.

    Quadros are marketed to industry professionals (engineers in automotive, architectural, medical imaging, and other six-figure jobs, not the guy who quits his day job to be a freelance digital artist taking random gigs for book covers or indie video games) who may need custom drivers that do only what they need the card to do, without a lot of extra stuff, or who need the driver instruction set arranged in a more efficient order. That level of customer care is part of the price.

     

    Sorry, but there is so much wrong here that I have to reply.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason Quadros are clocked lower is not that a GTX of the same class renders less accurately or works faultily when used 24/7. The main reason is that those chips are much bigger (mostly because of more CUDA cores) and would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same ones as for the Tis, with some extra feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM or without fans available, and some Quadro variants differ hardware-wise regarding double precision and such. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

  • So, if I read correctly, it's best to buy a GTX card? Which card delivers the best performance for the price?
    Which CPU is the best for the price?

  • OZ-84 Posts: 128
    wteening said:

    So, if I read correctly, it's best to buy a GTX card? Which card delivers the best performance for the price?
    Which CPU is the best for the price?

    If you render with your graphics card, the CPU is not so important; any Intel i5 or above will do fine. If you have the money, buy a 1080 Ti; because of the 11GB of memory it's really worth it, at least in my opinion. If you don't want to spend that much, a 1070 will do OK. The minimum, in my opinion, is a 6GB 1060.

  • OZ-84 said:

    Sorry, but there is so much wrong here that I have to reply.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason Quadros are clocked lower is not that a GTX of the same class renders less accurately or works faultily when used 24/7. The main reason is that those chips are much bigger (mostly because of more CUDA cores) and would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same ones as for the Tis, with some extra feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM or without fans available, and some Quadro variants differ hardware-wise regarding double precision and such. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102) and the Quadro GP100 (GP100 Tesla chip on a PCIe board) have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00, depending on the source, is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5X memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid because millions of dollars may depend on the project, so Nvidia puts much more into the Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards, but has to make perfect drivers for content creation at the professional level. The only Pascal chip I know of with all 3840 CUDA cores active is the Quadro P6000 with 24GB of GDDR5X memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly in business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

  • What kind of speeds can I expect if I purchase a 1080 Ti card? If I would like to improve dForce performance, is this the way to go as well? I'd love to see a video of someone just playing around in Daz using a 1080 Ti, to see if it's worth my investment.

  • JamesJAB Posts: 1,760
    OZ-84 said:

    Sorry, but there is so much wrong here that I have to reply.

    A Quadro P6000 (about $3,500?) will beat a 1080 Ti in gaming easily. Your FPS comparison is totally wrong. Please check benchmarks yourself...

    The reason Quadros are clocked lower is not that a GTX of the same class renders less accurately or works faultily when used 24/7. The main reason is that those chips are much bigger (mostly because of more CUDA cores) and would generate too much heat at the same clock as their GTX counterparts.

    The Quadro drivers are the same ones as for the Tis, with some extra feature sets enabled.

    And please don't tell anyone he or she needs to downclock a stock GTX to render 24/7. This is simply wrong.

    The main reason Quadro cards cost so much is that Nvidia likes to make a profit (I don't blame them). Customer care is better, there are models with more RAM or without fans available, and some Quadro variants differ hardware-wise regarding double precision and such. However... there is no real reason why an 8GB Quadro has to cost more than an 8GB GTX.

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102) and the Quadro GP100 (GP100 Tesla chip on a PCIe board) have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00, depending on the source, is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5X memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid because millions of dollars may depend on the project, so Nvidia puts much more into the Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards, but has to make perfect drivers for content creation at the professional level. The only Pascal chip I know of with all 3840 CUDA cores active is the Quadro P6000 with 24GB of GDDR5X memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly in business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

    The Titan X is the old one from when Pascal was all new; the GTX 1080 Ti outperforms it in most tasks. This year it was replaced by the Titan XP.
    https://www.nvidia.com/en-us/titan/titan-xp/
    The Titan XP has the same core configuration as the Quadro P6000.

    There is one hardware difference between the GeForce and top-range Quadro cards: the Quadro K6000, M5000, M6000, P5000, P6000, and GP100 all have ECC VRAM.
    All Quadro cards are pure Nvidia reference boards, with no customization allowed by the OEM (not even aftermarket coolers).
    Quadro cards come with a higher level of support, including things like on-site support.
    Quadro drivers do not have game-specific optimizations. (This does not mean they will suck at gaming; you can expect a comparable Quadro to score a few percent lower FPS in games.)
    Quadro drivers come in two flavors: QNF (Quadro New Feature) and ODE (Optimal Driver for Enterprise).

  • OZ-84 Posts: 128
    billyben said:

     

    Hate to say this OZ-84, but you are wrong. The Pascal chips in the GTX 1080 Ti (GP102) and the Quadro GP100 (GP100 Tesla chip on a PCIe board) have exactly the same number of CUDA cores (3584; virtually the same chip, check the specs). The reason the GTX 1080 Ti is around $750.00 while the GP100 is around $7,000.00 to $9,000.00, depending on the source, is the drivers, the amount and type of memory, and what the card is designed for. Where the 1080 Ti has 11GB of GDDR5X memory, the GP100 has 16GB of HBM2 memory for more bandwidth.

    We play games on our 1080s. Professionals use the GP100 for professional graphics and multimedia work, including video editing, CAD/CAM, 3D rendering, and more, and those drivers have to be rock solid because millions of dollars may depend on the project, so Nvidia puts much more into the Quadro drivers than the GeForce gaming drivers we get. Nvidia strives to make good gaming drivers to sell the GTX cards, but has to make perfect drivers for content creation at the professional level. The only Pascal chip I know of with all 3840 CUDA cores active is the Quadro P6000 with 24GB of GDDR5X memory. Even the Titan X only has 3584 CUDA cores active. And the reason GTXs will blow the Quadro cards away in games is the drivers: Quadro drivers are made for content creation and run games slower (sometimes much slower) than GTX drivers, but just fly in business applications.

    However, I do agree with you that at our level, a GTX card is just fine. 

    Yeah ... well... no problem billyben ;-)  

    - I can't see how the P6000 is blown away by the Titan X in gaming.

    https://wccftech.com/nvidia-pascal-quadro-p6000-gaming-benchmarks/

    All the benchmarks I found show similar results...

     

    - And I really can't see where you get the idea that Nvidia uses some special magical professional driver designed only for Quadro cards.

    In fact, GTX cards are running on crippled Quadro drivers:

    https://wccftech.com/nvidia-titan-xp-titan-x-385-12-driver-update-massive-performance/

    - 90% of the reason Quadro cards cost so much more is that Nvidia likes it that way, and if the GTX drivers weren't crippled there would be no good reason to buy Quadros:

    https://www.pcgamesn.com/nvidia-geforce-server

    - In terms of card design, I can also see no significant difference :-)

    http://home.coolpc.com.tw/aes/open/nv_quadro-p6000/coolpc_p6000-37.jpg

    https://abload.de/img/nvidia-geforce-gtx-103iuix.png

     

    So, if the drivers are the same, the GPU used on the cards is the same, and even the PCB looks similar... why do you think that Quadros are "designed" for professional use and GTX cards are not?

    Is it because Nvidia says so? 

     

  • Raymand Posts: 62

    This is a very informative discussion. Can I add a couple of follow-on questions? First, will installing two graphics cards (let's assume they have identical memory and cores) double the rendering performance?

    Second, assuming all the iRay work is passed off to the graphics card (or cards), will having more CPU cores enhance the performance of other parts of the program?

     

  • JamesJAB Posts: 1,760
    Raymand said:

    This is a very informative discussion. Can I add a couple of follow-on questions? First, will installing two graphics cards (let's assume they have identical memory and cores) double the rendering performance?

    Second, assuming all the iRay work is passed off to the graphics card (or cards), will having more CPU cores enhance the performance of other parts of the program?

     

    Using two identical GPUs will cut the total render time roughly in half (give or take a few seconds), because the iterations are split between the cards while each GPU still takes the same amount of time per iteration.
    Keep in mind, though, that the two GPUs do not need to have the same amount of VRAM, or even be from the same GPU generation.
    As long as the scene can physically fit into a GPU's VRAM, that video card will participate in the render job.

    With identical GPUs the render time scaling is very linear. Let's take the GeForce GTX 1080 Ti as an example, using the Iray benchmark scene that's on the forums here.
    One GTX 1080 Ti card will complete the benchmark in roughly 2 minutes, 2 cards will complete it in roughly 1 minute, 3 cards in roughly 40 seconds, and 4 cards in roughly 30 seconds.
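
    If you want a rough planning formula, here is a minimal sketch (Python; it assumes the near-linear scaling described above and ignores the few seconds of fixed overhead per render):

    ```python
    def estimated_render_time(single_gpu_seconds: float, num_gpus: int) -> float:
        """Estimate Iray render time with N identical GPUs.

        Assumes iterations are split evenly across the cards, so total
        time scales roughly as 1/N; real renders add a few seconds of
        scene-load and post-processing overhead on top.
        """
        return single_gpu_seconds / num_gpus

    # Benchmark scene at ~120 s on a single GTX 1080 Ti:
    for n in (1, 2, 3, 4):
        print(f"{n} GPU(s): ~{estimated_render_time(120, n):.0f} s")
    # -> ~120 s, ~60 s, ~40 s, ~30 s, matching the times above
    ```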

    More CPU cores can help if you are using a slower GPU. If you are running a fast GPU like the GTX 1080 Ti, adding the CPU will only knock off 2-10 seconds, depending on how many threads the CPU has. When running multiple fast GPUs, adding the CPU to the render job can actually slow the render down, because the GPUs end up waiting for the CPU to complete its tasks.

    Your CPU under regular Daz Studio operation will use up to two threads regardless of how many cores it has. Functions inside of Daz Studio can be coded to utilize more CPU resources: 3Delight rendering uses your whole CPU (unless there is some maximum number of threads allowed based on licensing), the old Optitex dynamic cloth system will use all of your CPU threads, and dForce will use all of your threads if it's set as the simulation device.

  • prixat Posts: 1,585

    Here are some timings for dForce simulations.

    Running the included 'bedsheet on figure' simulation:
    i5-2500 ------ 3m40s
    GTX 750 Ti ----- 1m10s

     

  • JamesJAB Posts: 1,760
    edited January 2018
    prixat said:

    Here's some timings for dforce simulations.

    Running the included 'bedsheet on figure' simulation:
    i5-2500 ------ 3m40s
    GTX 750 Ti ----- 1m10s

     

    My turn: Notebook - Dell Precision M6700
    Geforce GTX 980M ----- 59 seconds
    Core i7-3840QM ----- 2m 11s

  • JD_Mortal Posts: 758
    edited January 2018

    dForce uses OpenCL, not CUDA cores...

    However... nVidia is doing major updates to OpenCL, taking advantage of CUDA and the new Volta-series cores. (Only found in Titan-V and the new Tesla cards.)

    If it were possible, I would say use one nVidia card for CUDA and one Radeon for dForce (OpenCL)... However, if the nVidia driver detects another video card other than an Intel integrated one, it will disable all CUDA cores on the card. (There is no way to trick it... However, using two computers is possible: one with the Radeon card for doing dForce, then rendering remotely to another computer with the nVidia card in it, for the CUDA cores.)

    https://browser.geekbench.com/opencl-benchmarks

    https://browser.geekbench.com/cuda-benchmarks

  • prixat Posts: 1,585
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    That doesn't really make sense; you can see from your own link that nVidia cards have no problems with OpenCL.

  • JD_Mortal Posts: 758
    prixat said:
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    That doesn't really make sense, you can see from your own link that nVidia cards have no problems with OpenCL. 

    Being "compatible" and being "optomized for speed", are not the same.

    My shoes are compatible for diving, but flippers are optomized for speed, when diving. They are turning the shoes into flippers, with the use of CUDA cores. (To aid the shortfalls of the slower speed OpenCL code.)

  • fred9803 Posts: 1,558

    I'm surprised that nobody has asked what sort of scenes the OP will be rendering. If it's relatively simple scenes, then one might get away with a lower-spec GPU and render times won't be as much of an issue.

    As for dForce, I've only started playing with it yesterday and it didn't seem to be that resource hungry. I tested a simple scene and the pose+drape took less than 30 seconds (RTX 2080). Though I assume that if I had done it inside a more complex scene it could have been different. I suppose if a scene had already maxed out my GPU memory that would certainly be the case, as the CPU would have to pick it up.

  • LenioTG Posts: 2,116
    JamesJAB said:

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1070 8GB
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1080 8GB
    Quadro P5000 16GB
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2)
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Nvidia Titan V 12GB (HBM2)(Geforce 2000? Series,  Volta GPU)

    You're great, thanks, I'm going to bookmark this! :D

  • You know that's over a year out of date, right?

  • LenioTG Posts: 2,116

    You know that's over a year out of date, right?

    I noticed there is no 20 series, but in my country they really cost too much... I could afford at best a 2060 (for the equivalent of $430...), so I'm interested in older used hardware! :D

  • towdow3 Posts: 83

    Did anyone even answer the question about... dForce... I'm on the hunt for a way to simulate dForce without having to wait 15 minutes. The title states "Recommended Hardware Guide for Daz3D iRay dForce".

  • ebergerly Posts: 3,255
    edited April 2019

    Like I always recommend, you'll need to give at least two important pieces of data before anyone can give you decent advice:

    1. How much are you willing to spend, and what exactly are you looking to buy (GPUs only, a full system, etc.)? Without that, most of the recommendations you'll get are "you should spend $12,000 on 6 GPUs and water cooling for everything".
    2. What exactly are you going to use the hardware for besides Iray and dForce? Games? Video editing? Other stuff? And do you really care if a render takes 10 minutes vs. 20 minutes? That's important.

    Other than that, pretty much all anyone can say about GPUs is that the GTX series is probably on the way out and the RTX series will likely get far more development in the future. But right now RTX cards aren't yet at full potential, and nobody really knows how it will turn out for DAZ Studio and related software. Because of that, nobody can tell you how they'll perform.

    And regarding GPU VRAM capacity, nobody can tell you what to buy until you tell us what kind of scenes you build, and how much system RAM you're willing to pay for. 

  • JamesJAB Posts: 1,760
    edited April 2019

    **Updated to include RTX cards, NVLink cards, and current Nvidia prices**

    For Iray, the GPU acts independently from the rest of the computer while rendering.
    You should use an Nvidia GPU with at least 4GB of VRAM.
    The recommended setup is an Nvidia GPU with at least 8GB of VRAM.
    Here is a list of Nvidia GPUs with at least 8GB of VRAM (roughly in Iray and dForce performance order):

    Quadro K5100M 8GB -laptop-
    GTX 880M (8GB version) -laptop-

    Quadro M4000 8GB
    Quadro K5200 8GB

    Quadro M5000M 8GB -laptop-
    GTX 980M (8GB version) -laptop-

    Quadro K6000 12GB
    GTX 980 8GB -laptop-
    Quadro M5500 8GB -laptop-

    Quadro M5000 8GB
    GTX 1070 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1070 8GB ($399 USD)
    Quadro P4000 8GB -laptop-
    Quadro P4000 8GB
    GTX Titan X 12GB (Geforce 900 Series)
    Quadro M6000 12GB or 24GB

    Quadro P5000 16GB -laptop-
    Quadro RTX 4000 8GB
    RTX 2070 8GB ($599 USD)

    Quadro P4200 8GB -laptop-
    GTX 1070 ti 8GB
    GTX 1080 8GB -laptop- (Max-Q version can be slower based on laptop cooling)
    GTX 1080 8GB
    Quadro P5200 16GB -laptop-
    Quadro P5000 16GB
    RTX 2080 8GB ($799 USD)
    Quadro RTX 5000 16GB -*NVLink*-
    GTX 1080 ti 11GB
    Quadro GP100 16GB (HBM2) -*NVLink 2way*-
    Nvidia Titan X 12GB (Geforce 1000 Series)
    Nvidia Titan XP 12GB (Geforce 1000 Series)
    Quadro P6000 24GB
    Quadro GV100 32GB (HBM2) (Volta GPU) -*NVLink 2way*- ($8999 USD)
    Nvidia Titan V 12GB (HBM2) (Volta GPU)
    Nvidia Titan V CEO Edition 32GB (HBM2) (Volta GPU)
    RTX 2080 ti 11GB -*NVLink*- ($1199 USD)
    Nvidia Titan RTX 24GB -*NVLink*- ($2499 USD)
    Quadro RTX 6000 24GB -*NVLink*- ($4000 USD)
    Quadro RTX 8000 48GB -*NVLink*- ($5500 USD)

  • ebergerly Posts: 3,255
    edited April 2019
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    I'm certainly not a dForce expert, but I started up Studio, added a sphere, added a plane with 1,000 divisions, set the plane as a cloth object, set the sphere as a collision object, and started a dForce simulation.

    Below is a screenshot of Task Manager showing almost 90-100% CUDA utilization on my GTX 1080 Ti during the dForce simulation.

    [Screenshot: Capture6.JPG, 1186 x 856]
  • Kitsumo Posts: 1,210
    JD_Mortal said:

    dForce uses OpenCL, not CUDA cores...

    However... nVidia is doing major updates to OpenCL, taking advantage of CUDA and the new Volta-series cores. (Only found in Titan-V and the new Tesla cards.)

    Strictly speaking, I guess it's OpenCL that uses the CUDA cores, as well as AMD stream processors and CPU cores.

    JD_Mortal said:

    If it were possible, I would say use one nVidia card for CUDA and one Radeon for dForce (OpenCL)... However, if the nVidia driver detects another video card other than an Intel integrated one, it will disable all CUDA cores on the card. (There is no way to trick it... However, using two computers is possible: one with the Radeon card for doing dForce, then rendering remotely to another computer with the nVidia card in it, for the CUDA cores.)

    https://browser.geekbench.com/opencl-benchmarks

    https://browser.geekbench.com/cuda-benchmarks

    Both cards can work together. OpenCL doesn't care what hardware it's using, as long as it meets the minimum requirements.

     

  • JamesJAB Posts: 1,760
    edited April 2019

    As stated above, dForce uses OpenCL.
    "The dForce engine utilizes OpenCL 1.2 to perform simulations."
    Every GPU on my list will run dForce just fine.
    AMD GPUs: Radeon HD 5XXX or newer support OpenCL 1.2 or higher.
    Keep in mind that a low-end GPU may support OpenCL 1.2 but will probably simulate slower than a modern quad-core i7 or Ryzen 5.
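
    If you are not sure what your system exposes, here is a minimal sketch for listing OpenCL platforms and devices, using Python with the third-party pyopencl package (my own suggestion, not something Daz Studio uses; dForce just needs one device reporting OpenCL 1.2 or higher):

    ```python
    # pip install pyopencl
    import pyopencl as cl

    # Print every OpenCL platform and device the installed drivers expose.
    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            print(f"  Device: {device.name}")
            print(f"    OpenCL version: {device.version}")
            print(f"    Type: {cl.device_type.to_string(device.type)}")
    ```

    If your GPU does not show up here with OpenCL 1.2 or higher, that usually points at a driver problem rather than the card itself.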

    For an AMD card, here's my dForce recommendation (keep in mind that if you have a capable Iray GPU, there is no point in adding a second card for dForce, because unlike Iray it only uses a single GPU for simulating):
    R9 2XX or higher
    R9 370X or higher
    R9 (Fury, Nano, Fury X, Pro Duo)
    RX 470 or higher
    RX 560 or higher
    RX Vega
    Radeon VII
    Radeon Pro WX 5100 or higher
    Radeon Pro SSG
    Radeon Pro Duo
    Radeon Vega Frontier 

  • yanming Posts: 4

    But on my PC the dForce simulation always says "the hardware is not compatible with dForce", and my GPU is an Nvidia 1070. Why? How do I turn it on? It is too slow.

  • But on my PC the dForce simulation always says "the hardware is not compatible with dForce", and my GPU is an Nvidia 1070. Why? How do I turn it on? It is too slow.

    Does your CPU also have a built-in GPU - Intel or AMD? If so, make sure that DS is using the nVidia card: right-click on the desktop and open the nVidia Control Panel.

  • My PC is an Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz.

    Not listed, I know. Is there any way to get dForce to run on my PC?
