CPU vs GPU rendering bang per buck?

I'm wondering which is the best option from a bang-for-your-buck point of view when it comes to rendering animation.

A high-end GPU looks to offer better rendering performance, but I am unsure how it compares once you factor in the dollar cost.

Has anyone examined this and come to some conclusions?

Comments

  • Both are expensive, but if you are into animation I would look at Blender, not Daz Studio (I use LightWave myself, but it is very expensive). Daz Studio is more focused on 2D stills. You can render animation in Daz Studio, but the focus of the software is stills, and Daz Iray is not really geared for it. If you are into animation and don't want to go through too much work, I would look at iClone 7; with the right plugin (3DXchange) you can import Daz figures and animate them in that software. You can also animate in Maya with the Daz plugin, but that is more work. There is a plugin in the free section that lets you import Daz figures into Blender. You need at least 32 GB of RAM and at least 4 CPU cores to render animations; the more cores and memory the better. Otherwise, use a cloud-based service to do your animation renders for you.

  • nicstt Posts: 7,233

    Depends on your rendering engine.

    Also a consideration is scene size. If scenes won't fit on your card(s) and drop to CPU, then the cards are expensive paperweights.

    Time spent getting a scene to fit on the card versus letting it render on CPU is also a consideration; a further consideration is that when using Iray in Studio, a render might start off on the GPU but drop to CPU.

  • JonnyRay Posts: 790
    edited November 8

    You don't need the most cutting-edge GPU to take advantage of the efficiency of Iray running on CUDA cores. Look at one that is a generation or so behind, and get as much VRAM as you can afford on the card so it doesn't fall back to CPU rendering. You would see good benefits from, say, a GTX 1070 with 8 GB of VRAM (about $400-450 on Newegg). Doubling the cost for an RTX 2080 will get you incrementally more speed, but in my opinion it isn't worth the extra cost.

    You can check out the Iray Starter Scene - Post Your Benchmarks thread, where people talk in much more depth about performance with various configurations.

  • JD_Mortal Posts: 496
    edited November 9

    The more money you have, the faster you can render and the larger the scenes you can handle...

    The formula is still the same for the latest version of Daz3D: it is only the number of CUDA cores that matters. (Tensor cores, found on "Volta" and "RTX" cards, are still irrelevant at the moment.)

    For "cost", you have multiple factors... Up-front costs and Hidden costs, as well as secondary hardware costs.

    Up-front costs = the purchase of the cards, plus the hardware and software to operate them (multiple Windows licenses and/or remote-rendering licenses, if you push this into a render farm).

    Hidden costs = power to operate the devices, and also to cool them. At 200-300 watts per card, four cards and a CPU will draw around 1,500 watts. That is a large electric heater running while you also operate your home's air conditioner to combat the heat from just those four cards.
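    The electricity cost described above is easy to ballpark. A minimal sketch in Python; the wattage, hours per day, and $/kWh rate are illustrative assumptions that will vary with your hardware and utility:

```python
# Rough estimate of the "hidden" electricity cost of a multi-GPU rig.
# All figures are assumptions for illustration: adjust wattage, daily
# hours, and your utility's $/kWh rate to match your own situation.

def monthly_power_cost(watts, hours_per_day, dollars_per_kwh):
    """Approximate monthly electricity cost in dollars."""
    kwh_per_month = watts / 1000 * hours_per_day * 30
    return kwh_per_month * dollars_per_kwh

# Four ~300 W cards plus CPU/system (~1,500 W total),
# rendering 8 hours a day at an assumed $0.13/kWh:
cost = monthly_power_cost(watts=1500, hours_per_day=8, dollars_per_kwh=0.13)
print(f"~${cost:.2f}/month")
```

    And that is before counting the extra air-conditioning load needed to remove the same heat from the room.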

    Which cards? Well, rendering speed scales roughly linearly with the number of CUDA cores. The key factor is that each card, regardless of its number of CUDA cores, will draw nearly the same power. There is also the issue of project size: five clothed models in a modest single-room scene can easily fill up 11 GB of VRAM, and that assumes the image size is under 8K resolution.

    The more cards you can cram into ONE computer system, the better. (At the moment, I think NVIDIA has a limit of only 4 cards of the same model in Windows.) Not to mention, that will require the largest "American" PSU, 1,500-1,600 watts, which will put you at most homes' breaker or wire amperage limit for 120 VAC plugs. Having only 1,250 cores per card (5,000 total) will take about 4x as long to process the same image as cards with 5,000 cores each (20,000 total). Thus, with the higher-core cards, your AC bill and electric bill will be roughly 1/4 the cost, while your hardware and software costs remain the same, apart from the cards themselves.
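    The 4x figure above follows from the idealized assumption that render time scales inversely with total CUDA core count. A minimal sketch (real scaling is not perfectly linear, but it is a reasonable first approximation for budgeting):

```python
# Idealized model: Iray render time inversely proportional to total
# CUDA cores. A first approximation only; real-world scaling is worse.

def relative_render_time(total_cores, baseline_cores=20000):
    """Render time relative to a baseline rig, assuming linear scaling."""
    return baseline_cores / total_cores

# Four 1,250-core cards (5,000 total) vs four 5,000-core cards (20,000 total):
print(relative_render_time(5000))  # -> 4.0, i.e. about four times slower
```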

    On a dime, the cheapest "old cards", would be best, if you are in no rush and you run your computers outside, or you have free electricity. However, if you are a render-farm, or plan to be... You want as low as possible, for electric-bills, and as small of a space as possible, while also reducing the number of software licenses needed. (Less controlling hardware. With Linux, you can cram up to 16 cards in one server computer, with four processors. Which would also require 240v power and a series of 1600-2400watt power supplies.)

    Value... Figure out $cost per CUDA core, or CUDA cores per dollar, while also keeping in mind the VRAM amount and the rough assumption of 200-300 watts per card.
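    That value metric can be sketched in a few lines of Python. The prices below are rough illustrative assumptions, not current quotes (core counts are the cards' published specs), and VRAM and power draw should be weighed alongside the raw ratio:

```python
# "CUDA cores per dollar" comparison, as the post suggests.
# Prices are illustrative assumptions; core counts are published specs.

def cores_per_dollar(price, cuda_cores):
    """Value metric: CUDA cores bought per dollar spent."""
    return cuda_cores / price

cards = {
    "GTX 1070 (8 GB, ~$425)":     (425, 1920),
    "GTX 1080 Ti (11 GB, ~$750)": (750, 3584),
    "RTX 2080 (8 GB, ~$800)":     (800, 2944),
}

for name, (price, cores) in cards.items():
    print(f"{name}: {cores_per_dollar(price, cores):.2f} cores/$")
```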

    NOTE: Your project MAY be limited to the smallest card's VRAM. I believe it is "all or nothing", but Daz/Iray may have fixed that. If the scene doesn't fit on your cards, you resort to CPU rendering, making all the cards you have irrelevant.

    Bonus things to think about...

    Tensor cores (the AI cores found in "Volta", "Titan-V", and "RTX" cards) are faster than a CPU at processing AI output. (However, Daz3D does not use them yet; the beta sort of works, for CPU/CUDA only.)

    Iray may, at some point, use better project management. (Currently, it tries to cram the whole project into VRAM, instead of loading only what is needed.)

    The "Volta" cards can "share memory", using VRAM, from two cards, as one large bank. (If you purchase the super-expensive memory bridges, at nearly $600 per NVlink.)

  • If there are multiple cards with different memory sizes, then Iray will use all the cards that can hold the scene and drop those that can't. It has never been the case that the smallest card limits the use of the other GPUs at all.

  • nicstt Posts: 7,233

    But only if you spend appropriately. :)

    JD_Mortal said:

    The more money you have, the faster you can render and the larger the scenes you can handle...

    The formula is still the same for the latest version of Daz3D: it is only the number of CUDA cores that matters. (Tensor cores, found on "Volta" and "RTX" cards, are still irrelevant at the moment.)

    Not quite.

    If you're rendering in Iray, CUDA matters; the more cores the better, but you may be better off with fewer cores on a later-generation card than with more cores on a previous-generation card. GPU RAM on the two cards being compared is another consideration.

    If you don't use IRAY - then CPU is almost all that matters.

    If the render engine is outside of Studio, then see what that engine needs.
