Suggestions on a hardware upgrade?

So I'm looking to upgrade my hardware for faster renders and previews as I'm sick of waiting ages for my viewport to render the Iray preview.
Currently I have:
-GTX 1080
-AMD Ryzen 5 1600 (6 core)

I was having a problem during full renders where (even though both were ticked) Daz would default to using only my CPU and not use my GPU at all. It would max out my CPU, and the whole computer would then be unusable while it rendered. If I unselected the CPU and had only the GPU selected, then it would use the GPU and max it out, but at least my computer doesn't freeze up and I can use it for other things that aren't graphics intensive while I wait.

I'd like to upgrade my system, but I have 3 choices that I'd love for anybody to weigh in on if you have some insight! My three choices are:

1: Upgrade the CPU to a Ryzen 9 3900X, which is unparalleled in terms of graphics rendering. Up against the competition it almost matches for gaming performance, but completely outstrips it in rendering in programs like DAZ.

2: Buy another GPU - another GTX 1080 and run them together in SLI. Obviously having two of them would work, and running in SLI is going to improve it all that much more!

3: Buy another GPU - an RTX 2080 WITHOUT SLI. SLI doesn't work if the cards are different, so they'd be operating as individuals, and I'm not sure how that would compare to fully SLI'd 1080s, or even if DAZ can use two graphics cards at once that aren't in SLI.

I'd love to hear what people think!
Cheers


Comments

  • SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.
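
    If you'd rather script that than click through Task Manager every render, here is a minimal sketch using the psutil Python package. The process name is an assumption, so check yours in the Details tab first (and note that changing another process's affinity may need admin rights):

        # Minimal sketch: pin a running DAZ Studio render to cores 0-3 so the
        # rest stay free for the desktop. Same effect as Task Manager >
        # Details > right-click > Set Affinity.
        import psutil

        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "DAZStudio.exe":  # assumed process name
                proc.cpu_affinity([0, 1, 2, 3])       # first four cores only
                print(f"Pinned PID {proc.pid} to cores 0-3")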

  • Sevrin Posts: 6,313

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

  • p0rt Posts: 217
    edited August 2019

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

  • If you are running out of VRAM, here is a much cheaper alternative to an 11GB videocard: https://www.daz3d.com/scene-optimizer - I use it sometimes even though I have an 11GB videocard.

    Also, check the texture compression values. Default is 512 medium, 1024 high; this should be fine, but someone might have increased the values to mess with you or something.

    With an 8GB VC, you would want at least 24GB of system memory, because textures are not compressed in system memory, only in VRAM, even if you are only rendering with your GPU. (As far as I know; actually this makes no sense if you think about it, so maybe I'm wrong.)
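
    Some rough arithmetic on why textures eat memory so fast; the twelve-4K-maps-per-figure count below is just an illustrative assumption:

        # Back-of-the-envelope texture footprint. An uncompressed 8-bit RGBA
        # texture costs width * height * 4 bytes (mipmaps add roughly a third).
        def texture_mb(width, height, channels=4, bytes_per_channel=1):
            return width * height * channels * bytes_per_channel / 2**20

        per_map = texture_mb(4096, 4096)  # ~64 MB per 4K map
        print(f"{per_map:.0f} MB per 4K map; {12 * per_map / 1024:.2f} GB for 12 maps")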

    Never bother using your CPU for rendering; it is a joke compared to your GPU, at least for Iray. Always untick your CPU as a rendering device.

  • p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    That is not what nVidia has said in the past.

    p0rt said:

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

    Rendering, unlike game playing, does not require a fast connection between the GPU and the rest of the system.

  • kenshaw011267 Posts: 3,805
    p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    SLI doesn't matter. Having multiple cards does.

  • p0rt Posts: 217
    p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    That is not what nVidia has said in the past.

    p0rt said:

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

    Rendering, unlike game playing, does not require a fast connection between the GPU and the rest of the system.

    I know; Nvidia uses Windows virtual memory as an extension to VRAM, which id Software did with Rage, and it was bottlenecked by SATA speeds and resulted in major texture pop.
  • Taoz Posts: 10,256
    Sevrin said:

    A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    From what I've heard, that shouldn't be necessary; the 1080 may drop out if you use more than 8 GB of VRAM, but the 2080ti should keep going until you exceed its 11 GB.

  • p0rt said:
    p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    That is not what nVidia has said in the past.

    p0rt said:

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

    Rendering, unlike game playing, does not require a fast connection between the GPU and the rest of the system.

    I know; Nvidia uses Windows virtual memory as an extension to VRAM, which id Software did with Rage, and it was bottlenecked by SATA speeds and resulted in major texture pop.

    Isn't that a confusion over what the V in VRAM stands for? GPUs do not swap to virtual memory as far as I am aware.

  • kenshaw011267 Posts: 3,805
    p0rt said:
    p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    That is not what nVidia has said in the past.

    p0rt said:

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

    Rendering, unlike game playing, does not require a fast connection between the GPU and the rest of the system.

    I know; Nvidia uses Windows virtual memory as an extension to VRAM, which id Software did with Rage, and it was bottlenecked by SATA speeds and resulted in major texture pop.

    No, graphics cards do not use virtual memory when rendering. That process is far too slow for the speeds at which GPUs operate. The data would need to be passed over the PCIe bus to the CPU and then over the SATA or PCIe bus to the drive, for every swap.

    Games try to prevent this through their various quality settings. If you take a low-end card, say a 1050, and tell it to run a game at max texture quality, it will either outright fail or the game will be a slideshow, with all the data being shuffled on and off the card.
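
    Some rough, rounded bandwidth figures behind that point (exact numbers vary by board and platform):

        # Approximate peak bandwidths, to show why swapping over the bus is
        # hopeless compared to the card's own memory.
        bandwidth_gb_s = {
            "GTX 1080 GDDR5X (on-card)": 320,    # ~10 Gbps x 256-bit bus
            "PCIe 3.0 x16 (card <-> system)": 16,
            "SATA III SSD (system <-> disk)": 0.6,
        }
        for path, gbs in bandwidth_gb_s.items():
            print(f"{path}: ~{gbs} GB/s")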

  • DustRider Posts: 2,880
    edited August 2019
    p0rt said:
    p0rt said:

    SLI will not help Iray, and in fact nVidia recommends turning it off.

    It sounds as if your scenes are pushing the limits of the GPU's memory, in which case adding a second 8GB card would improve speed for those which do render but wouldn't help with those which do not. Unfortunately moving up to an 11GB card is pricey, and going beyond that is heart-stopping.

    Even the fastest CPU is not going to be as fast as a good GPU, though it does avoid memory issues. You can, even on your current system, use the Set Affinity option in Task Manager (right-click on the application in the Details tab) to free up one or more cores on the CPU, slowing the render but making the system more usable in the meantime.

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code, but for servers with multiple racks linked together for render farms.

    That is not what nVidia has said in the past.

    p0rt said:

    If you upgrade anything you want the CPU with DDR4, which can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame. Like the Doom 2016 game engine, with a virtual texture cache in physical RAM so your graphics card can take 1000 textures per second if needed, and the main reason it can run at 200 FPS.

    Rendering, unlike game playing, does not require a fast connection between the GPU and the rest of the system.

    I know; Nvidia uses Windows virtual memory as an extension to VRAM, which id Software did with Rage, and it was bottlenecked by SATA speeds and resulted in major texture pop.

    No, graphics cards do not use virtual memory when rendering. That process is far too slow for the speeds at which GPUs operate. The data would need to be passed over the PCIe bus to the CPU and then over the SATA or PCIe bus to the drive, for every swap.

    Games try to prevent this through their various quality settings. If you take a low-end card, say a 1050, and tell it to run a game at max texture quality, it will either outright fail or the game will be a slideshow, with all the data being shuffled on and off the card.

    Very true. Iray does not use system memory or virtual memory for rendering (unless you are using the CPU to render). There is no swapping of any data for rendering from system RAM to GPU memory during the render process, except for the rendered image information sent from the GPU back to the system/CPU for image updates. There seems to be some other communication between the GPU and the system/CPU, based on the high CPU usage, but this could also just be very inefficient transferring of the rendered image data from the GPU to the main system (one CPU core seems to be used for each GPU in the system, much higher than other GPU-based renderers). All of the rendering is done on the GPU, and all of the data for rendering the image must fit on the GPU to use the GPU for rendering in Iray.

    However, this isn't true for other GPU-based renderers. For example, Octane can make use of system RAM, or "out of core" memory, to store data (both texture/shader data and geometry data) and intelligently swap it between system RAM and GPU memory during the rendering process. This means that unlike Iray, which will drop to CPU-only rendering when GPU memory limits are exceeded, Octane (and some other GPU render engines like AMD ProRender) can handle scenes much larger than what can fit within GPU memory, and still render at the much faster GPU speeds. Yes, there is a slight performance hit (with Octane, I experience a maximum of about 10-15% loss in performance), but it's still much faster than using the CPU to render (if that is an option with the render engine being used). I think it's interesting to note that when using out-of-core memory with Octane, the CPU usage is almost nonexistent, much less than Iray in GPU-only rendering.

  • nicstt Posts: 11,715
    Sevrin said:

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

    There is a proviso here.

    If the GPU is being used, I'd agree, but where it isn't, the graphics card is an expensive paperweight.

  • p0rt Posts: 217
    nicstt said:
    Sevrin said:

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

    There is a proviso here.

    If the GPU is being used, I'd agree, but where it isn't, the graphics card is an expensive paperweight.

    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.
  • p0rt said:
    nicstt said:
    Sevrin said:

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

    There is a proviso here.

    If the GPU is being used, I'd agree, but where it isn't, the graphics card is an expensive paperweight.

    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    I'm not entirely sure what you mean here, but I suspect it is incorrect. Each CPU (virtual) core can handle one thread, so it can't really be equivalent to more than one CUDA core, though it can do more/use fewer cycles to handle a thread.

  • p0rt Posts: 217
    edited August 2019
    p0rt said:
    nicstt said:
    Sevrin said:

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

    There is a proviso here.

    If the GPU is being used, I'd agree, but where it isn't, the graphics card is an expensive paperweight.

    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    I'm not entirely sure what you mean here, but I suspect it is incorrect. Each CPU (virtual) core can handle one thread, so it can't really be equivalent to more than one CUDA core, though it can do more/use fewer cycles to handle a thread.

    Until the 3900X is released to the public it is all guesswork, as review sites are kind of lame and run the same benchmarks over and over, which doesn't actually prove anything. As far as the 1080 vs the 2080 goes, the 2080 without RTX enabled is an average of 19% faster according to https://gpu.userbenchmark.com/Compare/Nvidia-RTX-2080-vs-Nvidia-GTX-1080/4026vs3603

    All the scores mean nothing if the benchmark can run and complete on a computer that has power issues when you try to run Prime95 on it, like Cinebench and 3DMark, which proves they aren't really a true benchmark of power and performance.

    To find out the ins and outs, you would need to run the AIDA64 GPGPU benchmark: https://www.aida64.co.uk/user-manual/gpgpu-benchmark

  • kenshaw011267 Posts: 3,805
    p0rt said:
    nicstt said:
    Sevrin said:

    Your CPU isn't nearly as important as your GPU with Iray. You might as well leave it unchecked. SLI isn't worthwhile for rendering with Iray. The 2080 would be your biggest improvement. A 2080ti would be an even bigger improvement, but you'd need to uncheck your 1080 for scenes that require more than 8 GB of video memory.

    If you had to budget for only one thing, after bumping your system memory up to at least twice your video memory, it would be the 2080ti. If you search the forum here, you'll get plenty of hardware information.

    There is a proviso here.

    If the GPU is being used, I'd agree, but where it isn't, the graphics card is an expensive paperweight.

    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    Not remotely true. Even a 40% increase in IPC would not match the gain in performance from going from one level of card to another, which, prior to the RTX cards, was a jump of several hundred CUDA cores.

    And for the love of the FSM, don't use UserBenchmark for anything. That is an SEO site made to get ad revenue, not to provide useful data to consumers.

  • p0rt said:
    I know; Nvidia uses Windows virtual memory as an extension to VRAM

    That is totally false. The V in VRAM means "Video", as in it is dual ported and addressable by the GPU, not "Virtual".

  • p0rt said:
    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    You should really stop.

    So I'm looking to upgrade my hardware for faster renders and previews as I'm sick of waiting ages for my viewport to render the Iray preview.
    Currently I have:
    -GTX 1080
    -AMD Ryzen 5 1600 (6 core)

    I was having a problem during full renders where (even though both were ticked) Daz would default to using only my CPU and not use my GPU at all. It would max out my CPU, and the whole computer would then be unusable while it rendered. If I unselected the CPU and had only the GPU selected, then it would use the GPU and max it out, but at least my computer doesn't freeze up and I can use it for other things that aren't graphics intensive while I wait.

    I'd like to upgrade my system, but I have 3 choices that I'd love for anybody to weigh in on if you have some insight! My three choices are:

    1: Upgrade the CPU to a Ryzen 9 3900X, which is unparalleled in terms of graphics rendering. Up against the competition it almost matches for gaming performance, but completely outstrips it in rendering in programs like DAZ.

    2: Buy another GPU - another GTX 1080 and run them together in SLI. Obviously having two of them would work, and running in SLI is going to improve it all that much more!

    3: Buy another GPU - an RTX 2080 WITHOUT SLI. SLI doesn't work if the cards are different, so they'd be operating as individuals, and I'm not sure how that would compare to fully SLI'd 1080s, or even if DAZ can use two graphics cards at once that aren't in SLI.

    I'd love to hear what people think!
    Cheers

    If you've only got one card right now, the clear best choice is another identical GPU. Your renders will be nearly 100% faster. When I went from one GTX 1080ti to two, it was pretty much a linear improvement. That's the nature of ray tracing. I don't think there's another single component that would give you that sort of improvement; my first-gen Threadripper is not that much faster than the 6-core X6 it replaced, and to be honest, I kind of regret buying it. I think you'll be pleased with the GPU upgrade, as others have been saying as well.

    Good luck!

  • outrider42 Posts: 3,679
    edited August 2019

    So I'm looking to upgrade my hardware for faster renders and previews as I'm sick of waiting ages for my viewport to render the Iray preview.
    Currently I have:
    -GTX 1080
    -AMD Ryzen 5 1600 (6 core)

    I was having a problem during full renders where (even though both were ticked) Daz would default to using only my CPU and not use my GPU at all. It would max out my CPU, and the whole computer would then be unusable while it rendered. If I unselected the CPU and had only the GPU selected, then it would use the GPU and max it out, but at least my computer doesn't freeze up and I can use it for other things that aren't graphics intensive while I wait.

    I'd like to upgrade my system, but I have 3 choices that I'd love for anybody to weigh in on if you have some insight! My three choices are:

    1: Upgrade the CPU to a Ryzen 9 3900X, which is unparalleled in terms of graphics rendering. Up against the competition it almost matches for gaming performance, but completely outstrips it in rendering in programs like DAZ.

    2: Buy another GPU - another GTX 1080 and run them together in SLI. Obviously having two of them would work, and running in SLI is going to improve it all that much more!

    3: Buy another GPU - an RTX 2080 WITHOUT SLI. SLI doesn't work if the cards are different, so they'd be operating as individuals, and I'm not sure how that would compare to fully SLI'd 1080s, or even if DAZ can use two graphics cards at once that aren't in SLI.

    I'd love to hear what people think!
    Cheers

    Like Richard said, it appears you are running out of VRAM. Buying a 2nd 1080 or a 2080 will not help at all if this is the case, as you are being capped by the 8GB VRAM in the 1080.

    This means you have only a few options:

    1- Figure out ways to better optimize your scenes to get under that 8GB VRAM limit. There are many ways to do so, which can be found in these forums. You can remove textures that are not visible (like mouths), you can simplify the textures/geometry of background items, and so on.

    2- Buy a GPU that has more than 8GB VRAM. Sadly, the options are expensive. Only a few Nvidia gaming GPUs go beyond 8GB. Maybe you can find a 1080ti 11GB, but those are no longer sold new. There are some last gen Titans that have 12GB, the current 2080ti has 11GB, and the Titan RTX has 24GB. But the Titan RTX is $2500. Then there is Quadro, which can get wildly expensive.

    3- Go for better CPU rendering, which is only limited by system RAM, not GPU VRAM. Like a Threadripper or maybe the new 12 core 3900X. AMD also has a 16 core 3950X coming eventually. However, I very much doubt any CPU will be nearly as fast as a 1080 would be. This would also mean building a whole new machine with top end CPUs, an expensive task. I am still waiting to see somebody bench a 3900X by itself with Iray. Maybe it can surprise us.

    Those are the only options you have. I would very strongly suggest trying the "free" option here before spending cash on what may become a bit of a wild ray-traced goose chase.

    Some notes: GPU rendering with Iray is nothing like a video game. You do not need to concern yourself with a so-called "balanced build" here. Faster DDR4 is pointless with GPU rendering. How pointless? You can shove a brand new 1080 into a Core 2 Quad with DDR2 and it will render at the exact same speed as it will with the fastest CPU and fastest DDR4. This has been tested and well proven several times over by different people right here in the forums. In fact, Iray is so GPU-focused that few other specs matter. It is only if you plan on using a CPU for rendering that the CPU and other components might matter. And obviously if you like to play video games on this machine then it matters. But for Iray GPU rendering, the GPU is the ONLY thing that matters. I would not go to that extreme, but this demonstrates how Iray performs.
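
    As a practical aside for the original CPU-fallback problem: you can confirm a render actually stayed on the GPU by polling nvidia-smi (it ships with the NVIDIA driver) while the render runs. A minimal sketch; if GPU utilization sits near zero and VRAM use stays flat, Iray has dropped to the CPU:

        # Poll GPU utilization and memory every 5 seconds during a render.
        import subprocess
        import time

        while True:
            out = subprocess.check_output([
                "nvidia-smi",
                "--query-gpu=utilization.gpu,memory.used,memory.total",
                "--format=csv,noheader",
            ], text=True)
            print(out.strip())
            time.sleep(5)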

  • p0rt Posts: 217
    p0rt said:
    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    You should really stop.

    Why? All CUDA is is a way to take some load off the CPU, while GPUs were not designed for that purpose during rendering, so the direct compute framework was created, with a single-operation processor dedicated to floating-point math.
  • kenshaw011267 Posts: 3,805
    p0rt said:
    p0rt said:
    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    You should really stop.

    Why? All CUDA is is a way to take some load off the CPU, while GPUs were not designed for that purpose during rendering, so the direct compute framework was created, with a single-operation processor dedicated to floating-point math.

    Because everything you say is either wrong or a very inaccurate version of things.

    For instance, CUDA doesn't just do one FP operation at a time; it does thousands of the same operation in parallel. That's why CUDA, and GPGPUs in general, are so good at things like AI or rendering, which require the same relatively minor operation on many thousands to millions of data items very quickly.
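
    A loose CPU-side analogy of that workload shape, for anyone curious: one small operation applied across a million independent items at once. This is not how CUDA actually executes; it just illustrates why wide parallel hardware wins at rendering-style math:

        # Ray-tracing-style work is the same small float operation repeated
        # over millions of independent items. NumPy applies one vectorized
        # operation to the whole array, rather than looping element by element.
        import numpy as np

        rays = np.random.rand(1_000_000, 3)   # a million direction vectors
        norms = np.linalg.norm(rays, axis=1)  # one operation over all rows
        unit = rays / norms[:, None]          # normalizes every vector at once
        print(unit.shape)                     # (1000000, 3)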

  • p0rt Posts: 217
    p0rt said:
    p0rt said:
    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    You should really stop.

    Why? All CUDA is is a way to take some load off the CPU, while GPUs were not designed for that purpose during rendering, so the direct compute framework was created, with a single-operation processor dedicated to floating-point math.

    Because everything you say is either wrong or a very inaccurate version of things.

    For instance, CUDA doesn't just do one FP operation at a time; it does thousands of the same operation in parallel. That's why CUDA, and GPGPUs in general, are so good at things like AI or rendering, which require the same relatively minor operation on many thousands to millions of data items very quickly.

    I said processor, a CUDA core, not 3000+ cores dedicated to FP math.
  • In before the lock. Guys, if we don't keep this civil, the mods are gonna lock the thread. Here is an Iray primer for those who might need it.

    https://www.nvidia.com/en-us/design-visualization/iray/

  • Sevrin Posts: 6,313
    edited August 2019

    This is silly. Iray, from Nvidia, a graphics card company, is designed to help sell Nvidia graphics cards. They don't make their money with Iray; they make their money with graphics cards.

    The better, newer, more expensive Nvidia graphics card you have, and the more Nvidia graphics cards you have, the better the result will be. Nothing much else comes close in terms of determining performance, as long as the rest of your system can keep up with your Nvidia graphics card(s).

  • Sevrin said:

    This is silly. Iray, from Nvidia, a graphics card company, is designed to help sell Nvidia graphics cards. They don't make their money with Iray; they make their money with graphics cards.

    The better, newer, more expensive Nvidia graphics card you have, and the more Nvidia graphics cards you have, the better the result will be. Nothing much else comes close in terms of determining performance, as long as the rest of your system can keep up with your Nvidia graphics card(s).

    Iray rendering is designed to be used with Nvidia Quadro professional-level video cards and professional-level software. Luckily we can also use Iray on our consumer-level cards with DS.

  • kenshaw011267 Posts: 3,805
    p0rt said:
    p0rt said:
    p0rt said:
    And going from 14nm to 7nm Ryzens with faster RAM has to mean at least 40% more floating-point calculations per second, which probably outnumbers a few hundred extra CUDA cores, even without running some benchmarks.

    You should really stop.

    Why? All CUDA is is a way to take some load off the CPU, while GPUs were not designed for that purpose during rendering, so the direct compute framework was created, with a single-operation processor dedicated to floating-point math.

    Because everything you say is either wrong or a very inaccurate version of things.

    For instance, CUDA doesn't just do one FP operation at a time; it does thousands of the same operation in parallel. That's why CUDA, and GPGPUs in general, are so good at things like AI or rendering, which require the same relatively minor operation on many thousands to millions of data items very quickly.

    I said processor, a CUDA core, not 3000+ cores dedicated to FP math.

    There is no such thing as 1 CUDA. There are always multiple CUDA cores per streaming multiprocessor (SM), and there are always multiple SMs per GPU.

    So talking about 1 CUDA core makes no sense.
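
    To put numbers on that, a quick sketch using NVIDIA's published layout for the OP's card (Pascal GP104):

        # GTX 1080: 20 streaming multiprocessors, 128 CUDA cores per SM.
        sms = 20
        cores_per_sm = 128
        print(sms * cores_per_sm)  # 2560 CUDA cores total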

  • What I want from Santa this year.

    https://www.nvidia.com/en-us/design-visualization/visual-computing-appliance/

    30,720 CUDA cores and 256 GB of shared video memory.

    It is rumored that you can share video memory on two consumer-level RTX 2080 video cards if you buy the Quadro Pro NVLink (about $700.00 vs about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    CUDA cores are not analogous to CPU cores. AFAIK they don't even really exist and are more of a conceptual device for developers; from what I understand, they can't perform control-flow instructions and don't have separate instruction sets for each of the "cores". The VC just sort of rips through parallel math operations and can emulate the existence of separate cores.

    It is rumored that you can share video memory on two consumer-level RTX 2080 video cards if you buy the Quadro Pro NVLink (about $700.00 vs about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Nah, there's no way. NVIDIA probably could have enabled memory pooling on the 2080 and 2080ti (the 2080ti is not significantly different from the RTX Titan, which does support memory pooling), but they chose not to. This forces companies to pay (a massive amount) extra for Quadro or Titan if they want something that is likely to be of almost no benefit to the vast majority of consumers. AFAIK the RTX Titan uses the exact same bridge as the 2080/2080ti, but I could be wrong about that. That being said, people find hacks for this sort of thing all the time; I've heard there is some way to fake a bridge for 2x RTX 2060s (physically it exists across the PCIe slots) to enable SLI. I don't know much about it though.

  • kenshaw011267kenshaw011267 Posts: 3,805

    What I want from Santa this year.

    https://www.nvidia.com/en-us/design-visualization/visual-computing-appliance/

    30,720 CUDA cores and 256 GB of shared video memory.

    It is rumored that you can share video memory on two consumer-level RTX 2080 video cards if you buy the Quadro Pro NVLink (about $700.00 vs about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Do not do this! The links appear to be completely identical except for the one for the RTX 5000, which is smaller and incompatible with all other links.
