Might GPU Prices Prompt a Return to CPU-based PBR?

Nyghtfall3D Posts: 764
edited March 2018 in The Commons

I only recently learned about cryptocurrency miners driving up GPU prices and found myself wondering if the issue may have inadvertently shed light on one disadvantage of GPU-based rendering: Cost.

Before DAZ introduced us to Iray two years ago, I never imagined turning my workstation into a multi-GPU render rig.  Though CPU rendering with LuxRender was slow, I was perfectly content with my GTX 760 as my display driver and sole gaming card.  Since then, I've upgraded thrice.  My current setup includes a 1080 Ti and 980 Ti.  I was hoping to go dual 1080 Ti at some point, but that's clearly not going to happen anymore.

As I think back on how much money I've spent on GPUs in just two years' time, it occurs to me that I might never stop chasing increasingly faster GPU render speeds, and that really upsets me.  I do not want to spend my hard-earned money constantly upgrading my rig, but I know I'm not going to be able to help myself.  In stark contrast, I'm still using the Core i7-4770K I bought in 2013.  I could've bought a new CPU, motherboard, and RAM with the money I've spent on GPUs.

Now I wish Iray hadn't so successfully driven us PBR artists toward GPU-based rendering.  Maybe Paolo would still be keeping Reality updated.  The current version is two years old.  On a related note, last December, it was announced that LuxRender is being rebooted with a new team.  The old team left the project.  The current version is also two years old.  :P

One thing I miss about CPU rendering is not being bottlenecked by VRAM.  I've got 11 GB on my 1080 Ti but would love to give my projects 32 GB of extra breathing room in system RAM.

 

EDIT: The first few responses to this thread above my last post have put things back into perspective for me.  I feel much better now and will continue thoroughly enjoying GPU-only rendering.

 

Comments

  • tj_1ca9500b Posts: 2,048
    edited March 2018

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.
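
    A quick back-of-envelope check on that guess -- a minimal sketch assuming render time scales inversely with thread count times per-thread speed; the per-thread penalty is my own rough assumption, not a benchmark:

    ```python
    # Back-of-envelope render-time scaling for the estimate above.
    # All inputs are assumptions, not measurements.
    baseline_minutes = 8.2    # i7-7980XE figure cited from the benchmarks thread
    baseline_threads = 32
    epyc_threads = 128        # dual EPYC 7601
    per_thread_penalty = 0.5  # assume each EPYC thread is ~half as fast

    speedup = (epyc_threads / baseline_threads) * per_thread_penalty
    print(f"Estimated dual-EPYC render time: {baseline_minutes / speedup:.1f} min")
    # -> about 4.1 minutes with these guesses, in line with the post above
    ```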

  • Greymom Posts: 1,104

    Paolo is looking at a path for updating Reality also, according to the forum.  

    Although there were problems, I was pretty happy with Reality/Luxrender, and I've got the hardware I need already for multi-machine or GPU (AMD) rendering (once some of the current driver issues are ironed out).

  • Still chugging along with my GTX 770! I had figured that I would upgrade at some point but never got around to it. Not likely to happen any time soon. I bought some Iray-to-3DL converters in case my card fails.

    I never really figured 3DL out. Maybe I can try Reality? I was always amazed by many of the Reality renders that I saw.

  • Oso3D Posts: 14,896

    I always laugh when people complain about 12 GB of VRAM, when I've been pretty happy with my 3.5 GB. Heh

  • Havos Posts: 5,310

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.

    GPU prices may have increased, but you can still get a 1080 for much, much less than $10-20K; in fact, you could get at least 10-20 such cards.  If you are seriously considering buying one of these CPU beasts, the principal reason should be something other than rendering.

    GPU prices may have increased, but we should get things into perspective.  Prices are maybe 50% up on what they were earlier, but they would need to rise tenfold before CPU rendering starts to make economic sense.

    I get that CPU has certain advantages (e.g. not being limited by VRAM size), but I have never had an issue getting scenes (including complex ones) into 4 GB of VRAM. I personally much prefer rendering on GPU only, meaning I can continue to use my PC for other tasks, instead of it running like a dog whilst all the processors are maxed out at 100%.
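
    To put hypothetical numbers on that comparison, here is a sketch of cost per unit of render throughput; every price and speed ratio below is an assumption for illustration, not market data or a benchmark:

    ```python
    # Cost per unit of render throughput, GPU card vs dual-EPYC box.
    # Throughput is normalized so a 16-core/32-thread CPU = 1.0.
    gpu_price = 750.0         # assumed mining-inflated 1080 price (USD)
    gpu_speed = 4.0           # assumed: ~8 min CPU benchmark vs ~2 min on a 1080
    cpu_box_price = 12_000.0  # low end of the dual-EPYC estimate above
    cpu_box_speed = 2.0       # assumed: ~2x the 16-core baseline

    print(f"GPU:     ${gpu_price / gpu_speed:,.0f} per unit of throughput")
    print(f"CPU box: ${cpu_box_price / cpu_box_speed:,.0f} per unit of throughput")
    # Even at inflated prices the GPU wins by well over an order of magnitude.
    ```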

  • daveso Posts: 6,466

    It's all about your available cash flow, though. I have an onboard GPU and quite frankly cannot afford any of the current graphics cards. Perhaps used would work, if you can find them.
    Although if I quit buying DAZ products for 6 months, I could afford a 1080 Ti

    Having to continually upgrade has been a trend for as long as I've owned a computer, and that started with an Atari 800 way back. It's a never-ending thing... so far in the history of personal computing, unless you're happy with amber screens and 8-bit graphics

  • Nyghtfall3D Posts: 764
    edited March 2018
    Havos said:
    I personally much prefer rendering on GPU only, meaning I can continue to use my PC for other tasks, instead of it running like a dog whilst all the processors are maxed out at 100%.

    Good point.  Shortly after DS 4.8 was released, I discovered that un-checking my CPU in the render settings let me easily multitask while my GPU rendered in the background.  I will also grant that I've yet to see any of my projects use more than about 4 GB of VRAM.  Nevertheless, seeing them even get close to that is what prompted me to swap out one of my 980 Tis for a 1080 Ti.

    One thing I do not miss about CPU rendering is having to tell Lux to use all but one of the processors.
  • Greymom Posts: 1,104
    daveso said:

    It's all about your available cash flow, though. I have an onboard GPU and quite frankly cannot afford any of the current graphics cards. Perhaps used would work, if you can find them.
    Although if I quit buying DAZ products for 6 months, I could afford a 1080 Ti

    Having to continually upgrade has been a trend for as long as I've owned a computer, and that started with an Atari 800 way back. It's a never-ending thing... so far in the history of personal computing, unless you're happy with amber screens and 8-bit graphics

    I remember being really excited that I had saved up the $300 to buy another 4K (yes, kilobytes) card for my Atari 800!  Still have it.

  • Havos Posts: 5,310
    Nyghtfall said:
    Havos said:
    I personally much prefer rendering on GPU only, meaning I can continue to use my PC for other tasks, instead of it running like a dog whilst all the processors are maxed out at 100%.

    Good point.  Shortly after DS 4.8 was released, I discovered that un-checking my CPU in the render settings let me easily multitask while my GPU rendered in the background.  I will also grant that I've yet to see any of my projects use more than about 4 GB of VRAM.  Nevertheless, seeing them even get close to that is what prompted me to swap out one of my 980 Tis for a 1080 Ti.

    One thing I do not miss about CPU rendering is having to tell Lux to use all but one of the processors.

    I also don't miss my PC sounding like a plane taking off when the main CPU fan hits max speed to cool down my melting processors. Even if 3DL could be used to create renders as nice-looking as Iray's, I would be reluctant to go back to CPU rendering for this reason alone.

  • Nyghtfall3D Posts: 764

    Thanks to everyone for your insight.  I feel much better now.
  • Pack58 Posts: 750
    edited March 2018
    Oso3D said:

    I always laugh when people complain about 12 GB of VRAM, when I've been pretty happy with my 3.5 GB. Heh

    LOL. Yep, I happily struggle along with my two $15.00 clearance GT 710s. Still better than CPU rendering with the third-gen quad-core i5 (no hyperthreading) on that machine.

  • Greymom Posts: 1,104

    My "workflow":  1) Start render.   2) Go to bed.   3) Check in morning.

    I have done most of my rendering in the past with an old Core 2 Quad Q6600 machine.  Working on upgrading finally.

  • Tynkere Posts: 834
    Oso3D said:

    I always laugh when people complain about 12 GB of VRAM, when I've been pretty happy with my 3.5 GB. Heh

    Heh.  Maybe there's some wisdom to that.  I should pay more attention to the 'veterans' here.  For example, in one topic-- I forget where-- you mentioned converting Iray-only stuff to 3DL in less than 5 minutes?  Wish I could do it that quickly!

    As for the VRAM game, maybe learn from my mistake.  I traded a 1070 8 GB for a used P5000 16 GB for $800.  Very difficult decision for me.  I should have known better.

    Place 8 actors and fill the scene with props and it approaches 14 GB, I think.  Then watch the Iray render's geometry eat 32 GB of system RAM for lunch.  So I can't even use this card's potential unless I spend (waste?) even more money upgrading to 64 GB of RAM?  Not worth it.  Granted, it's a nice card-- I didn't know it supports color calibration at higher bit depth in Photoshop on an NEC monitor-- but the money would've probably been better invested in RAM.

    Hindsight is always 20/20  : )

    Just a few random thoughts this morning.

    Thanks for reading

    --Bruce

  • tj_1ca9500b Posts: 2,048
    edited March 2018
    Havos said:

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.

    GPU prices may have increased, but you can still get a 1080 for much, much less than $10-20K; in fact, you could get at least 10-20 such cards.  If you are seriously considering buying one of these CPU beasts, the principal reason should be something other than rendering.

    GPU prices may have increased, but we should get things into perspective.  Prices are maybe 50% up on what they were earlier, but they would need to rise tenfold before CPU rendering starts to make economic sense.

    I get that CPU has certain advantages (e.g. not being limited by VRAM size), but I have never had an issue getting scenes (including complex ones) into 4 GB of VRAM. I personally much prefer rendering on GPU only, meaning I can continue to use my PC for other tasks, instead of it running like a dog whilst all the processors are maxed out at 100%.

    For the record, I'm currently using Iray and dual 1080s (6.4 GB each after Windows 10 reserves some GPU memory).

    I looked into downgrading to Windows 7, but my (uber) laptop has a BIOS that is specific to Windows 10...

    I hit the VRAM wall fairly often, and have ended up doing multiple passes (with some characters hidden) to try to get things to fit.  Also, Daz Studio has this annoying trait where sometimes I have to restart after the first pass, as the VRAM isn't getting 'cleared' between renders.  In that case, even reloading the scene isn't enough; a full restart is required.

    Plus, I have some VERY large scenes that I've put together, and then there's that Airport Island stuff by Perry Walinga, which can easily soak up well over 6.4 GB by itself, even for just a section of it.  I do have Scene Optimizer, which I use occasionally, but that takes time away from just rendering.  And doing things in multiple passes doubles or triples the time for just one fairly complex scene...

    Soooo, the prospect of not having to sweat VRAM is attractive to me.  Quadro P6000s are going for around $5K currently (they have 24 GB of VRAM), or just a bit more than the most expensive EPYC CPU (not counting mobo, PSU, memory, storage, etc.).

    The other thing EPYC can do is easily accommodate, say, 8 GPUs (with the right mobo setup)... so you could 'tag team' your Iray renders, alternating between two sets of four GPUs, but still have the CPU-only option for very complex renders... Sure, you could build multiple boxes for this, but an 8-GPU dual-EPYC setup would have a LOT of flexibility for rendering (or start with a single-EPYC config if you wanted to pinch pennies until you could afford the second CPU).

    I've thought about grabbing a P6000 as well, with an external enclosure utilizing the TBolt 3 port on my laptop as an option.
  • Oso3D Posts: 14,896
    Tynkere said:
    Oso3D said:

    I always laugh when people complain about 12 GB of VRAM, when I've been pretty happy with my 3.5 GB. Heh

    Heh.  Maybe there's some wisdom to that.  I should pay more attention to the 'veterans' here.  For example, in one topic-- I forget where-- you mentioned converting Iray-only stuff to 3DL in less than 5 minutes?  Wish I could do it that quickly!

    Check out the free Iray to 3DL script!

    https://www.daz3d.com/forums/discussion/139326/irayto3delight-conversion-script

    While you will probably want to tweak some stuff, IME it's not any more tweaking than you'd want to do with stuff set up for 3DL anyway (like glass, emissives, etc.).

    This other script, also free from Esemy, is a HUGELY useful tool for manipulating surfaces, no matter what renderer you are using:

    https://www.daz3d.com/forums/discussion/196421/search-and-select-surfaces-in-daz-studio

     

  • Havos Posts: 5,310
    Tynkere said:
    Oso3D said:

    I always laugh when people complain about 12 GB of VRAM, when I've been pretty happy with my 3.5 GB. Heh

    Heh.  Maybe there's some wisdom to that.  I should pay more attention to the 'veterans' here.  For example, in one topic-- I forget where-- you mentioned converting Iray-only stuff to 3DL in less than 5 minutes?  Wish I could do it that quickly!

    As for the VRAM game, maybe learn from my mistake.  I traded a 1070 8 GB for a used P5000 16 GB for $800.  Very difficult decision for me.  I should have known better.

    Place 8 actors and fill the scene with props and it approaches 14 GB, I think.  Then watch the Iray render's geometry eat 32 GB of system RAM for lunch.  So I can't even use this card's potential unless I spend (waste?) even more money upgrading to 64 GB of RAM?  Not worth it.  Granted, it's a nice card-- I didn't know it supports color calibration at higher bit depth in Photoshop on an NEC monitor-- but the money would've probably been better invested in RAM.

    Hindsight is always 20/20  : )

    Just a few random thoughts this morning.

    Thanks for reading

    --Bruce

    If your scene is using 32 GB of RAM just for geometry, I suspect it has not been well optimized.  You would need a lot of characters at very high levels of Sub-D (i.e. 4+) to get to that number of vertices/polygons.  If these are not close-ups of the characters in question, it is highly unlikely such a high level of Sub-D will make any difference to the final render.

    The problem with never bothering to properly optimize a scene is that once you've spent the megabucks on a better machine, in no time you outgrow it and are looking again at the need to upgrade.
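
    As a rough illustration of why geometry alone rarely gets that big: each Sub-D level roughly quadruples the polygon count. A sketch with assumed numbers (the base mesh size and per-vertex cost are ballpark guesses):

    ```python
    # Approximate geometry memory per figure as Sub-D level rises.
    base_quads = 17_000    # ballpark for a Genesis-class base mesh (assumption)
    bytes_per_vertex = 64  # guess: position, normal, UVs, misc attributes

    for level in range(5):
        quads = base_quads * 4 ** level  # quads quadruple each level
        verts = quads                    # ~1 vertex per quad on a closed mesh
        mb = verts * bytes_per_vertex / 2**20
        print(f"Sub-D {level}: ~{quads:,} quads, ~{mb:,.0f} MB")
    # Even at Sub-D 4 a single figure is only ~270 MB here; filling 32 GB
    # with geometry alone would take a very large cast at very high Sub-D.
    ```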

  • Oso3D Posts: 14,896

    Personally, I'd rather spend 30 extra minutes optimizing a scene than 30 grand on some hotrod machine.

     

  • j cade Posts: 2,310
    Oso3D said:

    Personally, I'd rather spend 30 extra minutes optimizing a scene than 30 grand on some hotrod machine.

     

    Here here. Honestly, it's pretty fun to see just how much I can fit on my puny little 2 GB GPU. And so many of the optimizations are so simple, like removing hidden textures.
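
    For anyone curious what removing or shrinking textures actually buys, a rough sketch of uncompressed texture cost in VRAM (Iray's real in-memory layout may differ, so treat these as upper-bound estimates):

    ```python
    # Uncompressed texture footprint: width x height x channels x bytes,
    # plus roughly one third extra if mipmaps are generated.
    def texture_mb(width, height, channels=4, bytes_per_channel=1, mips=True):
        mb = width * height * channels * bytes_per_channel / 2**20
        return mb * 4 / 3 if mips else mb

    print(f"4096 x 4096 map: ~{texture_mb(4096, 4096):.0f} MB")  # ~85 MB each
    print(f"2048 x 2048 map: ~{texture_mb(2048, 2048):.0f} MB")  # ~21 MB each
    # A figure carrying a dozen 4K maps is ~1 GB of textures before any
    # geometry; dropping hidden maps or halving resolutions recovers most of it.
    ```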
  • bluejaunte Posts: 1,861
    j cade said:
    Oso3D said:

    Personally, I'd rather spend 30 extra minutes optimizing a scene than 30 grand on some hotrod machine.

     

     

    Here here. Honestly, it's pretty fun to see just how much I can fit on my puny little 2 GB GPU. And so many of the optimizations are so simple, like removing hidden textures.

    https://www.grammarly.com/blog/here-here-vs-hear-hear/

    Sorry.

  • kyoto kid Posts: 40,602
    Greymom said:

    Paolo is looking at a path for updating Reality also, according to the forum.  

    Although there were problems, I was pretty happy with Reality/Luxrender, and I've got the hardware I need already for multi-machine or GPU (AMD) rendering (once some of the current driver issues are ironed out).

    ...the geologic-scale render times were one of the main reasons I pulled the plug on Reality/Lux.  Yeah, I know there is the speed boost, but on my old system it was only 3x (not 10x), and it came at the expense of render quality.

    Furthermore, R4 tended to have a number of bugs, the worst of which was not being backwards compatible (for example, when I tried to render a scene created in the previous 4.x release, I still found myself having to convert all the materials over again from scratch).  It also seemed that with each patch that fixed one bug, another one or two appeared.

  • kyoto kid Posts: 40,602

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.

    ...so far Epyc only supports Linux (which Daz Studio doesn't run on).  The 32-core CPUs are also very expensive: $3,400 for the 2.0/3.0 GHz Epyc 7501 and $4,200 for the 2.2/3.2 GHz Epyc 7601.  For that you could get a 16 GB Quadro P5000 with 2,560 CUDA cores and have $1,400 in change (or two for the price of the Epyc 7601).

  • j cade Posts: 2,310
    j cade said:
    Oso3D said:

    Personally, I'd rather spend 30 extra minutes optimizing a scene than 30 grand on some hotrod machine.

     

     

    Here here. Honestly, it's pretty fun to see just how much I can fit on my puny little 2 GB GPU. And so many of the optimizations are so simple, like removing hidden textures.

    https://www.grammarly.com/blog/here-here-vs-hear-hear/

    Sorry.

    DOH. I even thought about checking it. :(

  • kyoto kid Posts: 40,602
    edited March 2018
    Havos said:
    Nyghtfall said:
    Havos said:
    I personally much prefer rendering on GPU only, meaning I can continue to use my PC for other tasks, instead of it running like a dog whilst all the processors are maxed out at 100%.

    Good point.  Shortly after DS 4.8 was released, I discovered that un-checking my CPU in the render settings let me easily multitask while my GPU rendered in the background.  I will also grant that I've yet to see any of my projects use more than about 4 GB of VRAM.  Nevertheless, seeing them even get close to that is what prompted me to swap out one of my 980 Tis for a 1080 Ti.

    One thing I do not miss about CPU rendering is having to tell Lux to use all but one of the processors.

    I also don't miss my PC sounding like a plane taking off when the main CPU fan hits max speed to cool down my melting processors. Even if 3DL could be used to create renders as nice-looking as Iray's, I would be reluctant to go back to CPU rendering for this reason alone.

    ...I have made that transition.  With IBL Master we now have superior indirect lighting without the long render times of UE.  I tend to purchase a lot of shader resource kits, as I also did with 3DL.  Using these, along with the experience from my Iray experiments, I have been able to get better-looking metals and glass than previously.

    There is also a mega 3DL shader system in testing right now which, as I understand it, will open up a lot more of what this engine is really capable of.

    When even a simple test render in Iray of a single character with a neutral backdrop takes upwards of 40 min on the CPU (without increasing Sub-D), compared to 10-15 min for a full scene with multiple characters, as well as transmaps and reflections, in 3DL, the latter wins out when it comes to workflow.  I don't have to mess with time-consuming workarounds to get everything to fit in a small amount of VRAM, or extensive postwork to get the desired results.

  • kyoto kid said:

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.

    ...so far Epyc only supports Linux (which Daz Studio doesn't run on).  The 32-core CPUs are also very expensive: $3,400 for the 2.0/3.0 GHz Epyc 7501 and $4,200 for the 2.2/3.2 GHz Epyc 7601.  For that you could get a 16 GB Quadro P5000 with 2,560 CUDA cores and have $1,400 in change (or two for the price of the Epyc 7601).

    Windows Server is also supported: https://community.amd.com/thread/226108

  • kyoto kid Posts: 40,602

    ...OK, that is news compared to the review I read after the initial release of Epyc (they did manage to get it sort of working with W10). However, who here can afford a licence for Windows Server Edition?

    Xeon CPUs at least support 64-bit desktop versions of Windows (up to two CPUs)

  • Greymom Posts: 1,104
    kyoto kid said:

    ...OK, that is news compared to the review I read after the initial release of Epyc (they did manage to get it sort of working with W10). However, who here can afford a licence for Windows Server Edition?

    Xeon CPUs at least support 64-bit desktop versions of Windows (up to two CPUs)

    I am seeing used, tested, 30-day-warranty E5-2680 v2 10-core Xeons for ~$175 on eBay.  Some listed as new are going for about twice that or a bit more.  That's pretty darn good.  Supermicro and ASRock Rack C602 motherboards are still available; I saw one of the ASRocks on sale for $280.  But, at the moment, you can still get custom-rebuilt refurbished servers, as mentioned above, for less than the total for the parts, through eBay or Newegg.

  • tj_1ca9500b Posts: 2,048
    kyoto kid said:

    I've been seriously pondering dual EPYC... 128 threads might actually get somewhat close to Iray/CUDA-core render times.  Looking at the Daz Iray benchmarks thread, an i7-7980XE (16 cores/32 threads) is clocking in at just over 8 minutes.  Sure, EPYC is slower per core, but with quadruple the number of cores/threads, I'm guessing it might come in at about 4 minutes.  Four minutes is still slower than, say, 1 minute (1080/dual-1080 territory), but that's a lot better than half an hour per render...

    Of course, the buy-in for dual EPYC is in the $10-20K range, which is why I'm only pondering it and not doing it.

    ...so far Epyc only supports Linux (which Daz Studio doesn't run on).  The 32-core CPUs are also very expensive: $3,400 for the 2.0/3.0 GHz Epyc 7501 and $4,200 for the 2.2/3.2 GHz Epyc 7601.  For that you could get a 16 GB Quadro P5000 with 2,560 CUDA cores and have $1,400 in change (or two for the price of the Epyc 7601).

    Multiple server vendors offer Windows Server with EPYC builds (Supermicro, etc.), and have pretty much since the beginning.  They also offer a number of Linux options.

    Titan Computers also offers Windows 10 Pro 64-bit with their dual-EPYC workstation.  That mobo config only has a couple of x16 slots, though...

    The main issue I've seen is the lack of mobo options, at least for EPYC.  So far, the only one I've found with a (theoretically) feasible 8-GPU option is the Supermicro H11DSU-iN, but most of those riser slots are only x8, so you'd likely be using riser (ribbon) extenders that have x16 slots (to fit the x16 cards).  It'd be a DIY build in a rack config (i.e. having to 'bolt on' the x16 extenders and provide additional power supplies for the GPUs).  You'd also be at x8 for 7 of the 8 GPUs, which really isn't a problem, since x16 currently only gains you a couple of percentage points of performance (i.e. adding another GPU at x8 will gain you significantly more performance).

    BUT, if you were looking for a system that you could add on to as budget allows (i.e. adding additional GPUs later), the expandability options are huge.  Plus, equipping it with 512 GB of RAM (16 x 32 GB sticks) actually makes parking a lot of stuff in a ramdisk a very feasible option (with an HDD/SSD backup to store the ramdisk image when you shut down, of course).  Yeah, 99% of people won't need a build like this, but if you are doing a lot of animation sequences (where you are rendering multiple frames back to back), you can see why reducing render times becomes so important.  And the option to render very complex scenes more quickly on the 128 CPU threads, when the CUDA cores let you down, is there too.

    Most of the other workstation options with 8 GPUs are Intel-only currently, which means fewer cores and fewer PCIe lanes... but obviously they managed to get 8 GPUs to work somehow.  EPYC doesn't really have an 8-GPU workstation mobo option currently, at least not that I've found.
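
    For a sense of the lane budget behind that 8-GPU idea, a sketch: the 128-lane figure is the dual-socket EPYC platform total, but the slot layout itself is hypothetical, not a specific motherboard's.

    ```python
    # PCIe lane budget for a hypothetical 8-GPU dual-EPYC build.
    total_lanes = 128  # usable PCIe lanes on a dual-socket EPYC platform
    gpus = 8
    lanes_per_gpu = 8  # every card run at x8, as discussed above

    used = gpus * lanes_per_gpu
    print(f"GPUs use {used} lanes; {total_lanes - used} left for NVMe, NICs, etc.")
    # At x8 per card the platform isn't lane-starved, which is why giving up
    # the couple-of-percent x16 advantage is an easy trade.
    ```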

    Of course, if you are doing 3Delight, having 128 CPU threads starts to look VERY attractive, since CUDA cores are no help there, at least in Daz.  Again, that Titan Computers EPYC workstation would work well for this.

    Such a build isn't for most people, but for those looking to maximize their workload options, you can see the possibilities.  But there's always 'bigger and better' just around the corner.  AMD's 7 nm options will be hitting the market next year, and a 64-core/128-thread server CPU is on AMD's roadmap (so that's 128 cores and 256 threads in a 2x CPU build).  Plus, at 7 nm, they should be a little faster than the current options...  And Intel's 10 nm HEDT options should (finally) be hitting the market in the coming months as well.

    ------

    On the optimization thing...

    I tried optimizing an Airport Island scene once, by cutting down texture sizes and doing a whole bunch of other things.  The scene was a more open 'indoor long shot', and I was never able to get it to fit into the 6.4 GB limit of my 1080s.  And I spent hours trying, time I could have been spending on posing/setup instead.  This is why the P6000 looks attractive to me; my only question is whether 24 GB (minus whatever Windows reserves) is enough.

    BTW, the fastest way to max out your VRAM: try rendering 8 or more Genesis 2/3/etc. characters simultaneously in a detailed environment... I usually end up splitting these up, but of course this messes with the shadows and such (i.e. characters in one pass that affect the lighting in the other pass).

    Anyways, this isn't something I'd be building immediately, as I'm pretty invested in Iray.  But an external P6000 via the Thunderbolt 3 port is an option I've been seriously considering for a while now.  The $5K-plus price tag is the thing that has me hesitating...

    I'm hoping that the next round of NVidia cards provides a faster alternative to the P6000 with its 24 GB of VRAM, but we shall see... the next round of new NVidia card releases is due in the next couple of months...

     

  • Ghosty12 Posts: 1,985
    edited March 2018
    Greymom said:
    kyoto kid said:

    ...OK, that is news compared to the review I read after the initial release of Epyc (they did manage to get it sort of working with W10). However, who here can afford a licence for Windows Server Edition?

    Xeon CPUs at least support 64-bit desktop versions of Windows (up to two CPUs)

    I am seeing used, tested, 30-day-warranty E5-2680 v2 10-core Xeons for ~$175 on eBay.  Some listed as new are going for about twice that or a bit more.  That's pretty darn good.  Supermicro and ASRock Rack C602 motherboards are still available; I saw one of the ASRocks on sale for $280.  But, at the moment, you can still get custom-rebuilt refurbished servers, as mentioned above, for less than the total for the parts, through eBay or Newegg.

    Just take note that hardware, especially CPUs, bought off eBay at that price might be dodgy.  In a Linus Tech Tips video, they had quite a few problems getting the system to POST, not to mention that they didn't have the heatsinks, which they had to order separately.  They got the system to work, but it was quite the headache for them.

    On other things, there is a story floating around about how Nvidia could be in hot water for anti-competitive behaviour with its GeForce Partner Program...

  • Madbat Posts: 382

    I'm still running a 780 Ti, and that will have to do.  I'd love to upgrade, but that's not happening for months at least, even if I cut back on spending.  Someone up there mentioned geological render times; that's why I never really cared for Reality/Lux-based renders, or 3Delight for that matter.  I've had some run for as long as 87 hours... never again.
