Iray Starter Scene: Post Your Benchmarks!


Comments

  • Takeo.Kensei Posts: 1,303
    edited December 1969

    Dumor3D said:

    I'm thinking of adding a Titan X at some point, just for those occasions that will certainly happen somewhere along the line where 4 GB is not enough, and then for most scenes letting the workhorses be the 980s alongside the Titan. I keep watching to see if something blows my CUDA core theory up. So far it seems that the number of cores gives a very linear gain in speed, regardless of card specs.

    Misgenus published some benchmarks here

    Render time mostly scales with core count, except for Fermi-based cards
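
    As a back-of-the-envelope illustration of that scaling theory, here is a minimal Python sketch. The core counts are NVIDIA's published specs; the single-970 time is extrapolated from the two-970 result posted below, not measured, so treat the output as a rough estimate only:

    ```python
    # Estimate render time under the "time scales inversely with CUDA cores"
    # theory discussed in this thread.

    def estimate_seconds(ref_seconds, ref_cores, target_cores):
        """Assume render time is inversely proportional to CUDA core count."""
        return ref_seconds * ref_cores / target_cores

    # A pair of GTX 970s (2 x 1664 cores) was reported at 2 min 43 s = 163 s,
    # so a single 970 would be ~326 s under perfectly linear scaling.
    single_970 = 326.0
    print(estimate_seconds(single_970, 1664, 2048))  # GTX 980: ~265 s
    print(estimate_seconds(single_970, 1664, 3072))  # Titan X: ~177 s
    ```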

  • BlackDog1966 Posts: 10
    edited March 2015

    Hi guys,

    Tried the benchmark scene on my machine and finished in 2 minutes 43 seconds.

    My machine is a Windows 7 64-bit quad-core i7 with 32 GB of RAM and two GTX 970 GPUs with 4 GB each.

    Post edited by BlackDog1966 on
  • EmmaAndJordi Posts: 952
    edited December 1969

    Hi guys,

    Tried the benchmark scene on my machine and finished in 2 minutes 43 seconds.

    My machine is a Windows 7 64-bit quad-core i7 with 32 GB of RAM and two GTX 970 GPUs with 4 GB each.

    That is an amazing time! What is your processor model? Can you run a test with just one GPU? I am asking because we have an i7 3770K and would like to purchase a GTX 970, so it may help us decide. Thank you! :)

  • Dumor3D Posts: 1,316
    edited December 1969

    Dumor3D said:

    I'm thinking of adding a Titan X at some point, just for those occasions that will certainly happen somewhere along the line where 4 GB is not enough, and then for most scenes letting the workhorses be the 980s alongside the Titan. I keep watching to see if something blows my CUDA core theory up. So far it seems that the number of cores gives a very linear gain in speed, regardless of card specs.

    Misgenus published some benchmarks here

    Render time mostly scales with core count, except for Fermi-based cards
    Interesting. This seems to agree with what I'm finding. As stated, that testing is from a previous version of Iray, so it could differ from our version. I looked at the GTX 780s, which have around 800 more CUDA cores than the 980s, but dropped that idea due to 3 GB VRAM (or at least that is the only option I found at my 'normal online computer store'), as my choice was to go to at least 4 GB.

    Thanks for that link.

  • ZarconDeeGrissom Posts: 5,412
    edited March 2015

    Hi guys,

    Tried the benchmark scene on my machine and finished in 2 minutes 43 seconds.

    My machine is a Windows 7 64-bit quad-core i7 with 32 GB of RAM and two GTX 970 GPUs with 4 GB each.

    That is an amazing time! What is your processor model? Can you run a test with just one GPU? I am asking because we have an i7 3770K and would like to purchase a GTX 970, so it may help us decide. Thank you! :)


    The 970 is a really nice chip, but I'm beginning to have doubts about the 4 GB RAM size on the card. If the scene does not completely fit in the card's RAM, along with whatever else is there (drivers, desktop video buffer, etc.), the card will NOT be used by Iray. Card RAM is not cumulative; it counts for each card independently. If you can, get cards with more RAM, especially for HD figures and higher subdivision levels.

    I will add that all the K6000 and VCA demo vids I've seen so far were of things, not people (figures). Things are easy to optimize, with lower poly counts and no maps at all for most surface types (plastic, metal, glass, etc.), unlike figures.

    Interesting. This seems to agree with what I'm finding. As stated, that testing is from a previous version of Iray, so it could differ from our version. I looked at the GTX 780s, which have around 800 more CUDA cores than the 980s, but dropped that idea due to 3 GB VRAM (or at least that is the only option I found at my 'normal online computer store'), as my choice was to go to at least 4 GB.

    Thanks for that link.
    I was going for a cheaper 4 GB card, just to get my toes wet, per se, and to drive my monitors, to keep that out of the RAM of any additional cards. Yes, having run out of RAM with 16 GB and then 32 GB with nothing running except Daz Studio, I'm really hoping for cards with similar amounts, if not much more RAM.

    I have been there with the 4 GB max address limit of 32-bit less than a year ago, lol. It's just a really close proxy for the 4 GB graphics cards: the drivers eat up some of the video RAM on the card, and you end up with less than 4 GB to work with.

    My graphics card ate up some of those 4 GB of system RAM addresses on the old 32-bit system, so it's close enough to comprehend the limits, lol. It was a Win XP 32-bit computer with two 2 GB RAM sticks in it (4 GB total). I never had all 4 GB to work with. To use most figures, I had to drop the maps below 2k x 2k pixels and even run subdivision levels of zero, not one... just to get hair on the figures with clothes, lol.
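
    If you want to check this on your own rig, here is a minimal sketch using nvidia-smi (which ships with the NVIDIA driver). The scene has to fit in each card's free memory individually, or that card sits the render out:

    ```python
    # Print per-GPU free memory via nvidia-smi. VRAM is per card, not pooled:
    # Iray skips any card whose free memory can't hold the whole scene.
    import subprocess

    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,memory.free",
         "--format=csv,noheader"],
    ).decode()

    for line in out.strip().splitlines():
        name, total, free = (field.strip() for field in line.split(", "))
        print(f"{name}: {free} free of {total}")
    ```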

    Post edited by ZarconDeeGrissom on
  • EmmaAndJordi Posts: 952
    edited December 1969


    The 970 is a really nice chip, but I'm beginning to have doubts about the 4 GB RAM size on the card. If the scene does not completely fit in the card's RAM, along with whatever else is there (drivers, desktop video buffer, etc.), the card will NOT be used by Iray. Card RAM is not cumulative; it counts for each card independently. If you can, get cards with more RAM, especially for HD figures and higher subdivision levels.

    Yes, you are right, Zarcon. Here in the shops there are cards with up to 4 GB, but maybe they can order a bigger one, or we can buy through the Internet.

    On NVIDIA's pages there is this table; if you expand the GeForce Products section, there is a measurement of performance. The results of their own benchmarks are surprising; the 970 scores very well:

    https://developer.nvidia.com/cuda-gpus

  • SimonWM Posts: 924
    edited December 1969

    My System:
    Windows 7 Pro SP1 64-bit
    32 GB RAM, Intel Core i7 3930K @ 3.20 GHz
    EVGA GTX 580 3GB Classified Ultra
    EVGA GTX 580 3GB Classified
    NVIDIA Driver version: 344.65

    My results (to completion):
    GPUs Only = 4 minutes 21.64 seconds
    GPUs + CPU = 4 minutes 28.85 seconds

  • Dumor3D Posts: 1,316
    edited December 1969

    Hi guys,

    Tried the benchmark scene on my machine and finished in 2 minutes 43 seconds.

    My machine is a Windows 7 64-bit quad-core i7 with 32 GB of RAM and two GTX 970 GPUs with 4 GB each.

    That is an amazing time! What is your processor model? Can you run a test with just one GPU? I am asking because we have an i7 3770K and would like to purchase a GTX 970, so it may help us decide. Thank you! :)


    The 970 is a really nice chip, but I'm beginning to have doubts about the 4 GB RAM size on the card. If the scene does not completely fit in the card's RAM, along with whatever else is there (drivers, desktop video buffer, etc.), the card will NOT be used by Iray. Card RAM is not cumulative; it counts for each card independently. If you can, get cards with more RAM, especially for HD figures and higher subdivision levels.

    I will add that all the K6000 and VCA demo vids I've seen so far were of things, not people (figures). Things are easy to optimize, with lower poly counts and no maps at all for most surface types (plastic, metal, glass, etc.), unlike figures.

    Interesting. This seems to agree with what I'm finding. As stated, that testing is from a previous version of Iray, so it could differ from our version. I looked at the GTX 780s, which have around 800 more CUDA cores than the 980s, but dropped that idea due to 3 GB VRAM (or at least that is the only option I found at my 'normal online computer store'), as my choice was to go to at least 4 GB.

    Thanks for that link.


    I was going for a cheaper 4 GB card, just to get my toes wet, per se, and to drive my monitors, to keep that out of the RAM of any additional cards. Yes, having run out of RAM with 16 GB and then 32 GB with nothing running except Daz Studio, I'm really hoping for cards with similar amounts, if not much more RAM.

    I have been there with the 4 GB max address limit of 32-bit less than a year ago, lol. It's just a really close proxy for the 4 GB graphics cards: the drivers eat up some of the video RAM on the card, and you end up with less than 4 GB to work with.

    My graphics card ate up some of those 4 GB of system RAM addresses on the old 32-bit system, so it's close enough to comprehend the limits, lol. It was a Win XP 32-bit computer with two 2 GB RAM sticks in it (4 GB total). I never had all 4 GB to work with. To use most figures, I had to drop the maps below 2k x 2k pixels and even run subdivision levels of zero, not one... just to get hair on the figures with clothes, lol.

    What I have found is that computer RAM usage for a scene does not equal VRAM usage for that scene. Iray does a huge amount of optimization before sending it to the card. I just tested a 12-person scene... I haven't run that for a while in 3DL, but I'm almost positive it went over 8 GB of RAM for the scene (and I sort of remember getting really close to 16 GB used on the system, with Studio, the scene, and a couple of other things running). It used just over 3 GB of VRAM for the Iray scene. A 4 GB card will hold a LOT. I was at the point where moving the scene around in Studio was getting to be a pain in texture view.

    Yes, 4 GB will go a LONG way; 2 GB, not so much. The system can take 500 MB, and then a few other things can cause some quick spikes. What I found was that I was on the edge on 2 GB cards with scenes that approached 1.5 GB used. So, going to a 4 GB card actually gives you almost three times the room for scenes.

    Also, Iray eats up polys. Testing with increasing levels of SubD did very little to increase VRAM use. Textures, however, are a set size. Iray seems excellent at finding textures that are reused, and it appears to load a texture only once in spite of it being used on different figures. But textures can be large, and they will be the thing that eats up your VRAM. Tiling is a good thing. :) Yes, I have some WIPs for Iray-friendly products and have been testing a LOT. :) I've had to forget a number of things that I thought I knew. LOL
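
    For a rough sense of why map size is the thing to watch, here is a back-of-the-envelope sketch. It assumes uncompressed 8-bit RGBA with a full mip chain, which is an assumption about storage, not a documented Iray format, so take the numbers as ballpark only:

    ```python
    # Approximate VRAM cost of a single texture map, assuming uncompressed
    # 8-bit RGBA plus ~1/3 extra for a full mip chain.
    def texture_mb(width, height, channels=4, mip_chain=True):
        size = width * height * channels          # bytes for the base level
        if mip_chain:
            size *= 4 / 3                         # mip levels add ~one third
        return size / (1024 ** 2)

    print(round(texture_mb(4096, 4096)))  # ~85 MB for one 4k map
    print(round(texture_mb(2048, 2048)))  # ~21 MB for a 2k map
    # Half a dozen 4k maps on one figure is ~0.5 GB before any geometry,
    # which is why tiling and reusing textures pays off.
    ```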

  • Dumor3D Posts: 1,316
    edited December 1969

    One other little hint about cards. If your computer can hold two cards, at least on Windows it seems that system things, and in particular OpenGL, are loaded onto the card that has the monitor plugged into it. If you run two monitors, you might want to see if both can be run from one card. The second card will then gain maybe 300 to 500 MB of free VRAM. This can be the difference between a scene loading or not, especially on 2 GB and 3 GB cards.

    Note that web browsers, email programs, and of course Studio all use OpenGL, so each adds to VRAM use. And I've watched my main system card have an OpenGL crash when it showed 500 MB free. I have a theory that OpenGL may send VRAM use spikes to the card so quickly that my utility does not show the usage, but suddenly the OpenGL video drivers crash out on the computer. Each time this has happened, it was when I did something in a program that used OpenGL during the render... such as opening a new browser tab. Again, this was when pushing the limits of the main card, or the one with the monitor plugged into it.

    At the moment, my normal setup is to run 2 monitors from 2 GTX 660s. In Studio, for Photo Real, I turn off the CPU, turn off both GTX 660s, and turn on the 2 GTX 980s. If I know my scene will fit, I turn on 1 or 2 of the 660s. CPU is good to prevent a crash if the GPUs drop out, but it adds so little time bonus, in spite of being a 6-core i7, that I'd rather keep it for doing things while the render runs.

    Anyway, I hope that helps some of you extend your computers' abilities. :)
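
    If you're not sure which of your cards is carrying the display (and with it the OpenGL overhead), nvidia-smi can report it. A small sketch, assuming the display_active query field is available in your driver's nvidia-smi:

    ```python
    # Show which GPU is driving a display; that card carries the ~300-500 MB
    # of OpenGL/system overhead described above.
    import subprocess

    print(subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,name,display_active,memory.free",
         "--format=csv,noheader"],
    ).decode())
    ```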

  • ZarconDeeGrissom Posts: 5,412
    edited March 2015

    On first cup of coffee, so bound to miss something, lol.

    SimonWM, I am curious about single-card performance as well.

    Thanks, EmmaAndJordi and Dumor3D. I'm CPU-bound here, so I can't test my test chamber GPU-only, tho tiling is heavily used there, with small maps at that, the largest being the color cube, I think. The trick is to turn smoothing OFF in the Surfaces tab when using small maps on things (props, etc., not figures).

    I had taken that website apart, and the linked GeForce website as well, looking at a potential dedicated-system setup (a low-watt display card, with separate dedicated crunching cards). I keep catching hints that the RAM on these cards is going to go up, tho no idea when or any other details. For a dedicated display card, it only needs to be, dare I say, more capable than the 7-year-old card I have, and very sparing on watts, with no fan.
    I just hope the 730 is not out of stock by the time I can click the Buy-now button. :smirk: (counting days, lol)
    (EDIT, and Dumor3D posted again as I was spell-checking, lol)

    Post edited by ZarconDeeGrissom on
  • EmmaAndJordi Posts: 952
    edited December 1969

    On first cup of coffee, so bound to miss something, lol.

    SimonWM, I am curious about single-card performance as well.

    Thanks, EmmaAndJordi and Dumor3D. I'm CPU-bound here, so I can't test my test chamber GPU-only, tho tiling is heavily used there, with small maps at that, the largest being the color cube, I think. The trick is to turn smoothing OFF in the Surfaces tab when using small maps on things (props, etc., not figures).

    I had taken that website apart, and the linked GeForce website as well, looking at a potential dedicated-system setup (a low-watt display card, with separate dedicated crunching cards). I keep catching hints that the RAM on these cards is going to go up, tho no idea when or any other details. For a dedicated display card, it only needs to be, dare I say, more capable than the 7-year-old card I have, and very sparing on watts, with no fan.
    I just hope the 730 is not out of stock by the time I can click the Buy-now button. :smirk: (counting days, lol)
    (EDIT, and Dumor3D posted again as I was spell-checking, lol)

    Beware, there seem to be several 730 cards; two of them have 384 CUDA cores, and the other has 96:

    http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications

    We are now running one with 96 CUDA cores and Iray works, but with more CUDA cores it may be faster. We want to get a more powerful one :)

  • ZarconDeeGrissom Posts: 5,412
    edited March 2015

    On first cup of coffee, so bound to miss something, lol.

    SimonWM, I am curious about single-card performance as well.

    Thanks, EmmaAndJordi and Dumor3D. I'm CPU-bound here, so I can't test my test chamber GPU-only, tho tiling is heavily used there, with small maps at that, the largest being the color cube, I think. The trick is to turn smoothing OFF in the Surfaces tab when using small maps on things (props, etc., not figures).

    I had taken that website apart, and the linked GeForce website as well, looking at a potential dedicated-system setup (a low-watt display card, with separate dedicated crunching cards). I keep catching hints that the RAM on these cards is going to go up, tho no idea when or any other details. For a dedicated display card, it only needs to be, dare I say, more capable than the 7-year-old card I have, and very sparing on watts, with no fan.
    I just hope the 730 is not out of stock by the time I can click the Buy-now button. :smirk: (counting days, lol)
    (EDIT, and Dumor3D posted again as I was spell-checking, lol)

    Beware, there seem to be several 730 cards; two of them have 384 CUDA cores, and the other has 96:

    http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications

    We are now running one with 96 CUDA cores and Iray works, but with more CUDA cores it may be faster. We want to get a more powerful one :)

    Duly noted; I had mentioned that in the beta thread and forgot to here. Yes, look closely at the specs before buying.
    http://www.daz3d.com/forums/viewreply/786546/
    Yes, this 8600GT is beyond pathetic, lol.
    (EDIT)
    The exact card I'm looking at is a 4 GB GT 730 with 384 cores, pictured a few pages back in that beta thread here.
    http://www.daz3d.com/forums/viewreply/785924/

    Post edited by ZarconDeeGrissom on
  • R25S Posts: 595
    edited December 1969

    Needed 58.04 minutes to finish (reached 86% after 1.26 minutes; the remaining 14% needed nearly an hour).

    Used a laptop with Windows 8.1, an NVIDIA GeForce GTX 880M, an Intel Core i7-4700HQ CPU @ 2.4 GHz, and 8 GB RAM.

  • provencial Posts: 84
    edited December 1969

    OMG!!!
    50 minutes 32 seconds! Intel Core i7 @ 3.4 GHz with an AMD Radeon 6700 video card and 16 GB RAM.

  • SimonWM Posts: 924
    edited March 2015

    SimonWM, I am curious about single card performance as well?

    Sure thing:

    EVGA GTX 580 3GB Classified Ultra = 7 minutes 50.5 seconds
    EVGA GTX 580 3GB Classified = 8 minutes 7.47 seconds

    Plus a new benchmark for me:

    Both GPUs plus the CPU, optimized to run faster in my ASUS BIOS = 4 minutes 10.32 seconds

    (Not a great help, but the faster-running CPU was able to shave off some seconds when optimized)

    Post edited by SimonWM on
  • SimonWM Posts: 924
    edited March 2015

    OptiX Prime Acceleration seems to make a BIG DIFFERENCE!!! New benchmark:

    CPU + Both GPUs + OptiX Acceleration = 3 minutes 6.98 seconds

    It pushed my benchmark UP by over 1 minute!!! That's crazy!!! All my benchmarks have been full scene to completion.

    Post edited by SimonWM on
  • ZarconDeeGrissom Posts: 5,412
    edited March 2015

    SimonWM said:
    OptiX Prime Acceleration seems to make a BIG DIFFERENCE!!! New benchmark:

    CPU + Both GPUs + OptiX Acceleration = 3 minutes 6.98 seconds

    It pushed my benchmark UP by over 1 minute!!! That's crazy!!!
    Completely opposite to what I found for my computer on the old beta (4.8.0.4), lol.
    http://www.daz3d.com/forums/viewreply/783738/
    All my benchmarks have been full scene to completion.

    is confused again, lol. ("O", you're referring to that sphere 8 & 9 thing, lol)
    Post edited by ZarconDeeGrissom on
  • SimonWM Posts: 924
    edited March 2015

    SimonWM said:
    OptiX Prime Acceleration seems to make a BIG DIFFERENCE!!! New benchmark:

    CPU + Both GPUs + OptiX Acceleration = 3 minutes 6.98 seconds

    It pushed my benchmark UP by over 1 minute!!! That's crazy!!!
    Completely opposite to what I found for my computer on the old beta (4.8.0.4), lol.
    http://www.daz3d.com/forums/viewreply/783738/
    All my benchmarks have been full scene to completion.

    is confused again, lol. ("O", you're referring to that sphere 8 & 9 thing, lol)

    Hmm, maybe OptiX Prime Acceleration doesn't help as much with every card architecture. I'm on a Fermi card, which is an older architecture, but I've always heard that NVIDIA crippled their CUDA after the Fermi architecture and that newer cards are never as fast in 3D rendering, which these results seem to confirm. After getting these numbers I won't waste my money on a GTX 980. What I'm wondering about is the new Maxwell Titan. I'm dying to see someone benchmark those cards, but I don't even know if they are on sale yet.

    The comment about "full scene to completion" has to do with people posting benchmarks after deleting spheres; I don't know why anyone would do that. They are cheating the benchmark and those numbers are not valid, nor are benchmarks where the scene only reaches 90%. The benchmarks posted should be with the scene untouched until the rendering reaches 100%; otherwise values start getting hard to compare.

    From this thread, the budget-priced 970 results surprised me. The card seems to have an insane number of CUDA cores at a lower speed than the 980, but then all these cards can be overclocked. I have never overclocked a card, but I'm very curious about the results an overclocked 970 could reach.

    Post edited by SimonWM on
  • SickleYield Posts: 7,626
    edited March 2015

    I went back and redid my first scene and setup (four cards: two GT 740s and two GTX 980s, CPU turned off) with OptiX Prime Acceleration on. It finished in 2 min 18 sec vs. 3 min 30 sec.

    That's a third of the render time cut off.

    So OptiX is worth it for some setups, at least.

    Post edited by SickleYield on
  • JohnDelaquiox Posts: 1,184
    edited March 2015

    CUDA device 0 (GeForce GTX 760): 5000 iterations, 29.824s init, 611.082s render

    Finished Rendering
    Total Rendering Time: 10 minutes 46.88 seconds

    I have a GTX 760 and it does not seem to be cutting it for Iray at all, so I need to upgrade.

    Choices are

    Refurbished GTX 780 Ti SuperClocked 3 GB at $380.00
    Used GTX 780 Ti SuperClocked 6 GB at $450.00
    Used GTX 580 3 GB at $230.00
    Used GTX Titan 6 GB at $500.00

    Post edited by JohnDelaquiox on
  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    I have a GTX 760 and it does not seem to be cutting it for Iray at all, so I need to upgrade.

    Choices are

    Refurbished GTX 780 Ti SuperClocked 3 GB at $380.00
    Used GTX 780 Ti SuperClocked 6 GB at $450.00
    Used GTX 580 3 GB at $230.00
    Used GTX Titan 6 GB at $500.00

    The 3 GB one may get you by, tho the display drivers and other stuff eat up some of that. I'd go for the most RAM you can get, then the better-performing one.

    Perhaps, if you can, keep the old card to drive the display(s), to spare RAM on the better card, as the display also eats up RAM.

  • JohnDelaquiox Posts: 1,184
    edited March 2015

    Using CPU and GPU, I only saved about a minute.

    Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 760): 4393 iterations, 25.929s init, 532.683s render
    Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CPU (7 threads): 607 iterations, 22.807s init, 536.154s render

    Finished Rendering
    Total Rendering Time: 9 minutes 21.30 seconds

    The idea is that I would take the GTX 760 and put it in my home machine, so I would have just the one card to replace the two 9800 GTs that I have in there.

    So you think the GTX 780 B would work well for Iray?

    Post edited by JohnDelaquiox on
  • ScarletX1969 Posts: 587
    edited December 1969

    Is it safe to assume that most of you who are doing benchmarks are more than hobbyists and enthusiasts? I'm seeing some beef on these pages when it comes to rigs, where I know some of the components are pricey. I can't afford half of these cards I'm seeing being used. LOL. Dark Angel's rig is about the closest mine would come to, so I will test this scene overnight and see what happens. I do have a network of 4 machines at home in a render farm, so I'm waiting on NVIDIA to release the Iray Server to do network rendering.

  • SickleYield Posts: 7,626
    edited December 1969

    dsexton72 said:
    Is it safe to assume that most of you who are doing benchmarks are more than hobbyists and enthusiasts? I'm seeing some beef on these pages when it comes to rigs, where I know some of the components are pricey. I can't afford half of these cards I'm seeing being used. LOL. Dark Angel's rig is about the closest mine would come to, so I will test this scene overnight and see what happens. I do have a network of 4 machines at home in a render farm, so I'm waiting on NVIDIA to release the Iray Server to do network rendering.

    This is what I do for a living, and so do Dumorian and SimonWM, among others (and Spooky, who knows all ;) ). $600+ worth of graphics cards is a very legitimate (and tax-deductible) expense for a full-time DAZ Published Artist, or anyone else who has 3D as their only job.

    Before Iray was a blip on the horizon I knew GPU-based unbiased rendering was going to figure prominently in my future; I just expected it to be Octane or Lux at the time. I suspect a lot of us have been seeing the writing on the wall.

  • JohnDelaquiox Posts: 1,184
    edited December 1969

    dsexton72 said:
    Is it safe to assume that most of you who are doing benchmarks are more than hobbyists and enthusiasts? I'm seeing some beef on these pages when it comes to rigs, where I know some of the components are pricey. I can't afford half of these cards I'm seeing being used. LOL. Dark Angel's rig is about the closest mine would come to, so I will test this scene overnight and see what happens. I do have a network of 4 machines at home in a render farm, so I'm waiting on NVIDIA to release the Iray Server to do network rendering.

    I am currently running an

    AMD FX 8150 3.60 GHz, upgrading soon to an 8350 4.0 GHz
    MSI 990FX motherboard that can do either CrossFire or SLI
    32 GB DDR3 Team Vulcan memory (best RAM I have ever used)
    A 1 TB Seagate SSHD; soon I will be upgrading all of my drives to SSHDs
    A single GTX 760 2 GB
    I have an older Cooler Master case which I am looking to upgrade to either a Thermaltake Mozart TX (an older case, but still my favorite) or two Thermaltake Core X9 cases stacked into either a parallel rendering rig or one machine for media and the other for rendering.
    The video card, as I have mentioned, is a GTX 760 2 GB, which actually seems to be doing OK.

    Everyone has tested either two GTX 970 cards or two GTX 980 cards, but no one has tested two of the new GTX 960 4 GB cards. I think I may.
    The GTX 960 4 GB is roughly $250.00 when you account for shipping and taxes, if they charge them.

  • SimonWM Posts: 924
    edited March 2015

    dsexton72 said:
    Is it safe to assume that most of you who are doing benchmarks are more than hobbyists and enthusiasts? I'm seeing some beef on these pages when it comes to rigs, where I know some of the components are pricey. I can't afford half of these cards I'm seeing being used. LOL. Dark Angel's rig is about the closest mine would come to, so I will test this scene overnight and see what happens. I do have a network of 4 machines at home in a render farm, so I'm waiting on NVIDIA to release the Iray Server to do network rendering.

    This is what I do for a living, and so do Dumorian and SimonWM, among others (and Spooky, who knows all ;) ). $600+ worth of graphics cards is a very legitimate (and tax-deductible) expense for a full-time DAZ Published Artist, or anyone else who has 3D as their only job.

    Before Iray was a blip on the horizon I knew GPU-based unbiased rendering was going to figure prominently in my future; I just expected it to be Octane or Lux at the time. I suspect a lot of us have been seeing the writing on the wall.

    What SickleYield said. I built my system for Octane in November 2012 so it was ready for Iradium.

    Post edited by SimonWM on
  • ZarconDeeGrissom Posts: 5,412
    edited March 2015

    dsexton72, like the others said, enthusiasm or profession is the driving force behind all the god-cards (I call most of them Watt-hogs, lol).

    I built my system for CAD & DAW work over a decade ago (2005) and upgraded as necessary. I do have other constraints, so my system is not sporting a Watt-hog card. My interest for the time being is in fanless cards using less than fifty watts. There is a big difference between recording music in a studio and attempting similar on the flight deck of a carrier during flight ops, lol.
    (I was on the ship on the left; trust me, they get a tad noisy when launching planes, lol)

    It would be nice to see some benchmarks from less capable cards. I myself will probably be getting a 384-core GT 730 eventually. There is a more lobotomized variant of the 730 out there that I'm not interested in purchasing, tho seeing its score would be at least beneficial to Dumor3D (the Dumor3D CUDA core theory) and myself.

    As for VCA setups, I think one would chew through this bench rather quickly, with network bandwidth and latency probably being the biggest part of any degradation in performance. How many K6000 cards per VCA? [goes looking it up], lol.

    [Attached image: USS Camden (AOE-2) conducts a replenishment at sea with USS Carl Vinson (CVN-70)]
    Post edited by ZarconDeeGrissom on
  • namffuak Posts: 4,040
    edited December 1969

    dsexton72, like the others said, enthusiasm or profession is the driving force behind all the god-cards (I call most of them Watt-hogs, lol).

    It would be nice to see some benchmarks from less capable cards. I myself will probably be getting a 384-core GT 730 eventually. There is a more lobotomized variant of the 730 out there that I'm not interested in purchasing, tho seeing its score would be at least beneficial to Dumor3D (the Dumor3D CUDA core theory) and myself.

    As for VCA setups, I think one would chew through this bench rather quickly, with network bandwidth and latency probably being the biggest part of any degradation in performance. How many K6000 cards per VCA? [goes looking it up], lol.

    Back on page 2 or 3 I posted my results - with a 384 core 4 GB GT 740 that cost about $120.

    CPU only (I7, 6-core 3.5 GHz) 23 minutes 26.54 Seconds, 4747 iterations
    GPU only 52 minutes 55.48 Seconds, 4723 iterations
    CPU and GPU 18 Minutes 11.26 Seconds, 3335 iterations CPU, 1419 iterations GPU

    These numbers are from the reduced scene, without the two spheres, because the GPU-only render didn't finish the original scene before the 2-hour time limit.
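
    Converting those results to throughput makes the combined number easier to read; a quick sketch using the figures above:

    ```python
    # Iterations per second for each run reported above. The combined CPU+GPU
    # rate lands near the sum of the individual rates, which is why the mixed
    # run finishes fastest even though the GT 740 alone is slow.
    runs = {
        "CPU only":  (4747, 23 * 60 + 26.54),
        "GPU only":  (4723, 52 * 60 + 55.48),
        "CPU + GPU": (3335 + 1419, 18 * 60 + 11.26),
    }
    for label, (iterations, seconds) in runs.items():
        print(f"{label}: {iterations / seconds:.2f} it/s")
    # CPU only:  3.38 it/s
    # GPU only:  1.49 it/s
    # CPU + GPU: 4.36 it/s (a bit under 3.38 + 1.49, but in the ballpark)
    ```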

  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    namffuak said:
    Back on page 2 or 3 I posted my results - with a 384 core 4 GB GT 740 that cost about $120.
    CPU only (I7, 6-core 3.5 GHz) 23 minutes 26.54 Seconds, 4747 iterations
    GPU only 52 minutes 55.48 Seconds, 4723 iterations
    CPU and GPU 18 Minutes 11.26 Seconds, 3335 iterations CPU, 1419 iterations GPU
    These numbers are from the reduced scene, without the two spheres, because the GPU-only render didn't finish the original scene before the 2-hour time limit.
    I couldn't remember and was looking through that again, then got distracted with an RRRR prize, lol.
    Yea, the time limit is pointless unless you can nab the point where it finished and at what convergence. I don't always have the patience for that, lol. Thanks.
  • SimonWM Posts: 924
    edited December 1969

    You can find the time your render took by going to the log. It will tell you in minutes and seconds.
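
    For anyone running a lot of benchmark passes, a minimal sketch like this pulls the last reported time out of the log automatically. The line format matches the log excerpts posted earlier in this thread; the log file path here is hypothetical and varies by install:

    ```python
    # Grab the most recent "Total Rendering Time" line from a Daz Studio log.
    import re

    def last_render_time(log_text):
        matches = re.findall(r"Total Rendering Time: (.+)", log_text)
        return matches[-1] if matches else None

    # Hypothetical log location; use Help > Troubleshooting > View Log File
    # to find yours.
    with open("log.txt", encoding="utf-8", errors="replace") as f:
        print(last_render_time(f.read()))
    ```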
