© 2026 Daz Productions Inc. All Rights Reserved.
Comments
OK, now we both outed ourselves as old geezers.
AMD is cheaper because Nvidia is beating them at performance at the moment. When the performance is near the same, the prices are the same.
Early adopters and those who want the latest and greatest will always pay more.
CUDA and AMD Stream are both RISC instruction sets. AMD is beating nVidia on price because the number of instructions the Stream processors can execute is much smaller than CUDA's, so the transistor count per core is lower. It costs less to make each Stream core, and AMD can pack a huge number of them onto a die.
CUDA is approaching CISC, but not quite there yet. The more complex instructions come with larger transistor counts per core, and thus fewer cores fit on a die. The controller for CUDA also has to be more complex due to the higher number of states the cores require.
So, in some cases the number of Stream cores running more instructions to do the same work as a single CUDA instruction can outrun the equivalent CUDA. This cannot happen in every case, nor can it happen consistently. For "general purpose computing" there are simply operations that the AMD Stream cannot do at all, which is why nVidia can charge a premium. On those operations where Stream can hyper-parallelize the work, CUDA loses badly.
Two different paradigms used for similar jobs -- at least in this industry.
Kendall
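The core-count vs. instruction-complexity tradeoff Kendall describes can be sketched with a toy throughput model. Everything here is hypothetical for illustration: the core counts and the "4 instructions to emulate a complex op" cost are made-up numbers, not real AMD or Nvidia specs.

```python
# Toy model: many simple cores vs. fewer complex cores.
# All numbers are illustrative, not actual hardware specs.

def chip_throughput(cores, instructions_per_op):
    """Operations completed per clock across the whole chip."""
    return cores / instructions_per_op

# "Wide" design: many simple cores; a complex op must be emulated in 4 steps.
wide_simple = chip_throughput(cores=2048, instructions_per_op=1)
wide_complex = chip_throughput(cores=2048, instructions_per_op=4)

# "Narrow" design: fewer, richer cores; a complex op takes 1 instruction.
narrow_simple = chip_throughput(cores=1024, instructions_per_op=1)
narrow_complex = chip_throughput(cores=1024, instructions_per_op=1)

print(wide_simple > narrow_simple)    # True: wide wins on simple, parallel work
print(wide_complex < narrow_complex)  # True: narrow wins when ops are complex
```

Which design wins depends entirely on the workload mix, which matches the point above: neither paradigm dominates in every case.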
..well the Tesla P100 has 16 GB HBM memory.
...yes, but will there be a plugin or RIB utility to send scenes from Daz, like Lux, Octane, and the standalone 3DL have, and will it be usable with Radeon cards?
...this is my feeling about the Titan-P. Not a lot more rendering horsepower to justify the extra cost, which is probably why Maxwell Titan-Xs won't be coming down in price even though they have been discontinued. Still 12 GB, still GDDR5 instead of HBM 2.
In spite of the extra cost, the Quadro P5000 sounds like a better deal in comparison as it has double the memory of its predecessor.
As to dual 1080s, they may be faster, but there is still the memory ceiling issue for me. Once your process dumps to CPU, all those CUDA cores mean squat.
Well, I'm hoping that since it's open source, the ProRender API can have a DAZ render COM API layer added over it to pipe the relevant info back & forth between DAZ Studio and ProRender.
..doing some quick research, it seems that Nvidia still has the edge on GPU rendering, which means either $1,000s for high-end GPUs with a lot of memory or glacial CPU mode.
Another option I discussed with a friend last night is online render farms.
That is supposed to change this winter, is what I heard, but you know how these claims go...
Tesla P100 Mezzanine 16GB 1.4Gb HBM2, Tesla P100 16GB 1.4Gb HBM2, Tesla P100 12GB HBM2
...looked at the Pascal Tesla line and all have either 16 or 12 GB HBM2. So what is the 1.4Gb rating?
Also, what is with the full-length form factor? I thought one of the benefits of HBM 2 was supposed to be a more compact card.
This is why I am reluctantly (and seriously reluctantly) considering the new Titan. (Well I would, but my credit card has hidden itself away.)
Indeed, I'll believe this when I see the independent stats; just like I'll believe Iray performance on the new cards once the stats support the claims.
Let's face it, a company doesn't have to be corrupt, only make a mistake; presuming that is what happened here. The mistake over directly addressable memory is going to cost Nvidia a pretty penny.
http://uk.ign.com/articles/2016/07/29/nvidia-settles-gtx-970-lawsuit-owes-buyers-money
That is the memory clock of HBM2 at 1.4 Gbps. The Tesla P100 16GB has a max GPU clock speed of 1300 MHz, 3584 CUDA cores, and 250 watts.
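The per-pin rate and the bus width together give the card's memory bandwidth, which is where the headline GB/s figure comes from. A quick worked example (assuming the P100's commonly cited 4096-bit bus, i.e. four 1024-bit HBM2 stacks):

```python
# Deriving memory bandwidth from the "1.4Gb" per-pin rate:
# bandwidth (GB/s) = bus width in bits * per-pin data rate (Gbps) / 8
bus_width_bits = 4096     # four HBM2 stacks at 1024 bits each (assumed)
data_rate_gbps = 1.4      # the per-pin transfer rate quoted above
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)     # 716.8 -> roughly the ~720 GB/s class figure quoted for the P100
```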
Of course, if money is not an issue, you don't need to settle for less VRAM.
OK yeah, yours are overclocked. I haven't overclocked for many years now, I'm only interested in stock performance.
I WISH SLI was all that it's cracked up to be. I play games and render. I have two Titan X's. It's actually laggier with SLI enabled than using a single card.
It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal. And with SLI being as buggy as it currently is with my two Nvidia Titan X's, it's barely any better for games.
You aren't supposed to use SLI when rendering in Iray.
My stock Titan Xs run at 1377 MHz when not overclocked; they have a different BIOS than the Founders Edition, so any boost is an actual OC. Normally I downclock all my cards to 1277 MHz, as there is not much difference in rendering when I push to 1500 MHz, and of course a base clock of 1400 MHz will be a better choice for rendering in Iray than a card relying on boost. For games, base or boost really doesn't matter, but there is a difference when rendering. I don't know what you are doing, but you should have super speed with your two cards. I hope you are not using them in SLI while rendering, as that is not what you should do. I use 2x Titan X on a daily basis and can play my animated timeline frames in real time in the Iray viewport, so there is definitely something wrong. You should get no less than a 50% boost with the second GPU; my max was 76% with the second GPU on the 4.9.2.70 DS build.
Yeah well bugged SLI fixed that problem for me.
I was always having to turn SLI on and off switching between gaming and rendering. Well, with the Titans I don't have to bother anymore, because things run WORSE with SLI enabled vs. one card.
The reason games sometimes lag with SLI is that not all games are fully optimized for alternate frame rendering. Ideally, AFR should boost frame rates drastically, but that almost never happens, since there's a lot going on between each frame. DX11 is also a major limitation on multi-video-card setups, so we should *theoretically* see better performance once DX12 is more widely adopted.
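The effect of inter-frame dependencies on AFR can be sketched with a toy model. This is purely illustrative (the frame cost and dependency fraction are made-up numbers, not measurements of any real game): the more of each frame's work that must wait on the previous frame, the closer AFR gets to single-GPU performance.

```python
# Toy model of alternate frame rendering (AFR) on two GPUs.
# Illustrative only; numbers are hypothetical.

def afr_frame_time_ms(frame_cost_ms, gpus=2, dependency_fraction=0.0):
    """Effective per-frame time under AFR.

    dependency_fraction: portion of each frame's work that must wait
    on the previous frame (0.0 = fully independent frames).
    """
    # Independent work overlaps across GPUs; dependent work serializes.
    independent = frame_cost_ms * (1 - dependency_fraction) / gpus
    serial = frame_cost_ms * dependency_fraction
    return independent + serial

print(afr_frame_time_ms(16.7))                           # ideal case: ~8.35 ms, a 2x speedup
print(afr_frame_time_ms(16.7, dependency_fraction=0.5))  # half the work serializes: ~12.5 ms
print(afr_frame_time_ms(16.7, dependency_fraction=1.0))  # fully dependent: no gain at all
```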
Totally agree !
And how much does that cost again? That, and any other Tesla, has a very limited production run. With a more limited production they can add HBM2. Gaming cards are sold in much, much larger volumes, and that supply has to be there to meet the demand, including for a Titan-level card. Nvidia's original plans for Pascal clearly stated HBM, but that did not happen. I very highly doubt Nvidia crammed G5X in there just to save costs...it is a Titan, after all, and they are still asking $1200 for this thing. Besides that, AMD hasn't pulled anything out with HBM2 either. The $1500 Pro Duo is using the first-gen HBM that the Fury Nano it was based on uses. At this rate, we won't see HBM2 in a consumer card until 2017 at the very earliest. If Nvidia is really looking to release Volta in 4Q 2017, it may show up there.
...just like with the 8 GB Maxwell 970/980 hinted at over a year ago.
Maybe just going with a render farm service, like my friend and I discussed the other night, would be better than sinking $1,000s into high-end GPUs for Iray rendering.
Also, I thought Daz was planning to do that with the "Cloud (Beta)" option which shows up in the Advanced Render Settings tab.
I don't know, I got the impression that it was only with Titans.
Before those, I had two EVGA GTX 970s. SLI ran great (and I kept turning SLI on & off switching between gaming & rendering). I also had some ATI R9 290s; those ran great too. I forget what I had before that, but I'm pretty sure I've been exclusively running SLI for gaming since the 3DFX days. And I keep my drivers up to date, except for a while when Octane wouldn't work with 2 Titans from some driver revision forward for many months. Could be fixed by now for all I know; I gave up on Octane.
Maybe it's the games; I only play a few now. IIRC, Killing Floor 2 actually runs well in SLI. Battlefront & FO4 do not.
For any Nvidia card, they say to turn SLI off, as it can significantly slow down Iray.
http://irayrender.com/fileadmin/filemount/editor/PDF/iray_Performance_Tips_100511.pdf
Not all games are optimized for SLI. However, when I had 3 Titan X's I could use SLI, but with 4 I can't. I tried different kinds of SLI bridges and nothing; the Nvidia Panel does not recognize the bridges with 4 cards.
Someone told me a couple of days ago that they can run SLI with Iray by connecting the bridge and selecting only 1 GPU under the Iray rendering settings, but I tried it before and it was not working and very slow. With SLI, Iray will double the raytraced faces in rendering, which is why it is not recommended.
-----------
And on the side, a little note:
Also, mixing different architectures with Iray, like Kepler, Maxwell, or Pascal cards together, will not work optimally, since each architecture requires different display drivers for optimal performance with CUDA. So unless the cards are from the same series and use the same display drivers, the other cards should not be used when rendering with Iray. Nvidia does not recommend it for a good reason: it will mess up your GPU scaling and slow everything down, and performance will not be optimal. I got this info from Nvidia Iray programmers, since a lot of people don't believe it is true and think they can stack up GPU CUDA cores no matter what they throw into the PC.
And if you want to use an older card with a different architecture to drive the monitor, you should remove all cards from their slots, put in the display card only, start the PC, install the correct driver, and shut down; then put the newer card in its slot, start the PC again, and install the proper driver for the second card. That way each card has its own proper driver to perform optimally, and the older card can correctly run the monitor alongside the newer card, which is used for rendering only. This little trick is not approved by Nvidia, but it works if you know what you are doing; however, if you select both cards for the same task, you may lose the optimal performance of the newer card.
It does not turn SLI off automatically. In simple words, it renders the same thing twice at the same time instead of stacking up GPU performance for faster scaling and rendering.
As I said, someone claimed that selecting only 1 GPU from the SLI pair works OK, but I can't confirm that; it did the same thing for me, so SLI definitely needs to be set off.
Yep, it's not automatic. Scott must turn it off himself.
"Apple types" willing to overpay? I think in the interest of an informative thread you should stay on topic.
Yes, I know. A few times I said I had to switch SLI on & off going from gaming to rendering. In other words, I enable SLI when gaming and disable SLI when rendering.
I wish it were that simple.
But my problem is the other way around. SLI sucks for gaming; I have no issues with two Titans (SLI off) rendering. Basically, there seems to be some bug where once in a while my rendering goes CPU-only (and I have CPU rendering unchecked). When that happens, no matter how many times I render it won't use the GPUs; I need to restart Daz Studio, then it works again.
Titan X Pascal available now!
Gaming performance is impressive. With Maxwell Titan X's being unavailable, I'm tempted to pick up the Pascal Titans. It's been a few months now; has anyone heard when Iray rendering will be supported?
http://www.pcworld.com/article/3102877/components-graphics/tested-nvidias-new-titan-x-is-absolutely-decadant-in-sli.html
I could hold out for SLI to be fixed with Battlefront and my Maxwell Titans but I'm not holding my breath.
Current word from nVidia is Late September for Pascal-compatible Iray.