Comments
Yeah, this just completely blew up my whole idea of running dual 1070s. For the price of two 1070s, I could just get the 1080ti.
Though the idea of a 10GB 1070 is kind of appealing as well. That could change things.
It's crazy. We were stuck at 4GB of VRAM for what seemed like FOREVER. Now you can get a 1080ti for $700 with 11GB, and maybe a 1070 with 10 down the road??? Though that doesn't make sense: why would they make the 1080 so undesirable? With a 10GB 1070 below and the 11GB 1080ti above, the 8GB 1080 would be a pretty lame duck.
Vega is set to launch in April. It's not a ghost product stuck in a never-ending loop of the "near future." While we still don't really know what Vega is going to be, AMD came out swinging with Ryzen in a way that had to scare the pants off Nvidia. AMD just completely uprooted the CPU market, and I hope they do very well in bringing that market into balance. I have no dog in the race other than myself, but Intel and Nvidia both need a serious serving of humble pie. This release shows what Nvidia can do when they are motivated to do something. Intel is fumbling about, but all of a sudden they are talking about higher-core-count chips.
Even if Vega is not the fastest, it will be the cheapest. Since Nvidia moved first, AMD now has a chance to weigh its options. Would they release a 1070 competitor for $250? Or a 1080 competitor for $400? That would blow the market wide open, even if they don't have a 1080ti competitor yet.
How did the 10GB 1070 get brought up? No rumors have pointed to such a move, and historically there has never been an xx70 Ti model. Even if a 10GB 1070 were released, this 1080ti would be a better buy than two of those hypothetical 1070s.
From the lone Vega demonstration we've seen so far (which was provided by AMD, no less, so big grain of salt there), Vega is supposed to be about 10% faster than the 1080. The 1080ti is 35% faster according to Nvidia. Once driver optimization is accounted for, it's possible the Vega card ends up competing more with the 1080ti than with the 1080, so we might even see lower prices pretty soon. I think we should also expect lower prices from AIB partners; $649 seems reasonable for the lower-end cards from Asus, MSI, EVGA and the like.
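For a rough sense of the gap being talked about here (treating both vendor figures as marketing claims, not measurements), a minimal sketch:

```python
# Back-of-the-envelope comparison using the figures quoted above.
# Both numbers are vendor claims, so treat the result as a rough estimate only.
gtx_1080 = 1.00               # baseline
vega     = gtx_1080 * 1.10    # "about 10% faster than the 1080" (AMD's demo)
gtx_1080ti = gtx_1080 * 1.35  # "35% faster" (Nvidia's claim for the 1080 Ti)

# ~1.23x: the gap driver optimization would have to close
print(f"1080 Ti vs. Vega: {gtx_1080ti / vega:.2f}x")
```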
It would be amazing for us GPU renderers if AMD can disrupt the GPU market the same way they are disrupting the CPU market with Ryzen. I just bought some RX 480s to show them some support, and I'll be building a Ryzen system to put them in. If Vega pans out, I think sub-$600 1080 TIs are possible come May.
The GTX 1080 will now be $499. The other information is getting confused: the RAM speed is increasing, not the RAM amount on the existing cards. The 11 GB on the 1080ti is likely due to competitive reasons (leaving the Titan with the most RAM), and possibly also to chip harvesting, where chips with a defect in a ROP or memory-bus partition that didn't make the grade for the Titan X can still be used for the 1080ti.
http://www.anandtech.com/show/11173/nvidia-partners-to-begin-selling-geforce-gtx-1080s-gtx-1060s-with-faster-memory
What Richard said /nod
Did any of you read how the Vega GPUs are designed? They can stream data from storage like SSDs, so the out-of-memory problems with DAZ renders should be a thing of the past if DAZ used the open-source PBR renderer AMD released a few months back. At any rate, software like Unity, which is getting the Octane renderer added, could use the new Vega cards to the fullest with this storage-streaming technology.
It uses the same Pascal architecture as the other 10-series cards, so this shouldn't be an issue.
Cheers,
Alex.
I'm old enough to remember the jokes about MS-DOS: never buy the .0 version, always wait for the .1.
Cheers,
Alex.
You'd have to convince Daz to adopt such a renderer. IMO, they are tied to Iray by a pretty firm contractual noose. The best we can hope for is for someone else to make a bridge/plugin, which would probably never be sold in this store or receive support from Daz.
For that reason, whenever I release a new app, it always starts at 1.1.
Greetings,
So... any thoughts on what the right direction here would be? I have a 1080 in my render box right now, and I'm considering moving all the components to a larger case so I can put a second card into it. Should I splurge and get a 1080Ti, use it as my primary video card as well as an Iray compute card, and live with the 8GB scene limitation (which has never bitten me)? Or get a second 1080 at the $500 price point and live with the ~7.5GB max scene size?
Is there any problem having a mismatch of cards, like a 1080ti and a 1080 together in the Iray rendering setup?
-- Morgan
...again, with only 1 GB less memory and the same core count, I see little reason why anyone would shell out $500 more for the GP102 Titan X over a 1080Ti.
It would appear that Nvidia has undermined their top-end consumer card unless there are plans in the works that they aren't talking about (like a sizeable memory boost and/or a move to HBM2; remember, they quietly doubled the Maxwell M6000's memory from 12 to 24 GB but didn't increase the price).
I think my 980ti will hold out till the 1180 comes out. I'd only drop the money on a 1080ti if it was at least twice as fast...which it obviously isn't.
The 1080 will surely drop in price now for anyone who doesn't already have a good GPU so that's neat.
Probably 40% faster for renders (maybe even more), while power consumption is still lower. I'd call that great progress from one generation to another (unlike Intel, which progresses about 5% each time).
So with two of them, optimised, you can expect roughly 3 times the performance for a reasonable investment.
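Just to sanity-check that estimate (the 40% figure and near-linear multi-GPU scaling in Iray are both assumptions here, not measurements):

```python
# Rough estimate only: assumes ~40% per-card gain over the previous generation
# and near-linear scaling across two GPUs in Iray.
per_card_gain = 1.40    # one new card vs. one previous-gen card
cards = 2

estimated_speedup = per_card_gain * cards   # vs. a single previous-gen card
print(f"Estimated speedup: {estimated_speedup:.1f}x")  # ~2.8x, i.e. roughly 3x
```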
I feel like the Titan X is just the bleeding edge that they sell for a while but don't really want to sell a lot of to begin with. It wasn't even marketed at gamers; they sold it primarily as deep-learning hardware and allowed no third-party versions. It's also a bit of clever marketing if you ask me: look at this unaffordable card for a few months, and now you can have that performance for way less. Aren't we great?
Hi, I have a few questions that maybe someone in here can answer. I was lucky enough to pre-order 2x 1080TI cards from Nvidia's site during the one-hour window they were on sale the other day. I currently have 2x 980TI cards, and my motherboard supports 3 cards. Two slots are 16x, but they drop to 8x when two cards are used at the same time. The third slot is a 4x slot. I know that Daz 3D will use all 3 cards and that it is recommended to disable SLI. What I don't know is whether I can use the 980TI with my 1080TI's when I am gaming. If I can't use all 3 cards in gaming, can I use a bridge for the two 1080TI cards in SLI and then leave the 980TI by itself without it causing issues in gaming? Can I use the third card as a dedicated physics processor, or do all 3 cards need to be the same series? Do I need to buy a 3-way bridge, or should I leave the 980TI by itself?
I am pumped up waiting for these cards to ship. Currently, with my 2x 980TI cards, I have about 5,600 CUDA cores, but with the 2x 1080TI cards and one of my 980TI cards combined, it should push my system up to roughly 10,000 CUDA cores. I am hoping to see a major gain in my rendering times. I don't know how much to expect, but it would be awesome if I could cut my times in half. I won't get my hopes up though; I know the third card doesn't scale well and the 4x slot might limit it, but I hear the CUDA cores in the 1080TI may be faster than the ones in my 980TI. Either way, I will be raising my 6GB video RAM limit to 11GB, which will give me some serious breathing room. Since the 1080 is now supported in Daz 3D, do you guys think the 1080TI will work, or will I have to wait for a patch?
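The published core counts do line up with those figures, though core count alone doesn't capture Pascal's higher clocks, so take it only as a rough capacity comparison:

```python
# CUDA-core counts per card from the published specs:
# GTX 980 Ti = 2816, GTX 1080 Ti = 3584.
CORES = {"GTX 980 Ti": 2816, "GTX 1080 Ti": 3584}

current = 2 * CORES["GTX 980 Ti"]                         # existing rig
planned = 2 * CORES["GTX 1080 Ti"] + CORES["GTX 980 Ti"]  # after the upgrade

print(f"2x 980 Ti:              {current} cores")  # 5632 (~5,600)
print(f"2x 1080 Ti + 1x 980 Ti: {planned} cores")  # 9984 (~10,000)
```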
I'm running a 980 TI and a 1080 and not seeing any issues. I'd expect a 1080 and a 1080 TI to play nicely together.
Lucky you! The entire pre-order sold out almost immediately. You can now be "notified" when cards are available.
You cannot mix GPU architectures in SLI but you can SLI your two new TI cards. You should be able to dedicate the 980ti as a physics card.
How you set up your cards for DAZ depends on how much you render vs. game. If you game more, I would connect your monitor(s) to one of the 1080ti cards in SLI and use the 980ti just for Iray/physics. Just keep in mind that the 980ti will drop out of a render if the scene won't fit on it; a scene that requires more than 6GB of VRAM will render on just the 1080ti's. If you render more, I would use the 980ti to drive your monitor, which lets you do other things while the 1080ti's are rendering.
I had two GTX 970s in my rig and bought a Titan X(P) just last month (bad timing cost-wise). Luckily, I was originally going to buy two but thought "let's first try one," which turned out to be a good move cost-wise. So now I run a GTX 970 and a Titan X(P). The 970 drives my monitor. The Titan X(P) has about twice the rendering performance of my dual 970s. I'm planning on adding another Titan X(P) if the price drops, or a 1080ti.
A word about noise and performance: the 1080ti's should perform about the same as the Titan X(P), which is pretty amazing. The cards seem to auto-overclock depending on the card's temps, so even though the specs say a 1531 MHz boost clock, in reality you'll get MUCH higher, even on air. The key is to define your own fan profile in something like EVGA's Precision XOC that gets the blower fan going when temps hit around 35C. From 0 to 35C it's 22% (about 1,100 rpm and pretty much inaudible); at 35C I've set 50%, 65% at 65C, and 100% at the 84C limit. If you're used to non-reference-design GPUs that use large heat pipes and multiple fans, you'll be in for a noise surprise. Even when rendering flat out, my 970s only required the fans to go to 40% (almost inaudible) to keep the temps at 53C.
The Titan X(P) can render well above 1800 MHz without adding an overclock, so long as you keep your temps down. In the render below, the Titan's blower fan was running at 60% and was making the normal "shhhhh" sound; it was about as loud as I would tolerate for any length of time. In a well-ventilated case you can get the card into the 1900 MHz range, but the blower will run closer to 70%. The GPU does an amazing job of removing the heat; the air coming out of the GPU grill at the back of the case is like a blow-dryer in its intensity and heat. The solution to the noise is of course to water-cool, which I plan on doing. It will also allow you to get the GPU up past 2000 MHz, approaching 2100. The other drawback to running an overclock, water cooling or not, is the amount of heat that will be dissipated into your room: it will be significant.
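As a sketch, the fan profile described above boils down to a set of temperature/duty-cycle points like this (Precision XOC's actual profile format differs, and whether it ramps or steps between points is an assumption; this is only an illustration of the settings):

```python
import bisect

# Fan profile points from the description above: (GPU temp in C, fan duty %).
FAN_PROFILE = [(0, 22), (35, 50), (65, 65), (84, 100)]

def fan_percent(temp_c: float) -> int:
    """Return the duty cycle of the highest profile point at or below temp_c."""
    temps = [t for t, _ in FAN_PROFILE]
    i = bisect.bisect_right(temps, temp_c) - 1
    return FAN_PROFILE[max(i, 0)][1]

print(fan_percent(20))  # 22  (idle range, ~1,100 rpm)
print(fan_percent(60))  # 50
print(fan_percent(84))  # 100 (thermal limit)
```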
Thanks a lot for your response! That was what I wanted to know. Currently I have my 980Ti's overclocked with a custom fan profile using MSI Afterburner. My temps are 75C on the top card and 68C on the bottom card. How does Precision X or MSI Afterburner handle the third card if it is not from the same generation as the 1080TI's? Will it show up separately? Will the fan profile apply to all 3 cards? Thanks again for your well-informed post! Oh, and good point about the RAM limitation. I have gotten close to breaking 6GB in some scenes, but for everything I have done so far, all 3 cards would work. In the future, if I break 6GB, I will be aware of the sacrifice I'll have to make unless I buy a 3rd 1080TI later on (unlikely).
Well, I am running off an old-fashioned HD for the next month or so, and the slowdown reading data off the HD is huge compared to an SSD. It's a mystery why the same SSD PCI-e adaptor card worked for over a year but now won't anymore.
As far as DAZ supporting the AMD open-source renderer, it's probably more about the manpower needed to integrate it than a contractual issue; they have the 3DL and OpenGL renderers, after all. At any rate, if they don't support it, Unity will, and that's fine by me too, especially with the improvements Morph 3D is making to exported DAZ models in their Unity plugin.
The Precision XOC software will allow you to link similar cards (it knows which cards are similar) to the same profile; otherwise, like in my example above, it provides separate settings for each card. Oh, and when I mentioned that the cards seem to auto-overclock, that's exactly what they do, using Nvidia's GPU Boost 3.0.
I know this is the 1080ti thread, but the Titan X(P)'s behavior and performance should be almost identical. To that end, I decided to try a little overclocking to see if it is even worth the extra heat and noise. And while you can greatly reduce the noise via water cooling, you cannot reduce the heat output. Using the popular Sickleyield benchmark, my Titan X Pascal performs as follows (the GTX 970 was not used):
Stock settings: 2 min, 7 sec (to the standard 95% convergence). The GPU clock started at 1860 MHz, 1.062v, and dropped down to 1822 MHz at 1.05v as the temp rose from 32C to 60C.
My system is a 2600K overclocked on air to 4GHz with 32GB DDR3 1600 RAM on a Z68 MB in a Corsair Carbide 400R case.
OC settings (+109 core, +495 memory, same fan profile): 1 min, 59 sec. The GPU clock started at 1961 MHz, 1.062v, and dropped down to 1923 MHz, at 1.05v as the temp rose from 32C to 61C. Memory clock was 5005 MHz (10010 MHz effective). Stock clock was 4513 MHz.
Overall, I think these are impressive numbers for air cooling, although the OC shaved just 8 seconds, or 6.3%, off the render time. Given that the best I've read these cards can do under water cooling is around 2088 MHz, there isn't much more savings to be had. Less noise, YES. A longer test with a more complex scene (like the Orient Express I used above) may provide better OC numbers. I'll have to try that.
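For reference, that percentage works out directly from the two render times quoted above:

```python
# Stock 2:07 vs. overclocked 1:59 from the benchmark runs above.
stock_s = 2 * 60 + 7    # 127 s
oc_s    = 1 * 60 + 59   # 119 s

saved = stock_s - oc_s
print(f"Saved {saved} s ({saved / stock_s:.1%} of the render time)")  # 8 s, ~6.3%
```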
...the thing is, the Titan X is still a consumer/enthusiast card, not a pro-grade card. I don't see CG enthusiasts toying around with deep learning and AI, which usually involves supercomputers, not systems like the ones we use. That is what the Quadro P6000, the Tesla GP100, and the forthcoming HBM2 Quadro GP100 (which also uses NVLink) are intended for.