Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
From what I read, almost a 1080. Eh, seems like they are starting to get too many SKUs.
Yes, the 1070 Ti will be nearly a 1080. The 1080 is faster than a regular 1070; it has a solid CUDA-core advantage. You can overclock a 1070 and potentially come very close to, or even match, some 1080 marks, but as I said before, heavy overclocking for something as demanding as Iray is not recommended. The 1070 Ti exists thanks to AMD's Vega 56, which is beating the 1070 in most gaming benchmarks, so Nvidia is countering with the 1070 Ti. They are locking the clock speed to keep the 1070 Ti from cannibalizing the 1080, since the locked clock will prevent most cards from beating 1080 performance. So we can thank AMD for the 1070 Ti.
Here is a big table, if it will work in the forum. Let's see.
[The table didn't survive posting; all that came through were the output-connector cells, mostly 1x HDMI 2.0b and 1x DVI per card.]
If you are rendering scenes that come close to filling up your 8GB card... then yes, get the 1080 Ti.
Here's an observation I've made on my system after adding a GTX 1080 Ti. According to MSI Afterburner, here is my GPU memory usage right now with lots of things open:
GTX 1080 Ti hooked up to one screen @ 3840x2160: 1092MB used
GTX 1060 6GB, no connected screens, set as the dedicated PhysX card in the Nvidia Control Panel: 80MB used
The 1060 stays at 80MB at all times unless it's being used for Iray rendering or as a PhysX processor in a game.
If you don't need it, don't buy it then, and send me the money :) I'll put it to good use.
More seriously, a graphics card isn't only for playing games or rendering with DS. There are other fields, like AI, that can push you to buy one. You just have to find what could push you.
DAZ just introduced dForce, which could also benefit from the GPU, so big simulations are also a possibility.
The fact that you don't think you need it also suggests you may not be rendering big enough scenes. So make bigger scenes and render more.
I'm thinking of buying one too, but I've been mostly using 3Delight so I don't really feel the need for it. And I've been too busy to dabble in AI, but I know I eventually will, and then I'll buy one (or a Volta).
One other thing that will push me to upgrade hardware is Blender's Eevee when it's out. I've tested a few scenes, and having big complicated scenes in real time will certainly be a pleasure.
Otherwise, if you don't really have any interest in upgrading, spend the money on something that will actually serve you (ZBrush, Photoshop, a Wacom tablet, or whatever).
I can't believe you guys are forcing me to spend almost 800 bucks on a card I don't really need against my will !!!!!
This is so bad...
But I do'ed it. I just bought an MSI 1080 Ti from Newegg. And go figure... I got a $60 discount for some reason. Cool. Total damage: just under $720. Should have it by Friday.
But I was very strict with myself. As takeo.kensei suggested, I told myself: if you're going to spend money on something you don't need, then make sure you actually put it to use. And I promised myself I'll start learning how to develop C# apps that use the GPU, instead of the multithreaded-CPU stuff I've done in the past.
You guys are so bad
Pretty decent deal in this market. Hope you have fun with it, I sure am loving mine.
Oh, I see why the $60 discount. They give you a "free" game worth $60, but that $60 is included in the price of the card on the website. And then when you check out they deduct the $60 to make you feel like you got a deal.
It worked. I feel like I got a deal.
Oh wait, they charged me $60 for the game, but it's a separate charge on the credit card. But I don't want the game. But it's a package I guess. Damn.
I wonder if I can get the $60 back. Anyone know how newegg works on stuff like this?
I would talk to their customer service. https://help.newegg.com/contactus
But I've never had to return or cancel anything through them.
Yeah, I emailed them with all the info. We'll see what they say. Thanks.
Okay well my new GTX 1080ti will arrive tomorrow. And since I told myself I MUST find a use for it, I started researching the available libraries for Visual Studio so I can write some C# software that uses the GPU.
And I found a free package called Alea GPU, which seems to be just what I need. It abstracts away the device-level programming and sounds like it makes life a lot easier. In fact, it has a Parallel.For-style method that runs directly on the GPU, just like the regular C# Parallel.For uses CPU threads.
I'm curious: are there any software developers out there who know if this is the best bet for CUDA development with C#/Visual Studio? I saw some others, but this one seems the most tailored to VS and C#.
Thanks.
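For anyone curious, this is the sort of thing I'm hoping Alea GPU will let me write. It's a rough sketch based on their published samples, not something I've run yet (it needs a CUDA-capable card to execute); the Gpu.Default.For call and the [GpuManaged] attribute are from their docs, while the array names and sizes here are just made up for illustration:

```csharp
using System;
using System.Linq;
using Alea;            // Alea GPU core: Gpu, GpuManaged
using Alea.Parallel;   // GPU-side For extension method

class VectorAdd
{
    // [GpuManaged] is supposed to let Alea handle the
    // host <-> device memory copies automatically.
    [GpuManaged]
    static void Main()
    {
        const int n = 1000000;
        var a = Enumerable.Range(0, n).Select(i => (float)i).ToArray();
        var b = Enumerable.Range(0, n).Select(i => (float)(i * 2)).ToArray();
        var sum = new float[n];

        // Same shape as the CPU Parallel.For, but the lambda
        // body is compiled to a CUDA kernel and run on the GPU.
        Gpu.Default.For(0, n, i => sum[i] = a[i] + b[i]);

        Console.WriteLine(sum[3]); // 3 + 6, so 9 if it works
    }
}
```

The appeal, as far as I can tell, is exactly that last call: it looks like ordinary C#, with no explicit kernel launches or cudaMemcpy-style plumbing.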
No idea, but if you're looking for uses for it it is a kick-azz gaming card.
I was surprised. The new Battlefront 2 beta was a week or two ago, and it didn't support SLI. That was OK, as it ran at 60 FPS the whole time at 4K on a single 1080 Ti (EVGA Hybrid).
Okay, so I finally installed the beast. The monster. So now I have a GTX 1070 in parallel with a GTX 1080ti.
Surprisingly, it went without a hitch. Luckily my power supply came with a second set of PCIe power cables (2 x 8-pin) for the cards, so I just plugged it in and fired it up. (The GTX 1070 uses only one 8-pin power connector, and the 1080 Ti uses two, BTW.) The card was immediately recognized, probably because I already have a GTX 1070 installed and it uses the same drivers.
GPU-Z showed the second card, and when I loaded up Studio it automatically recognized and enabled it, so I rendered a scene. The maximum power draw at the wall for my entire computer (Ryzen 7 1700, 3 x 27" monitors, 64GB RAM) while rendering with both cards at 99% utilization was around 380 watts. My PSU is 750 watts, so those who have expressed concern about needing more power might want to rethink their position.
I haven't done any benchmark testing yet, but that will come next. My big hope is it will make a huge improvement in 3D viewport iray response.
BTW, I did notice that with both cards during rendering the sound level is as quiet as with only the GTX 1070. So that's nice.
Wow. I did a quick render, and with only the GTX 1070 I got 5 minutes, and with both the 1070 and 1080ti I got only 2 minutes. That's a 60% improvement. Nice.
So that's $760 for a 60% improvement, or about $12.67 per percentage point. And I was hoping to stay under $10 or so. Whatever.
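For the record, here's how I'm figuring that. Nothing fancy, just plain C#; the numbers are the ones from my two test renders:

```csharp
using System;

class RenderMath
{
    static void Main()
    {
        double before = 5.0;  // minutes, GTX 1070 alone
        double after  = 2.0;  // minutes, 1070 + 1080 Ti together
        double price  = 760;  // what the 1080 Ti cost me

        // Percent reduction in render time: (5 - 2) / 5 = 60%
        double improvement = (before - after) / before * 100;

        Console.WriteLine(
            $"{improvement:F0}% faster, about ${price / improvement:F2} per % gained");
        // prints: 60% faster, about $12.67 per % gained
    }
}
```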
Sell the 1070 and that per % improvement will skyrocket.
I'm trying to figure what you mean...
The $12 per % improvement is relative to just a 1070. Adding a 1080ti gives $12 per % from just the 1070.
So if I sell the 1070 I'm just back to a $760 1080ti. So there's no % improvement relative to anything. Yeah, I gain $400 or whatever, but I lose the contribution of the 1070. So the only comparison is to using the CPU or something?
I think the suggestion is that you try rendering with just the 1080 Ti - some people have reported that rendering with a slower-clocked card drops the clock speed of the faster card, though as I recall others have reported not seeing this.
Haha, guess I misunderstood. The % improvement would obviously be the 1080 Ti vs the 1070. With the high prices 1070s are fetching, I'm pretty sure the performance gained from switching to the 1080 Ti alone, relative to the money spent (after selling the 1070), would work out to less $ per % improvement than you have now.
Basically what bluejaunte said.
Because of mining, mid-range cards are expensive, and GTX 1070s are selling for a good price used. You can probably get what you paid for it or more. Then all you're in for on the 1080 Ti is the purchase price minus what you got for the 1070.
Some more render results...
I did a longer render, and with just the 1070 it took 38 minutes, 45 seconds.
With both the 1070 and 1080ti it took 14 minutes.
That's an improvement of about 64%, which is almost identical to the 66% improvement reported by others for the same set of cards in the Sickleyield benchmark scene.
So for those who have doubts about the validity of the Sickleyield benchmark, I think you can rest easy. Just because it's a small scene doesn't mean it fails to reflect what goes on in larger scenes, and I think that's because we're not talking about absolute times, we're talking about relative times between two GPU configurations.
Now, if someone has data that contradicts all that and shows that Sickleyield's scene isn't representative, please feel free to post it.
By the way, I'm not convinced that installed GPU power has much effect on the responsiveness of the 3D View in Iray mode. Yes, the GPUs render the Iray 3D view, but as you manipulate the view I see the utilization of the GPUs drop and the CPU utilization increase; not by a huge amount, but significantly. For example, when I start to manipulate the view, my CPU utilization will jump from maybe 10-15% to 20-25%, and the utilization of both GPUs will drop from 100% to maybe 70-80%. Maybe there's a big transfer of data back and forth to DAZ Studio's UI as the GPU calculates, and that's what uses CPU power, but I really don't notice much if any improvement in responsiveness after adding the 1080 Ti to the existing 1070.
I think the other settings that have been mentioned do FAR more to improve responsiveness in Iray mode. But with or without the addition of the 1080ti my responsiveness is virtually immediate as long as the settings are optimized.
You are using the 1080 Ti as the main though? There's not going to be both cards used in the normal viewport.
Not sure what you mean by the "main". I have both cards enabled to do Iray Photoreal, no CPU, and both cards are at 99% when the 3dView is in Iray mode. And like I said, both drop to 70-80% as I manipulate the 3d View.
I do have my 3 monitors connected to the 1070 if that's what you mean.
Yeah so pretty sure the viewport still runs on your 1070 only. You need to swap.
I mean, don't get me wrong, I don't think you'll see massive gains. That viewport frankly never felt very optimized to me. It seems to slow down not only with the number of polys but also with the number of items in the scene or something. Still, I'd swap; there's no reason for your 1070 to be the main card. Any realtime OpenGL/DirectX viewport or game is going to run on whatever card the monitors are connected to.
Below is a screenshot of both GPU-z windows showing my 1070 and 1080ti when Studio is just sitting there with the viewport in Iray mode. As you can see they are both running flat at 99% as they both render the viewport. I'm not sure why you're saying that only one card is rendering the viewport. And when I start manipulating the view they both drop in unison to 70-80%. But they're both still rendering the viewport together.
I thought we were talking about the non-Iray, raw viewport performance.
Ah my bad, you said ' 3D View in Iray mode' so... well, maybe try connecting the monitors to the 1080 Ti and test what that does? What happens to VRAM, is Iray Preview just as slow? Should at least be interesting to have some first hand experience.
Clearly with both cards connected the actual "render time" of the 3d View is much shorter than with only the 1070. Which means that both are contributing to the viewport render.
My guess is that when you manipulate the view, things change. Now there are a lot of dynamics updating the Studio user interface in real time, whereas before you had a static image that was just being refined. As you rotate and move your view there's a lot more changing, so maybe there's a lot more communication between the GPUs and the CPU, and maybe main RAM too. And maybe THAT is the limiting factor in how responsive the viewport is as you rotate and such: now you have to worry about the PCIe bus, the CPU, the memory bus, and so on. Maybe all of that is the bottleneck to responsiveness of viewport manipulation.
Heck, I don't know.....
Yeah, I think the responsiveness is definitely linked to raw viewport performance. If you have a complex scene and try to move during Iray Preview, it's basically near impossible; meanwhile, with a near-empty scene it's almost realtime. So if you can make the viewport faster with the 1080 Ti, then moving in Iray Preview may also be faster. All I can say is that with my 1080 Ti it's still often near impossible to move around during Iray Preview; there's definitely a lot of slowdown that seems to have nothing to do with the video card and more to do with just the UI, the CPU, or such. It's a bit like when you link the eyes to the camera and the fps drop dramatically.