Comments
While we continue to wait (E3 seems to be the next window of opportunity for an NVidia announcement, though there's also some noise about NVidia's next-gen GPU being announced later this year), I came across this...
http://www.guru3d.com/news-story/asus-h370-mining-master-connects-20-gpus-over-usb.html
This makes for an interesting thought experiment. How would such a setup (20 GPUs communicating over USB) fare in rendering?
Also, would you only use this in the winter, since the heat output would be significant? I'm thinking that you might need multiple 20 amp circuits, unless the power requirements for each GPU were really low.
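For a rough sense of the wall-power side, here's a quick back-of-the-envelope sketch. The 250 W per card, the 400 W of system overhead, and the 120 V household circuit are all assumptions on my part, not specs for any particular GPU, and the 80% figure is just the usual continuous-load rule of thumb for a circuit:

    # Back-of-the-envelope power math for a hypothetical 20-GPU rig (assumed numbers).
    import math

    gpus = 20
    watts_per_gpu = 250        # assumed board power per card
    system_overhead = 400      # assumed CPU, motherboard, risers, fans

    total_watts = gpus * watts_per_gpu + system_overhead    # 5400 W
    amps_at_120v = total_watts / 120                         # ~45 A
    circuits_20a = math.ceil(amps_at_120v / (20 * 0.8))      # 80% continuous-load rule

    print(f"Total draw: {total_watts} W (~{amps_at_120v:.0f} A at 120 V)")
    print(f"20 A circuits needed: {circuits_20a}")

With those assumptions you're looking at roughly three dedicated 20-amp circuits, so yes, winter-only sounds about right.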
...I have to agree with the last post:
"Go home Asus, you're drunk"
Besides, wouldn't USB be slower than PCIe?
Yes, but the question is, once the scene is loaded, how much bandwidth would be needed for cross-communication? And how much would having 20 GPUs offset the bandwidth deficit?
I'm not suggesting that anyone do this, I'm just curious as to how it might shake out as a rendering machine... i.e. benchmarks, etc., solely as a thought experiment of course.
Threadripper with 64 PCIe lanes is likely a much more formidable rendering machine of course (thinking 6 GPUs at x8 each, or even just 4 GPUs), but such thought experiments intrigue me.
The 1080 released on May 27. The 1070 did not release at the same time; it dropped June 10. On July 15, Octane released 3.02, which added "experimental Pascal support". While technically experimental, it still offered performance boosts over the 980 Ti, and more importantly, it worked. This was just over a month after the 1080 released.
Meanwhile the Iray SDK with support for Pascal did not release until October. Daz added support in the beta shortly after, but the general release with Pascal did not come until January 2017, which is staggering. This is important as not all plugins work well in the betas. So people who bought Pascal cards and made the mistake of only using Pascal cards were 1) forced to wait 6 months for any support at all, and 2) forced to use the beta for another 3 months before the general release finally came out.
All told, it took about 9 months for Daz Studio to officially support Pascal. Some people might say this was outside Daz's control as they must wait on Nvidia. But...that's not entirely true. Daz made the choice when they chose to license Iray. And just think, this could happen every 2 years or so with every new GPU generation! Awesome!
Another problem at the time was that there was very little information upfront about the lack of Pascal support. Aside from forum rants, it was pretty much silence. The page that gives Daz Studio's requirements and recommendations NEVER reflected the lack of Pascal support. You owe it to your customers to list "Pascal support coming soon**" on that page, and in a sticky on the forums. Sure, there are people who don't check these things when building a computer, but there are people who do, and those people were misled. It is Daz's job to do as much as possible to avoid confusing or misleading customers. Customers who get upset, whether it is Daz's fault or the customer's fault, will purchase less and possibly leave. That's not speculation, that is a fact of business.
Whatever happens with the 1100 series, Daz, DO NOT do what you did with Pascal.
And again, I'm holding out hope this will be pointless, and that Volta support will also add the 1100 support.
Even if there is a slight bottleneck, I don't believe it would be all that severe. IMO, 20 GPUs are going to murder any other setup aside from a farm or DGX, whether they are USB or not. Who knows, it might even beat a DGX-1.
I think it would be fantastic to see what a GPU mining rig can do with Iray. Even a small one, like the popular setup of 8x 1070s on risers. I bet that would smoke just about any rig seen in the forums.
...but scene size would still be limited by the VRAM of a single card.
So you're talking a humongous expense just for core-count speed. Even at list price, 20 1080 Tis (and the Nvidia store is out again) would be about $14,000, and you'd need a 6,000 W PSU (there's one by Cisco for $1,600) to support them all at peak performance (you want some "overhead" so the PSU isn't pushed to its limit every time you render). I think I could build a pretty killer workstation for $15,600 that would do just about as well.
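Just to show the arithmetic behind those numbers (the ~$700 per card is simply the $14,000 total divided by 20, and the 250 W per card is an assumption, not a measured figure):

    # Rough cost and PSU-headroom math for the hypothetical 20x 1080 Ti build (assumed figures).
    gpu_count = 20
    gpu_price = 700            # approx. list price implied by the $14,000 total
    psu_price = 1600           # the 6,000 W Cisco unit mentioned above
    watts_per_gpu = 250        # assumed board power per card

    cards_total = gpu_count * gpu_price          # $14,000
    build_total = cards_total + psu_price        # $15,600
    peak_gpu_draw = gpu_count * watts_per_gpu    # 5,000 W
    headroom = 6000 - peak_gpu_draw              # ~1,000 W for CPU, board, and margin

    print(f"Cards: ${cards_total:,}, build total: ${build_total:,}")
    print(f"Peak GPU draw: {peak_gpu_draw} W, PSU headroom: {headroom} W")

So on paper the 6,000 W unit covers the cards with roughly 1,000 W left over for the rest of the system, which is the kind of headroom being talked about above.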
Octane isn't entirely relevant to nVidia's Iray. nVidia did say they recognised that they had been slow to release support (words to that effect) and would do better next time, so hopefully the 11x0 (or whatever) range will not be a repeat of the experience.
I'm hoping to find out soon. A couple of months ago, when I bought my 1080ti, I went to Newegg and picked up some PCIe risers and a PSU extender cable (since I'm already spending money I don't have). So I have the pieces I need; I just have to mount my cards in a case and hook them up. This is mostly experimental, it's not for speed. My cards are a 1080ti, a 770 4gb, a 460 1gb, and a Radeon 5830 1gb. So obviously the 1080 will do all the heavy lifting; I just want to see if the other cards contribute at all and if the render setup time is ridiculously long. I'm including the Radeon because I'll also test in Luxmark. Whatever I render will be limited to 1gb, so this is mainly for testing and benchmarking.
Cost too; shudder.
The next big thing in rendering machines?
https://wccftech.com/atari-vcs-specs-bristol-ridge-radeon-r7/
OK, so it doesn't have CUDA cores, and it's Linux based, but it'll come with Missile Command, Centipede and Asteroids!
I spent many quarters on those games back in the day...
...hmm, 4 GB VRAM and 4 GB physical. Well, maybe a simple portrait could be rendered on it if you don't use HD or 4K textures.
Octane is a GPU rendering software. It is very much relevant to the discussion. The fact of the matter is that Iray's competition was up and running on Pascal nearly 5 months faster than Nvidia's own GPU rendering software. That is, IMO, embarrassing. Sure, Nvidia says they will do better next time, but all you have is their word on that. There is no guarantee that they will follow through.
I for one absolutely hope they do follow through on this promise, because I want to upgrade ASAP from a 970 that is really showing its age for rendering. I do not want to wait 6 months after the product launch to be able to buy or use a new GPU.
Don't make scenes so big, lol. Sorry, I know the VRAM limitations, but for me, I am already limited to less than 4gb. So I am already used to trying to fit scenes into that by removing things, working in post, optimizing, etc. 8gb would be like freakin heaven to me right now. Life is full of compromise.
Anyway, I'm not talking about buying such a crazy machine, I just want to know what such a machine can actually do for Iray. I'm not interested in the cost because I'm not buying it. It's just simple curiosity. These machines exist in the wild, there's a ton of them out there thanks to the wild crypto boom. It would be fun to grab one and play with it for a while to see what it can do, like Linus does on his YouTube show, except for Iray.
How fast can 20 GPUs connected by USB render Victoria in a dungeon with a sword??? Find out in this episode of Outrider's Tech Show!
I meant not relevant to how long it took for there to be a DS that supported the new cards - and I did think there were some significant issues with that first Octane version. And yes, we should of course withhold judgement on the next update until it's here (though aren't there already Quadros with next-generation chips?)
Octane's initial Pascal support was not perfect, but at least it was something, and it worked for most people. Plus the fact that it worked at all meant creators could use it with their new machines. Daz users did not have this option at all for another 5 months. Octane doesn't have to wait for Nvidia; they can start updating Octane as soon as the new cards drop...no middle man here (other than CUDA itself, which is very quick to update, otherwise no CUDA-based app would work). Daz has to wait for Nvidia to release a new Iray SDK before it can even start to update its software. This is a crucial difference. I'll also point out that Blender Cycles beat Daz to Pascal support, too, adding it in late August. Again, they didn't need to wait on Nvidia.
The new Quadros are Volta powered, like the Titan V. The next GTX card will be something...different. That's the thing, we don't know exactly what it will be called. But it is basically a gaming-centered Volta with the Tensor cores supposedly cut down or removed entirely. Which is why I have some hope that, whatever it is called, the current Iray SDK that supports Volta will also in turn support the 1100 series, as they may be close enough in architecture to overlap. I think the chances are decent for this, fingers crossed.
I have asked this question before, but does Daz Iray support Volta yet? Nobody has confirmed this. I remember a user posting that the Titan V did not work yet in the benchmark thread, but that has been a while. The SDK with Volta support has been available for some time now, and there are Iray benchmarks with the Titan V in the link I gave above (they don't use Daz Iray; rather they use the one supplied directly by Nvidia). So the Titan V is supported by Iray itself, but I have not seen any mention of the Titan V or Volta in the Daz Studio change logs for the beta. When is this going to happen? If Daz does not yet support Volta, that does not instill much confidence that Daz will get the 1100 series supported very quickly.
So can somebody please confirm the status of Volta support for Daz Studio?
Apparently AMD has big plans at Computex next week (June 6th, 2018)...
https://wccftech.com/amd-computex-2018-press-conference-6-june/
Whether Nvidia follows suit with some product announcement of their own, or if they just talk about their supercomputer in a box and real time raytracing once again... yeah I guess we'll see next week!
...would be funny if AMD developed a way to run CUDA based render engines on their GPUs.
I don't think they can, legally. Nvidia has already said they've offered CUDA licenses to their competitors at "extremely reasonable" rates, which I think means it's in Nvidia's favor, lol. They both talk like they are open to the idea, but it hasn't happened yet.
Funny?
It would be ffffff awesome. I could drop a company that, at best, can be described as an entity that takes advantage of its strong hardware position. I feel that is also very relevant to the discussion; I'm interested to see what they do, but I'm not looking forward to how much they demand from customers... Especially this one. :(
It would be amazing. AMD does not push the separation of pro vs gaming as much as Nvidia does. Nvidia limits a lot of what GTX cards can do, like how most speculation indicates the 1100 series strips out the Tensor cores. Nvidia reserves a lot of those things for Quadro (remember that Nvidia's driver EULA prohibits deploying GeForce cards in data centers...they REALLY don't want people mixing these cards' roles). AMD gaming cards consistently do very well at tasks that Nvidia skimps on with GTX. That's why miners loved AMD cards; they performed very well at that type of computing. They do much better at FP16, and handle DX12 very well.
Remember, Vega 56 and 64 blew the doors off Nvidia GTX in compute benchmarks like Blender. They even topped the 1080 Ti in some of those benchmarks!
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/17
So, I'm just guessing here, but I think AMD cards might actually run Iray better than their Nvidia GTX counterparts do. In fact, I think they'd run Iray A LOT better than Nvidia's own cards do.
Considering the position that AMD is in now, I think it would be a good idea to just bite the bullet and license CUDA. If they can turn that around and beat Nvidia at their own game, they could start penetrating the market again. I honestly think that gaming is a lost cause for AMD at this point. Their best chance to compete is to focus on the pro level, and adding CUDA to their library would actually be a great way to start.
...yeah, but if a 16 GB GTX 1180 hits the market, what would prevent pro users from going with the 1180 over the higher-priced Quadros or even Titan GPUs?
License restrictions; Nvidia have changed the licence to stop server farms using GTX cards instead of Quadro.
OK, so Nvidia apparently had a conference at Computex today. Here's Anandtech's (now completed) live blog
https://www.anandtech.com/show/12861/computex-2018-nvidia-live-blog-1pm-tw-5am-utc
Short form: still no new news on the gaming front. "Get those 1080 Ti's while they're hot" seemed to be the takeaway...
E3 is next week, so if they are going to do a launch soon, that'd be the next opportunity to showcase it. Based on one of the questions in the Q&A though, Jensen mentioned that we may not see a newly updated gaming card for 'a long time'...
Edit: To be fair, Nvidia did soft launch something which they talked about at the press briefing, but it's targeted at robotics.
https://www.servethehome.com/nvidia-isaac-xavier-launched-to-spur-robotics-revolution/
It was a figure of speech
I suppose too many of my friends ask me if I think their new graphics cards look good (new lights, different flashing colours, combinations etc).
...yeah that's what those side windows on today's cases are for I guess. I'd rather have a big fat exhaust fan there myself.
Yes. For us common folk who don't upgrade every other year, that plexiglass window starts to look pretty rough after a while. Every little thing puts a scratch in it and when you try to clean it, you scratch it even more.
Is there any way to add new fans nowadays? The old ones just used the 5V power supply connector, so you could use splitters and add as many as you wanted. These new ones plug directly into the motherboard and I've run out of ports to plug them into. Am I supposed to buy a separate fan controller or what? Just a general question for anyone who might know.
...interesting, not sure why they would be designed that way.
It's just like the new generation of rechargeable cycling lights. So many today use a USB connection instead of a conventional charger that plugs into the wall. Charging takes longer, and it puts more load on your computer's power system.
Yeah, the new fans (at least on my system) have a small black 3-pin connector that plugs into the motherboard. My mobo has 4 ports and I've used all of them already (3 case and 1 CPU). So for the case I'm using (Antec Nine Hundred), I still have the top fan sitting idle and the side fan (by the GPUs) empty, because there's nowhere to plug them into. I guess they figure an 'enthusiast' would just go out and buy a fan controller.
No biggie though. Whenever I get ready to complain about today's PCs, I remember the 'fun' of setting up IRQs, DMAs, and interrupts back in the day. Not to mention the Baby-AT case specs, where 'pretty close' was good enough for the mounting holes.
...isn't the 900 a relatively older model? I have a P-193 and all the fan connections go to the PSU.