Comments
...so how? Tesla cards are made for raw computing, not graphics production. I know they can assist rendering alongside a Quadro (not a GTX-series card), adding extra cores for speed, since they use the same driver set.
Ideally your render cards are connected to nothing, as they can concentrate on rendering. No memory taken up / no resources spent on Windows.
EDIT: Didn't see the other question. CUDA cores are CUDA cores. You cannot mix Quadro and GTX but you can mix Tesla with GTX in terms of drivers. So the little Tesla just sits in there and COOKS renders... and is a super inefficient space heater.
EDIT AGAIN:
Here we can see my monitors are connected to the motherboard instead of the TITAN Xs, leaving them unmolested to do their work. Also a second PSU sitting on the table to power the TITANs... ELEGANT SOLUTIONS. Well, one of them anyway...
In the system with the little poop Tesla M2090, the AMD FirePro pushes the monitors and the Tesla just renders.
Ok, been checking the NVIDIA site frequently since the wee hours, wasting much of the day. Titan Xps in stock as before, but no 1080s, 1070s, or Bigfoot have been sighted in stock. A cruel tease?
Time for torches and pitchforks! Watch the video with the sound off and add your own comments about cryptominers and NVIDIA marketing : )
https://www.youtube.com/watch?v=qLvGnro4Cgw
Basically, the GPU isn't literally drawing the render in the render window. Forget that it's even called a GPU for a moment and instead just think of it as "specialized hardware that's good at doing the calculations that Iray needs". DAZ sees that this specialized hardware is available, sends the scene to the card(s), and your CPU periodically gets the results back. There are a few separate threads on here talking about CPU load and how the scene gets assembled when there are multiple GPUs involved, but essentially you could just use the onboard graphics on your motherboard or another less powerful video card. When they say there's no video output, that doesn't mean the cards aren't suitable for calculating graphics; it just means there are literally no monitor connectors on the card, so you can't plug your monitor into it. So just happily run your monitor from another card.
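(If it helps to see the pattern rather than just read about it, here's a minimal sketch of that offload idea in plain CUDA C++. To be clear, this is not Iray's actual code, just the general "CPU hands the work to the card, the card does the math, the CPU copies the result back" flow; the kernel and buffer sizes are made up for illustration.)

// Minimal sketch of the offload pattern described above. Nothing here
// touches a monitor or a display driver -- whatever card (or onboard
// graphics) drives your screen only ever sees the finished buffer.
#include <cstdio>
#include <cuda_runtime.h>

// Toy "shading" kernel: one thread per pixel, each computes a value.
__global__ void shade(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = sinf(i * 0.001f) * 0.5f + 0.5f;  // stand-in for real render math
}

int main()
{
    const int n = 1 << 20;                  // pretend this is a 1M-pixel buffer
    float *d_out = nullptr;
    cudaMalloc(&d_out, n * sizeof(float));  // lives in the render card's VRAM

    shade<<<(n + 255) / 256, 256>>>(d_out, n);  // the GPU does the work...

    float *h_out = new float[n];
    cudaMemcpy(h_out, d_out, n * sizeof(float),
               cudaMemcpyDeviceToHost);     // ...and the CPU pulls the result back

    printf("first pixel = %f\n", h_out[0]); // this is what gets displayed,
                                            // by whichever card runs the monitor
    cudaFree(d_out);
    delete[] h_out;
    return 0;
}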
...as I read, it was the other way around: Teslas were compatible with Quadros (and often used in combination to help with computational power), but not GTX cards. Also, if the displays are patched into the MB, wouldn't you just be using the board's onboard integrated graphics?
...interesting.
Well I'm currently considering running the displays off of the original GTX 460 and the rendering off a 4 GB 750 Ti (hopefully Octane 4 will be out in the next couple of months). No way will I risk cooking either of my systems with an old fanless Tesla, particularly not for just an additional 512 cores. Now, if one could build an external box for them with its own cooling and PSU, that would be different.
So could, say, a single 8 GB K10 (3,072 cores) perform rendering without a GTX or Quadro GPU on the board?
To take it one step further, if this helps clarify things: if you have Iray Server, you can have a machine with no GPUs whatsoever taking advantage of the GPUs on another machine on the network. I *think* that has to be Quadros or Teslas, not GTX, but that's beside the point. You'll still get updates back to your render window on your machine, even though no local GPUs are participating in the render (say, just using onboard Intel graphics, or if your main machine has an AMD GPU).

Your CPU is acting as a render manager; it's looking at what resources it has available. You need to be able to load the scene into the GPU's memory so that it's not constantly going back and forth to the CPU to ask for the data it needs. But forget for a minute that you're dealing with graphics: you're really doing a ton of math calculations, which the GPUs happen to be very good at. The results of those calculations are periodically sent back to the CPU (if you watch the history messages you'll see information about when the canvas is updated, and if memory serves there are even some render settings for how often the canvas will refresh). So basically, your CPU is getting a bunch of data back from the GPUs, e.g., the results of one or more render passes for the scene, and combines those results with the results it already has. Those results end up getting sent to whatever video card or onboard graphics you're using for driving your monitor.
I think where you're getting confused is with video games and real-time rendering. In that case, you really do need the graphics card to drive the monitor, since you're trying to push out the rendering results in real time, frequently at 60 FPS or higher, so you really can't have the GPU send the results back to the CPU for assembly.
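(Here's a hedged little sketch of that "render manager" loop, again in plain CUDA C++ and again not Iray's real implementation: the card accumulates passes in its own VRAM, and the host only copies a snapshot back every few passes, which is the "canvas refresh" part. The pass count, refresh interval, and toy sample function are all invented for the example.)

// Sketch of progressive accumulation with periodic canvas refreshes.
#include <cstdio>
#include <cuda_runtime.h>

// Cheap per-pixel, per-pass pseudo-random "sample" -- stands in for one
// path-traced sample; a real renderer obviously does far more here.
__device__ float toy_sample(int pixel, int pass)
{
    unsigned int h = pixel * 9781u + pass * 6271u + 1u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return (h & 0xFFFFu) / 65535.0f;
}

__global__ void add_pass(float *accum, int n, int pass)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        accum[i] += toy_sample(i, pass);     // accumulate in the card's VRAM
}

int main()
{
    const int n = 640 * 480;                 // small "canvas"
    const int passes = 100;
    const int refresh_every = 10;            // made-up update interval

    float *d_accum = nullptr;
    cudaMalloc(&d_accum, n * sizeof(float));
    cudaMemset(d_accum, 0, n * sizeof(float));

    float *h_canvas = new float[n];
    for (int pass = 1; pass <= passes; ++pass) {
        add_pass<<<(n + 255) / 256, 256>>>(d_accum, n, pass);
        if (pass % refresh_every == 0) {
            // "Canvas refresh": the CPU pulls the running total back and
            // averages it; the display card only ever sees this copy.
            cudaMemcpy(h_canvas, d_accum, n * sizeof(float),
                       cudaMemcpyDeviceToHost);
            printf("pass %3d: pixel 0 converging to %f\n",
                   pass, h_canvas[0] / pass);
        }
    }

    cudaFree(d_accum);
    delete[] h_canvas;
    return 0;
}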
...as I read, it was the other way around: Teslas were compatible with Quadros (and often used in combination to help with computational power), but not GTX cards. Also, if the displays are patched into the MB, wouldn't you just be using the board's onboard integrated graphics?
Well yeah. We are not playing Crysis here, we are working in DS. An Intel G3100 would be fine; Iris Pro 6200 is overkill. If you have the option to run DS off your integrated graphics and are not... I got nothing for you. I sorta understand when people game on the same system. I found it much easier to just build a different box for gaming.
So could, say, a single 8 GB K10 (3,072 cores) perform rendering without a GTX or Quadro GPU on the board?
Totes.
It has been my experience that using something other than Nvidia to drive the monitors is my preference. If you are using something Nvidia to drive your monitors and the drivers crash, you just killed whatever is rendering too (not that that kind of thing happens too often, but it can). I like AMD FirePro personally; if you have a driver issue, your render tasks just keep happily plugging away.
So this is an example of one setup.
Or using your K10 example.
In my own tests with dual 1080 Ti, where one has 3 monitors connected to it and the other none, I could not see a difference in rendering speed between the two. In fact, the one without monitors is even a bit slower, probably because it is clocked a little lower.
...have a P6T MB with no onboard graphics, so I need to use the 460 for the displays, which leaves one slot (as a K10 is a dual-width card).
Hmmm... I have seen K10s on eBay for as little as ~$220. The card has two GPUs, so each manages 4 GB; I thought at first it had one GPU with 8 GB. You can also find a used GTX 970 4 GB for a similar price, and it has a slightly better performance rating (for games anyway) compared to the K10, and should use less power. Both have built-in cooling. The features of Octane 4 open up some more possibilities...
...didn't realise it was a dual GPU. Looking for some way to get that 8 GB without paying $700-900. Still, 3,072 cores added to the 640 I already have would be nothing to sneeze at; that's about as much as a Titan Xp. Even with out-of-core rendering in Octane 4 it would rip.
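(If anyone wants to confirm the "two GPUs, 4 GB each" point on their own hardware, a quick sketch with the CUDA runtime API will list each GPU on a dual-GPU board as a separate device with its own memory pool. The output is just whatever your system reports; nothing here is specific to the K10.)

// List every CUDA device and its memory -- a dual-GPU card shows up twice,
// each entry reporting its own VRAM, not a pooled total for the board.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %.1f GB, %d multiprocessors\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.multiProcessorCount);
    }
    return 0;
}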
If my disability case were resolved 6 months earlier, I'd probably have a 1070 Ti.
Yeah, me too, darn it! But as you say, could be great with Octane 4.
The K40 (single GPU, 12 GB) is going for $1100 and up, and the K80 (two GPUs, 24 GB) is going for $1400 and up, used on eBay. Not such a great deal for the K40 compared to the 11 GB 1080 Ti at its original price ($700); the K80 would be better if it were a single GPU with 24 GB. If we see a crypto crash, these prices should drop too.
There are ads for an NVIDIA GRID M40 with 16 GB for $650, but that is not the same card. It is really 4 GPUs with 4 GB each, basically a double K10 for more money than two K10s.
Prices have reduced a little, and there seem to be a few available.
... To me, that is a reason to hold off buying. New tech is being rumoured as available soon, so either new will be tempting, or the deals on the old stuff will be - either way I can wait. I like my cash where it is.
I'm guessing this doesn't affect rendering much, but for a $3000 MSRP card this report is nonetheless noteworthy...
WCCFTech: Nvidia Titan V Reportedly Producing Errors in Scientific Simulations
Of course, if the reported error carries over to rendering (in the form of artifacts or some such)... well it's worth keeping an eye on at least.
This Register article indicates that the error was reproducible, causing errors around 10% of the time...
The Register: 2 + 2 = 4, er, 4.1, no, 4.3... Nvidia's Titan V GPUs spit out 'wrong answers' in scientific simulations
In a related note, there's a forum thread here about the Titan V not playing well in Daz...
https://www.daz3d.com/forums/discussion/224441/nvidia-titan-v-iray-render-fails
...for $3000 I could build a dual 10-core Xeon system with 128 GB of four-channel DDR3 memory. Yeah, it may not be as blisteringly fast, but then I'm not stuck with only 12 GB to render huge, high-quality, high-resolution scenes in.
On the bitcoin front, IBM is developing a new chipset specifically for cheaper blockchain applications, which means all these bitcoin miners will hopefully be dumping a hell of a lot of video cards on the market in the next year or so. Fingers crossed.
...not sure I would want to buy one considering the punishment many have gone through.
I don't think I would want the used ones either, but a proliferation of cheap used cards could translate into lower prices for new ones.
...as you and others here know, I tend to value VRAM more highly than boatloads of stream processors, as I create highly complex and detailed scenes (remember Alpha Channel's epic sweeping scenes? Yeah, as a former painter, that is my inspiration). The $999 Vega Frontier with 16 GB of HBM2 would be very attractive; however, Iray is CUDA only, and as we are just a small segment of the GPU consumer base next to gaming enthusiasts, in that light the notion of "competition" from AMD has been and always will be moot. Add to this Otoy's forthcoming Octane 4, which will be available for $20 a month and offers fast out-of-core rendering that makes a super-high-VRAM GPU unnecessary (and will also support OpenCL as well as the Vulkan API).
While Daz incorporating Iray seemed a great advancement back in 2015, I find it has become somewhat limited for many given what has happened with GPU pricing and availability. If it also had a similar out-of-core render mode, the situation might be different. 3DL and Octane both have standalone solutions (3DL's is free up to 4 cores/8 threads). Iray's standalone is not only expensive (like the full unlimited-core version of 3DL) but has no "third party" processing interface for Daz (like RenderMan RIB for 3DL or Reality for LuxRender), only the high-priced pro-grade software. Were there one for Daz, it would at least reduce the resource demand of having to keep the scene open in Daz while rendering in CPU mode. In that light it would be a bit faster, as there would be less chance of falling into a much slower swap mode.
Nvidia right now is the cat that catches a mouse and just plays with it instead of killing it. Nvidia could easily put AMD (at least the graphics division) out of business, but that would attract attention from the FTC and antitrust regulators. That would eventually lead to fines and a possible breakup. Since the AT&T breakup, companies know exactly what they can and can't get away with. Nvidia has had AMD (ATI) under its paw since it drove 3dfx out of business. Sure, AMD may have a faster card once in a while, or gain more market share, but rest assured, Nvidia has higher profit margins and is sitting on way more cash. They just make it look like a close race to keep the FTC off their back.
I don't really care as long as I get a decent card at a decent price. My problem is when they try to lock users into an "Nvidia only" upgrade path, with applications that only support Iray or Cuda but not OpenCL. My favorite quote from the last 10 years:
So yeah, in theory anyone, AMD or even Intel, could run CUDA code hardware-accelerated just as long as they render unto Nvidia what is Nvidia's. (Sorry for the bad pun.)
...but that would likely be brief, as miners would most likely turn to the newer generation cards which would offer improved performance.
That would be great. If they follow past trends, card makers would lower the MSRP for the older generation of cards... for 6 months, then discontinue them. The struggle continues.
I'm pretty sure I've reminded you that DS can interface with Iray Server in the past.
For people on a budget this is a great card for Iray because of the 6 GB of VRAM. You just need a regular video card to run the monitors (this card has no video out). Yes, Nvidia stopped supporting this card for Iray, but it still works. Just make sure you get the front video-card bracket to mount it in the case, and add two user-controllable 4-pin fans; that way you can speed up the fans to keep your Tesla M2090 from burning up.
...Iray Server, which is a one-year licence for $300 and is designed for network rendering, requiring a separate render system. I don't see any mention in any product description of a Daz plugin, just Rhino, Maya, and C4D.
https://www.daz3d.com/forums/discussion/comment/1649336/#Comment_1649336
See the Advanced tab of Render Settings.
I'm hoping they won't disable it completely. I mean the whole point of the Cuda core thing is that your cards can always contribute, as long as they are compliant. I hope their "discontinued support" will be the same as Microsoft not supporting Windows 98, as in "You can still use it, just don't call us if anything goes wrong." I can live with that.
...I have and it requires a remote system to link to. That still means more expense.