Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
...so how does it know how to interpret .duf scene files? Even Lux and Octane, which are standalone engines, need a plugin for Daz.
My interest is I am looking at using an older Dell workstation as an "in house" render server, not sending files to a remote server on the Net.
...NVLink (on the Volta-generation Quadro GV100 and Tesla V100) was developed to allow for "memory pooling", and that is only for computational purposes.
The attachments appear to be the tab for the Daz remote server.
If memory pooling for rendering with GTX cards (particularly older-generation cards such as the 980 Ti [Maxwell]) were possible, then why haven't people already been doing it?
You might want to investigate your cost to run that box. As an example, just ONE of my render servers will cost approx $100/month in electricity if left running 24/7 and I'm in a very low cost/kWh region. This is a combination of both rendering and idle time. When I fire up the whole rack things get ugly.
Kendall
It doesn't get the .duf file, any more than the local Iray does - it gets data already prepared (by DS) for Iray to render.
Many thanks, must look into this!
...well I wouldn't be running it 24/7 only when actually rendering.
I'm also talking about one box with dual 8-10 core Xeons, 64-128 GB of memory, and no dedicated GPU cards (as they are stupidly expensive today), networked with my current systems. Not a rack of multiple blades that would need its own dedicated power trunk due to the high amount of current it would draw (I live in a rent-controlled apartment, and that kind of modification is a no-no).
$100 per month comes to about $3.33 per day, which is still a lot less than $70-$95 per day, or $900 per week, renting time on someone else's servers. I also live in a fairly low-cost-per-kWh region, and if I hadn't had to run the heat like I did during a cold snap in December, my last bill would have been under $40.
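To sanity-check that kind of comparison, here's a quick sketch of the arithmetic. The wattage, hours, and rates below are assumed examples for illustration, not anyone's actual bill or any provider's actual pricing:

```python
# Rough cost comparison: running a local render box vs. renting server time.
# All numbers below are illustrative assumptions.

def electricity_cost(watts, hours, dollars_per_kwh):
    """Cost of drawing `watts` continuously for `hours` at a given $/kWh rate."""
    return watts / 1000 * hours * dollars_per_kwh

# Assumed dual-Xeon box drawing ~500 W, rendering 8 h/day for 30 days,
# at an assumed low rate of $0.10/kWh:
local_monthly = electricity_cost(500, 8 * 30, 0.10)

# Assumed remote rental at $90/day for the same 30 days:
rental_monthly = 90 * 30

print(f"local:  ${local_monthly:.2f}/month")   # local:  $12.00/month
print(f"rental: ${rental_monthly:.2f}/month")  # rental: $2700.00/month
```

The gap is why an "in house" box can make sense even before counting idle time, as long as you only power it up while actually rendering.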
Okay, I'm not a tech person, as I have stated to you a billion times...lol, so I'm not even going to try to explain it.
So I wrote Kal, my friend from Geek Squad; this is his reply:
-------------------------------------------------------------------------------------
ME: Hi Kal
I had a convo with one of my friends today about the GPU you put in. Is my VRAM stacking, or is it still in SLI?
Kal: No, neither. It does not stack properly as you would think, but it does stack in the server.
AFR requires EACH card to have a copy of ALL graphical data as it has to render the ENTIRE screen.
SFR does NOT require each card to hold everything, but at the same time, each card is working on a completely different portion of the screen, so they only have the data they need to render their portion. HOWEVER, if there's any object that is cut and/or duplicated across both portions, then that data is also duplicated in BOTH cards' VRAM. They do NOT access one from the other.
The reason is that the time and processing power the memory controllers in each GPU spend communicating between the GPUs just to transfer data is time lost.
So, while you can "technically" call it "stacking" in the sense that you're not duplicating data, each GPU is still using its own memory pool by itself through the Windows server we set up. So it's not shared-VRAM stacking; "technically" it's the server stacking the VRAM together, which is what we are doing for bitcoin mining for using block rates, and how render farms set up their GPUs. I set your GPUs up the same way. So even though each card is only 12 gigs on its own, running through the Windows server combines GPU usage as a memory pool. Tell your friend he is right and pay the bet. lol
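A toy back-of-envelope model can make the AFR/SFR distinction above concrete. The scene size and overlap fraction are invented for illustration; real drivers don't allocate VRAM this simply, and this is not a claim about any particular setup:

```python
# Toy model of per-card VRAM use under the two SLI modes described above.
# Scene size and overlap fraction are made-up illustrative numbers.

def afr_per_card(scene_gb):
    # AFR: each card renders whole frames in alternation,
    # so each card holds a full copy of the scene.
    return scene_gb

def sfr_per_card(scene_gb, shared_fraction):
    # SFR: each card holds roughly its half of the scene, plus whatever
    # straddles the screen split and gets duplicated on both cards.
    return scene_gb * 0.5 + scene_gb * shared_fraction

scene = 10.0    # GB of scene data (assumed)
overlap = 0.2   # assume 20% of the scene straddles the split

print(afr_per_card(scene))           # 10.0 -- no pooling at all
print(sfr_per_card(scene, overlap))  # 7.0  -- still well above scene/2
```

Under neither mode does a pair of 12 GB cards behave like one 24 GB card, which is the point being argued in this thread.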
I just bought a laptop with an NVIDIA GTX 970 GPU. This renders quickly enough for me, and I think that from a long-term perspective it's much cheaper than using a render farm.
I'm diggin' this at the moment. Really cool. 1 Quadro P6000 workstation for 90 cents an hour. Lots of options out there at the moment. https://www.paperspace.com/
...so how do their subscriptions and charges work? There is the hourly rate and the $5 monthly storage fee. The P5000 would probably be sufficient for most rendering, and 65¢ an hour comes to only $15.60 per day, far better than $90/day.
Sounds also like you have to install your software on the rented systems. Isn't that against most EULAs and licensing (like if you only have a single-system, non-floating licence)?
Hmmmm....maybe if I sold a kidney.....
They have a page that explains the charges in detail. You have to work out your own licensing problems. For Daz Studio, this isn't a problem. My free Maya licence allows me to install on two machines, but unfortunately they don't have a multi-GPU option, so I'll just use them to host my video server.
...so do you have to install Daz on their system, and what about the content library and runtimes so it can find all the meshes/textures? The price is pretty decent considering the other rendering services I've seen, and even the resources of a single P5000 would be a pretty huge step up from rendering on the CPU like I'm stuck with now.
It's your machine, so you would operate it exactly as you would your own. I wouldn't bother uploading my entire runtime, just the files I need for rendering the job, but the process is the same as if it were on your local desktop. Or you could have the machine access your hard drive, like on a WAN.
...hmm so I wouldn't get a bunch of nicely rendered grey boxes, or multiple path errors?
That link still appears to not be what you've pictured in your setup. You say there's room for 6 slots, 3 dual slot cards, but the link appears to only hold 1 dual slot card. I'd really like to know what the enclosure is that you're using.
When I clicked the link, it appears the site may have changed the product link that was originally posted to that one on sale. I suggest you go to Best Buy; that is where I bought mine. Like I said, Kal Radcliff from the Geek Squad set me up. You'll find that Best Buy actually has a bunch of different sizes, so if you need bigger ones, they do have them. The display they used was a box that had 12 1080 Ti demo cards in it (drool), so they do have bigger ones for your needs.
Mine is a VisionTek - it's Thunderbolt 3, GFX series, just a plain-jane external graphics accelerator enclosure. This one has 6 slots for 3 dual-slot cards, and I have 2 cards in it.
This link talks about these boxes, which they call eGPUs.
Kendall is right about electricity costs being noticeable. I have a 1500VA UPS that keeps track of electricity costs for what's plugged into it. There is also an inexpensive ($20) device called a Kill A Watt Electricity Usage Monitor that can measure power usage.
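For anyone using one of those meters, projecting a monthly bill from a reading is simple arithmetic: the meter accumulates kWh, so divide by the measured time and scale up. The reading and rate below are made-up examples:

```python
# Project a monthly electricity bill from a Kill A Watt-style meter reading.
# The example reading and $/kWh rate are assumptions for illustration.

def monthly_cost(kwh_measured, hours_measured, dollars_per_kwh,
                 hours_per_month=730):
    """Estimate monthly cost from an accumulated-kWh reading over a known period."""
    avg_kw = kwh_measured / hours_measured     # average draw in kW
    return avg_kw * hours_per_month * dollars_per_kwh

# e.g. the meter shows 36 kWh after 72 hours, at an assumed $0.12/kWh:
print(f"${monthly_cost(36, 72, 0.12):.2f}/month")  # $43.80/month
```

A few days of metering while the box renders and idles gives a much better estimate than guessing from the PSU's rated wattage, since cards rarely draw their maximum continuously.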
I had Geek Squad set up mine. I'm pretty computer-dumb; I'm on Win 7 using Windows Server 2014.
If you were to use a data-mining program for the GPU server to do bit-mining, it costs like $5,000 for a blockchain to get started. But if you are combining GPUs in a graphics accelerator enclosure, all you need is a way to connect and control your GPUs. I'm connected through USB 3 using the NVIDIA EVGA Precision X OC server controller, which comes with its own GPU server and was included with my EVGA Titan cards when I bought them. This option lets me change my GPUs on the fly, like the overclocking and power settings, etc. The graphics accelerator enclosure is just a place to keep your equipment from being knocked around.
But a lot of people just hang their GPUs from wire rigs, so you really don't even need a graphics accelerator enclosure, just some kind of wire rack like this so you can cable-tie your GPU cards to it.
All you have to do is a Google search for "homemade wire rig for GPU cards" and you'll see that you can combine a ton of GPUs like that. It's how you connect, power, and control them that matters, not the enclosure.
At the moment I don't have any GPU storage space problems myself, but anything is possible. I've seen the naked rigs. There's something about them that says money is meaningless, like lighting a cigar with a $100 bill. I do think that we are living in a time where processors are making an evolutionary leap. RAM was once $500 a meg. When I bought two meg, all the box would hold, the dumbfounded salesman asked "What are you going to do with it?" I said, "Everything." I'm still working on it.
Old man with new GPU, below.
I'm having a problem: when I try to render a large scene with a remote Iray Server, DAZ crashes while I'm uploading the scene. It just closes and I have to start over. Any idea why this may be happening and how to avoid it?
Thanks!
My UPLOAD speed was too slow when I tried one. Their server kept dropping the connection.
Yes, my upload speed was not fast either, but that should not make DAZ crash. I tried contacting support, but they told me "it's not their problem" and that I should deal with NVIDIA. I tried explaining that the Iray server was not the problem, since what unexpectedly closed was their software, but they still did not answer.
Hiii, I'd like to know about these servers too; could you drop a message to him for me as well?
Dropped you a PM