Comments
And, so what?... As long as they aren't Nvidia GPUs they are of no use for rendering in Iray. It doesn't matter what those chips do in games; we are talking about DS here, and in DS the choices are either a new enough Nvidia GPU or rendering on the CPU.
Running MS Office, sending and receiving email, and browsing the web does not push the limits of the technology. That is what integrated graphics are meant for, and where the integration helps keep costs down and profits up.
...still, an integrated APU is fully dependent on system memory, not its own dedicated graphics memory the way a GPU card is. In my old 32-bit Toshiba notebook, up to 1024 MB of the system's 2 GB of usable memory could be allocated to support the graphics chipset, which subtracted from the total system memory available. For Iray the VRAM requirement is compressed: for example, a 12 GB scene takes up only around 4 GB of VRAM on my Titan X. For an APU, only a maximum of 2 GB can be allocated as VRAM, so that still means primarily rendering on the CPU.
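For a rough feel for that ratio, here is a minimal Python sketch using the roughly 3:1 compression seen in the Titan X example above; the ratio is an assumption that varies per scene, not an Iray spec.

```python
# Back-of-the-envelope VRAM estimate from scene size in system memory.
# The ~3:1 ratio is an assumption taken from the 12 GB -> ~4 GB example
# above; real Iray compression varies with texture content.

def estimate_vram_gb(scene_gb: float, compression_ratio: float = 3.0) -> float:
    """Estimated VRAM footprint for a scene of the given size."""
    return scene_gb / compression_ratio

for scene in (4, 8, 12):
    print(f"{scene} GB scene -> ~{estimate_vram_gb(scene):.1f} GB VRAM")
```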
Another matter: it would require Daz3D to adopt a render engine other than Iray for PBR rendering, as Iray is CUDA based. That would leave AMD ProRender and LuxCoreRender as the two options, both of which are OpenCL based.
As to deep learning, the pro-grade Nvidia "A" series (formerly branded as Quadro™) and computational GPUs (formerly branded as Tesla™) are already there and are a major component of the latest supercomputers. Even the Titan RTX was considered useful for deep learning research and development.
DDR5 RAM has been announced as the much faster system RAM type for the coming AM5 motherboards, and with integrated GPUs being fully dependent on system RAM, the fact that it can be expanded to 64 GB, 128 GB, 256 GB and higher in some cases really isn't a bad thing.
The new Apple M1 machines absolutely do not and cannot use Nvidia GPUs; Nvidia won't even work on driver support for eGPUs there. Apple, Intel, and AMD are all stepping up their research into faster and more capable integrated and discrete GPU technology and specialized AI compute units. Those things are going to become more specialized and faster, not less so.
So yeah, I'm looking forward to this artificial logjam around Nvidia GPUs being broken.
I did the opposite. I got a card last month at Best Buy, then bought the parts to build around it. I saved a little by repurposing the data drives from my old computer. But my computer budget for the next few years is used up.
Keep your eye out though. I'm hoping the Best Buy sales will continue; it gives non-scalpers a chance to get the RTX cards.
Remember, though -- most gamers don't care about Iray support. At all. Period. The lack of Iray API implementation into AMD or Intel graphics chips would therefore have no impact on most customers' purchasing decisions. Even a flood of powerful new non-nVidia cards would placate a considerable percentage of the consumer base, leaving fewer remaining customers to compete over the scarcity of nVidia-based cards. In short, while having more 30-series nVidia cards out there would of course be preferable for enthusiasts like us, any new video cards entering the market would help lighten the load.
...you'd need some deep pockets for 256 GB or more of RAM (DDR4), and DDR5 will most likely be even more expensive. There's also the cost of motherboards and CPUs that can support that much; only higher-end Xeons and Threadrippers can support 256 GB or more. It may come down to the memory needed to support an APU being more expensive than a high-VRAM dedicated GPU.
Don't see many of us "Dazzers" being able to afford that.
Thanks, I do have the Best Buy app installed on my phone with notifications turned on. I went and added every RTX 3000-series card to my "Saved" list in the app, and it never notified me that any cards had shown up at my nearest Best Buy. I take it they didn't get any, because the cards would have to be added as stock before they could be sold, which means they should have shown up in the app but didn't.
From the looks of it, I will likely buy an LHR RTX 3060 Ti on Amazon in November. That will bring the total I've spent since November 2019 on the desktop I started building then to about $3000! A ridiculous amount really, but it is maxed out for what the motherboard is capable of; except of course there is no RTX 3090 in it, but that's more a GPU limitation than a motherboard one.
I assume that someone has already mentioned it, but in case it has not been mentioned: a minimum of 1 TB of space is recommended, in part for the temporary files created during the rendering process; if I remember correctly, complex scenes can take quite a lot of space.
For laptops you can buy external enclosures for graphics cards. I don't know how effective they are with current cards, but they looked pretty handy a few years ago. I haven't checked again because my dream now is to have a desktop with a Thermaltake WP100 case.
In a calculation I saw in one of the threads of this forum, there was talk of 8 TB for content. Looking at what I have, a lot of old freebies and about 1500 products, that calculation seems to me a good approximation for a regular buyer who has a lot of hair and clothes that reach 300 to 700 MB compressed. The textures, decompressed, usually occupy up to four times their compressed space.
The store currently carries 82,000 products (some of them vaulted), of which approximately 50,000 belong to the Genesis 3 / Iray era, taking into consideration that Victoria 7 is product number 21750 and leaving out the products inherited from RuntimeDNA. I have no way to estimate how many Gen3/Iray products there are at Renderosity and other stores; I don't think it's less than half that. So the potential to reach 8 TB is there (and in your wallet, over the years); see the rough estimate after this post.
However, to manage the growth of the library, you can think about getting a QNAP NAS in the future and buying the disks as needed over the years.
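To sanity-check the 8 TB figure above, here's a quick Python sketch. The per-product installed size is my own rough assumption (300 to 700 MB zipped, textures up to 4x bigger unpacked), not a measured number.

```python
# Rough library-size check using the figures discussed above.
# avg_installed_gb is an assumption: 300-700 MB compressed per product,
# with textures expanding up to ~4x when installed.

avg_installed_gb = 1.5

for products in (1500, 5000):
    total_tb = products * avg_installed_gb / 1024
    print(f"{products} products -> ~{total_tb:.1f} TB installed")
# 1500 products -> ~2.2 TB
# 5000 products -> ~7.3 TB, so reaching 8 TB over the years is plausible
```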
...and then there are these "refurbs" (I thought there was an "afraid" emoji?). Thanks.
So many threads about buying a new PC. Why not make a thread, "Looking for a new PC", and have any new posts/threads etc. go under that roof? God knows you are on top of the hardware and what is and what isn't worth it. All of these repetitive threads should be housed under one roof, and you would be the perfect roofer ;)
DDR5 RAM will mostly be "much faster" at selling to those crazy enough to jump on the bandwagon right away. From what I've heard, DDR5 RAM at present isn't really faster than today's top-spec DDR4 RAM. It's new, and just like DDR4, it's going to take time to mature and improve before it blows DDR4 out of the water. I would wait at least a year after it's introduced, since you'll have to buy a new motherboard that supports it, which basically equates to building/rebuilding a whole system.
A comment and question. I've been buying stuff from daz and other places since 2007. ALL my products still only take up about 600MB so all fit on one external drive. I cannot fathom dealing with 8GB of content. More power to you LOL
My question: does Studio's need for disk space for temporary files explain why, with complex scenes, I often have to create the scene, save it, exit, and reopen it in order to render? (If I just render, it crashes; but when I reopen, I can render fine.) The thing is, on my current system I have a 1 TB drive but only about 200 GB free on it.
OK, I have a second question since I'm typing. I'm looking at many options, including some available via Best Buy, not custom made, but eons beyond what I currently have. Still, the maximum RAM offered seems to be 32 GB even though the machine can take up to 64. Is it worth it to get the machine and then upgrade later if I find it necessary? I assume that means buying and installing two 32 GB memory modules, replacing the two 16 GB ones that come installed.
Maybe at first DDR5 will not be that much faster, but I expect that to change pretty quickly. I honestly don't have any info on why; I just have a hunch the spec will move quicker than DDR4 did. Things seem to be moving fast right now; they are already talking about DDR6, and it may be coming out in a few years. The long stagnation that seemed to hang around the Intel four-core era is over. The arms race is back on again. The only problem is pricing.
But DDR5 will bring a serious spec change that many of you here will get more excited about: capacity. A single DDR4 stick is limited to 32 GB, but a single DDR5 stick can have up to 128 GB of RAM. On a single stick!
Now obviously we will need motherboards that can handle more memory, and those will not be cheap. But the possibilities are there; this is a spec beyond even today's workstations, and it can be done on a desktop. You will need a Pro version of Windows to go beyond 128 GB, though.
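The capacity math is simple multiplication. Here is a small sketch assuming a typical four-slot consumer board and the per-stick limits mentioned above; the slot count is an assumption, not a spec.

```python
# Simple max-capacity math for a consumer board, assuming 4 DIMM slots
# and the per-stick limits mentioned above.

SLOTS = 4
for gen, max_stick_gb in (("DDR4", 32), ("DDR5", 128)):
    print(f"{gen}: {SLOTS} x {max_stick_gb} GB = {SLOTS * max_stick_gb} GB")
# DDR4: 4 x 32 GB = 128 GB
# DDR5: 4 x 128 GB = 512 GB  (Windows 10 Home tops out at 128 GB; Pro goes higher)
```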
Nvidia has been working on an ARM-based CPU for a while. It may never get released, and it might only be for workstations if it does, but this could change things a great deal. If Nvidia makes a CPU, that opens the door for an Nvidia-made APU down the road, and that would bring CUDA to APUs. I stress this possibility is waaaaaay out right now, but I just wanted to throw it out there. Currently any APU you buy will not have a CUDA-capable GPU packed with it, so the GPU portion of the chip will be doing nothing for Iray. At that rate, it would make sense to skip the APU and get a CPU that just has a bunch of fast cores, or hunt down an Nvidia GPU.

As for why Nvidia would do this, they have many reasons. I think their CEO has always wanted to make one, so that is one reason, LOL. Plus, with Intel jumping into the GPU market, Nvidia jumping into CPUs only seems fair. That is not a joke: as Intel ramps up GPU production, you can bet they will push PC makers to use both Intel CPU and GPU solutions in desktops and especially laptops. So it makes sense that Nvidia would want to strike back with their own all-in-one solution. Even bigger, Nvidia CPUs could encroach on Intel's main source of business, the workstations.
APUs may become a big deal in the future. Like I said earlier, if this GPU price mess keeps going, things could grind to a halt. But one thing crypto miners don't really use is CPUs; many mining rigs use old and crappy ones, just enough to run the thing. It is also much harder to hoard CPUs, because you need to buy basically a whole PC to use each one. That is why GPU mining is so attractive to miners: it is so easy to scale up an operation. Anyway, that leaves APUs for gaming. And guys, the PS5 and Xbox Series X are APUs; these things can be made if the makers really want to. I am surprised we haven't seen MS use some of their Xbox APUs in Surface tablets, it just seems like a great fit. Maybe in time, since the Xboxes sell so fast right now. At any rate, APUs might just become one of the only ways to game in the future without selling a kidney.
Perhaps in time Daz will bring in a new render engine. I would be surprised, but you never know.
The trouble with AMD GPUs right now goes beyond lacking CUDA. Their software stack outside of gaming is just plain bad; Nvidia pretty much stomps AMD in most professional software. There are exceptions to this of course, but for content creators Nvidia covers all the bases while AMD is only as good in select software. This is an area where I expect Intel GPUs to take off. Intel's GPUs were originally intended to be just for professionals; the gaming side is just a bonus for them. So I expect Intel GPUs to be pretty solid, though again the lack of CUDA will hurt them as well.
You are too kind. I just like the hardware side of things. I did suggest a hardware sub forum once. I think that would make sense, but I guess they consider that falling under the technical help forum. But everybody just reads the commons. I'm guilty of that a lot too.
...well, there is a community member working on a homegrown LuxCoreRender plugin for Daz Studio, which was recently shown over on the Daz Studio forum. The alpha version seems to work pretty well and the results are rather impressive. The developer also mentioned there would be little if any conversion of Iray materials needed (3DL materials may be a little more problematic, but still doable). This would allow users to ditch CUDA, as LuxCore is OpenCL based (Nvidia cards also support OpenCL, although AMD cards tend to support more up-to-date versions), along with the risk of older cards being forced into obsolescence by Iray updates.
As of Q4 last year, "small" Maxwell cards have already been moved to "deprecated" status, and with Nvidia driver support moving exclusively to Windows 10 and Linux on October 4th, I'm not sure how that will affect support of older GPUs.
Two very bad moves on Nvidia's part, given the artificially obscene prices GPU cards are commanding right now, which show few signs of abating.
Due to the cost of new tech, when the faster AM5 motherboards that can run DDR5 RAM at its proper speed are introduced, I'll have to wait, because I won't be able to afford any of it. If they offer an AM4-socket 8-core or 16-core Zen 3+ CPU with an RDNA2 iGPU that will work in my Zen 2 era B450M motherboard, I'll upgrade that part next year though. Those are liable to be fairly priced by then, as everyone with money will be going for the new AM5-level tech.
If your content is that old, then you've not bought anything new for a long time and you've bought next to no recent sets. There are recent single products that run to gigabytes all by themselves.
...and a lot of that is due to 4K and 8K texture files, not so much polys.
I must have been high when I wrote that. I meant I cannot fathom 8TB of content (I thought somebody said they had that). My products take up 600GB, maybe a bit more. The point is I can fit it all easily on an external drive. Apologies.
Pretty much. It is also that shaders use a lot more textures in general. In the old days you basically had a base, a bump, and a normal map, and sometimes an SSS map. Now they often pack several textures per surface, with additional settings for dual-lobe specular and more. So the textures are not only bigger, there are more of them, and that adds up very quickly.
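To put numbers on how fast textures add up, here's the raw, uncompressed math for 8-bit RGBA maps (width x height x 4 bytes). Iray's internal formats and compression differ, so treat this as an upper-bound illustration only.

```python
# Raw, uncompressed size of a square 8-bit RGBA texture:
# width * height * 4 bytes. Renderer-internal formats will differ.

def texture_mib(side_px: int, bytes_per_pixel: int = 4) -> float:
    return side_px * side_px * bytes_per_pixel / 2**20

for side in (2048, 4096, 8192):
    print(f"{side}x{side}: {texture_mib(side):.0f} MiB")
# 2048x2048: 16 MiB
# 4096x4096: 64 MiB
# 8192x8192: 256 MiB, and many surfaces now carry several such maps
```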
Also, some models load at a higher subdivision level, which uses up more memory. SubD 4 really kicks up memory use over 3, and 5 cranks it way up; some PCs cannot even handle SubD 5. Each level raises the poly count by a factor of 4, which adds up extremely fast as you go up.
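That factor of 4 compounds quickly. Here's a tiny sketch of the growth, using an assumed base mesh of roughly 17,000 faces (about the size of a Genesis 8 base mesh; the exact figure is an assumption).

```python
# Face count per SubD level: each level quadruples the count,
# i.e. faces(n) = base_faces * 4**n. The base figure is an assumption.

base_faces = 17_000  # rough ballpark for a Genesis 8 base mesh

for level in range(6):
    print(f"SubD {level}: ~{base_faces * 4**level:,} faces")
# SubD 4: ~4,352,000 faces
# SubD 5: ~17,408,000 faces
```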
There are already some characters that have SubD at 5 by default...
We already have 128, 256, and 512 GB DDR4 RAM sticks (RDIMMs, LRDIMMs, etc., limited to workstation/server boards), so larger RAM sticks won't be anything new, but I wouldn't expect them to make something like that for our average desktops anytime soon unless the OS and everyday programs suddenly become much, much more RAM hungry. My reasoning is they will still want to keep workstations separate. I wouldn't expect DDR5 to be faster either, though I could be wrong; it has higher latency than DDR4 RAM, but as you said, that should improve over time.
...another reason I am avoiding G8.1.
Is that why Niko 8 nearly crashed my system? All I did was load his basic body morphs, nothing that I thought was HD, and my memory usage for a spot render shot up to 98%, even with my 3DL shaders (which have no subsurface and render in a minute without his body shape).
I got my new computer. It's an MSI with an Nvidia RTX 2070 (8 GB) and 32 GB of RAM. I'm doing some tests and Iray is significantly faster. I can actually do spot renders.
Question: if the scene shows up in Task Manager as using more than 8 GB of memory, does that mean the program switches to CPU? I ask because a single figure with hair, a house background set, and an HDRI bumped up to 10 GB in my Task Manager. It's a more complex scene than I could ever do before, and after about 4 minutes it's at the quality I used to get after hours. Just wondering about CPU vs. GPU.
Thanks for all the suggestions.
The working memory used by a scene during setup and the amount of RAM used for rendering are not the same: the working scene will usually be using downsampled maps and will have all the active morphs and joint deformations loaded, while the render will get the full-resolution maps (and geometry) but only the final shape, without any modifiers or weight maps to handle.
I understand that. What I am asking is: if my GPU is 8 GB and my Task Manager shows that, WHEN RENDERING, more than 8 GB is being used, does that mean the render is happening on the CPU rather than the GPU?
Along those lines, under the Advanced tab for Iray, both CPU and GPU are selected. Should I only have GPU?
CPU activity of about 100% would indicate that the CPU was being used (assuming it isn't enabled for standard use); Task Manager will show GPU use in the Performance tab if you switch one of the graphs to CUDA (or Compute 0 if CUDA isn't an option).
In order to render on the GPU, DS has to turn the scene into Iray data, so the total RAM used may well be substantially more than the GPU RAM used. Allowing for that, if a total of 10 GB is used and you have an 8 GB card, I think there is a fair chance the scene has fit and the card is being used.
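If Task Manager's graphs are ambiguous, another way to check is to poll the card directly with nvidia-smi (which ships with the Nvidia driver) while the render runs. The Python wrapper below is just a convenience sketch and assumes nvidia-smi is on your PATH; if memory.used jumps and GPU utilization climbs, the card took the scene, while flat GPU numbers plus a pegged CPU mean Iray dropped to CPU.

```python
# Poll GPU memory use and utilization once a second during a render.
# Assumes the Nvidia driver's nvidia-smi tool is on the PATH.

import subprocess
import time

for _ in range(10):  # sample for ~10 seconds
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=memory.used,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
    time.sleep(1)
```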
...I use MSI Afterburner to monitor rendering and keep track of fan speed and GPU temperature. On average, scenes that are say 11 to 12 GB in system memory take up around 3.8 to 4 GB in VRAM (rendering at Quality setting 2).
The CPU was up near 100 percent. Should I change the setting for Iray to just the Nvidia card and not the CPU?