Looking into new computer--advice?


  • PerttiA Posts: 10,024

    nonesuch00 said:

    PerttiA said:

    I would never accept a motherboard with integrated graphics... They are generally meant for users that are not doing anything special on their prebuilt, lower price computers, the hardware and drivers tend to create problems when one adds a real graphics card into the system and the only upgrade path with them is changing the motherboard.

    It's like integrating whatever up-to-date entertainment technology into cars... The average usable life of the vehicle as such has been 20+ years (here in Finland), and at that point, the integrated entertainment technology (even if still working) has become obsolete and a burden.

    LOL, as if gaming and doing DAZ renders was actually more special than people actually using their computers to work for a living?

    Yes, integrated graphics and discrete graphics have a big gap in capabilities between them, but they are both getting better by leaps and bounds, and now, in the next generation or two or three (LOL), we will have AMD integrated graphics that are on par with current-day RTX 3060s, or certainly 3050s.

    If anyone who renders tells me that integrated GPUs will never be good enough, I can remember many thinking that the day when real-time Iray rendering would come was an impossibility, but as we see it's not. It's not an impossibility for integrated graphics either. We already know it's coming to AMD's iGPUs next year. That is not my conjecture, that is fact.

    Looking at the amount of power the next-gen Ada Lovelace cards draw, the concern going forward for most will start being the heat generated and one's electric bill rather than the prices scalpers are asking. That's a huge change in capabilities if you ask me.

    I can see the day when GPUs will start splitting into A.I. processors specialized for different types of subject matter, AIPUs instead of GPUs, as nVidia, AMD, intel, Apple, Synopsys, and others look to capitalize on A.I. capabilities and these ever more complex and specialized GPUs.

    And, so what?... As long as they are not Nvidia GPUs, they are of no use for rendering Iray; it doesn't matter what they do in games. We are talking about DS here, and in DS the choices are either a new enough Nvidia GPU or rendering on the CPU.

    Running MS Office, sending and receiving emails and using whatever browser does not push the limits of the technology; that is what integrated graphics are meant for, and where the integration helps keep the cost down and profits up.

  • kyoto kid Posts: 41,857
    edited August 2021

    nonesuch00 said:

    PerttiA said:

    kyoto kid said:

    ...will be interesting to see the results. 

    My concern is that it's still integrated graphics, which means it shares system RAM. The article in Tom's that I just skimmed through mentions it's a good interim solution until dedicated GPU prices come back down to earth, but as usual only gaming benchmarks were mentioned (and those at low settings), which mean little when it comes to 3D rendering.

    I would never accept a motherboard with integrated graphics... They are generally meant for users that are not doing anything special on their prebuilt, lower price computers, the hardware and drivers tend to create problems when one adds a real graphics card into the system and the only upgrade path with them is changing the motherboard.

    It's like integrating whatever up-to-date entertainment technology into cars... The average usable life of the vehicle as such has been 20+ years (here in Finland), and at that point, the integrated entertainment technology (even if still working) has become obsolete and a burden.

    LOL, as if gaming and doing DAZ renders was actually more special than people actually using their computers to work for a living?

    Yes, integrated graphics and discrete graphics have a big gap in capabilities between them, but they are both getting better by leaps and bounds, and now, in the next generation or two or three (LOL), we will have AMD integrated graphics that are on par with current-day RTX 3060s, or certainly 3050s.

    If anyone who renders tells me that integrated GPUs will never be good enough, I can remember many thinking that the day when real-time Iray rendering would come was an impossibility, but as we see it's not. It's not an impossibility for integrated graphics either. We already know it's coming to AMD's iGPUs next year. That is not my conjecture, that is fact.

    Looking at the amount of power the next-gen Ada Lovelace cards draw, the concern going forward for most will start being the heat generated and one's electric bill rather than the prices scalpers are asking. That's a huge change in capabilities if you ask me.

    I can see the day when GPUs will start splitting into A.I. processors specialized for different types of subject matter, AIPUs instead of GPUs, as nVidia, AMD, intel, Apple, Synopsys, and others look to capitalize on A.I. capabilities and these ever more complex and specialized GPUs.

    ...still, an integrated APU is fully dependent on system memory, not its own dedicated graphics memory like a GPU card has. In my old 32-bit Toshiba notebook, up to 1,024 MB of the system's 2 GB of usable memory could be allocated to support the graphics chipset, which subtracted from the total system memory available. For Iray the VRAM requirement is reduced by compression; for example, a 12 GB scene takes up only around 4 GB of VRAM on my Titan X. For an APU only a maximum of 2 GB can be allocated as VRAM, so that still means primarily rendering on the CPU.
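
    A rough back-of-the-envelope sketch of that fit check, using the roughly 3:1 ratio implied by the 12 GB to 4 GB example above (the ratio is an assumption and varies per scene):

        # Illustrative only: does a scene roughly fit in VRAM after compression?
        def fits_in_vram(scene_size_gb, vram_gb, compression_ratio=3.0):
            return scene_size_gb / compression_ratio <= vram_gb

        print(fits_in_vram(12, 12))   # 12 GB scene on a 12 GB Titan X -> True
        print(fits_in_vram(12, 2))    # same scene in a 2 GB APU allocation -> False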

    Another matter: it would require Daz 3D to adopt a different render engine than Iray for PBR rendering, as Iray is CUDA-based. This would leave AMD ProRender and LuxCore Render as the two options, which are both OpenCL-based.

    As to deep learning, the pro-grade Nvidia "A" series (formerly branded as Quadro™) and computational GPUs (formerly branded as Tesla™) are already there and are a major component of the latest supercomputers. Even the Titan RTX was considered useful for deep learning research and development.

     

    Post edited by kyoto kid on
  • nonesuch00 Posts: 18,729
    edited August 2021

    kyoto kid said:


    ...still, an integrated APU is fully dependent on system memory, not its own dedicated graphics memory like a GPU card has. In my old 32-bit Toshiba notebook, up to 1,024 MB of the system's 2 GB of usable memory could be allocated to support the graphics chipset, which subtracted from the total system memory available. For Iray the VRAM requirement is reduced by compression; for example, a 12 GB scene takes up only around 4 GB of VRAM on my Titan X. For an APU only a maximum of 2 GB can be allocated as VRAM, so that still means primarily rendering on the CPU.

    Another matter: it would require Daz 3D to adopt a different render engine than Iray for PBR rendering, as Iray is CUDA-based. This would leave AMD ProRender and LuxCore Render as the two options, which are both OpenCL-based.

    As to deep learning, the pro-grade Nvidia "A" series (formerly branded as Quadro™) and computational GPUs (formerly branded as Tesla™) are already there and are a major component of the latest supercomputers. Even the Titan RTX was considered useful for deep learning research and development.

     

    DDR5 RAM has been announced as the much faster system RAM type for the coming AM5 motherboards, and integrated GPUs being fully dependent on system RAM, when it can be expanded to 64 GB, 128 GB, 256 GB and higher in some cases, really isn't a bad thing.

    The new Apple M1 machines absolutely do not and cannot use nVidia GPUs. nVidia won't even work on driver support for eGPUs. Apple, intel, and AMD are all stepping up their research into faster and more capable integrated and discrete GPU technology and specialized AI compute units. Those things are going to become more specialized and faster, not less so.

    So yeah, I'm looking at this artificial logjam for nVidia GPUs being broken.

     

    Post edited by nonesuch00 on
  • winmath Posts: 142

    nonesuch00 said:

    winmath said:

    RTX cards are in stock on 8/26/21 at select Best Buy stores. One per customer. Get there early - https://www.pcgamer.com/a-massive-restock-of-nvidia-rtx-30-series-gpus-is-happening-at-best-buy-tomorrow/

    You know, I bought that new AMD GPU & 2 TB M.2 NVMe SSD out of my RTX 3000s savings this week just for you other folks who have been wishing for a new RTX GPU at retail cost, because I knew that as soon as I had spent enough money that I didn't even have enough cash left at retail price for those RTX 3000s, of course they'd get some in. You could set your clock by it.

    I did the opposite. I got a card last month at Best Buy, then bought the parts to build around it. I saved a little by repurposing the data drives from my old computer. But my computer budget for the next few years is used up.

    Keep your eye out though. I'm hoping the Best Buy sales will continue. It allows non-scalpers a chance to get the RTX cards.

  • Remember, though -- most gamers don't care about Iray support.  At all.  Period. The lack of Iray API implementation into AMD or Intel graphics chips would therefore have no impact on most customers' purchasing decisions. Even a flood of powerful new non-nVidia cards would placate a considerable percentage of the consumer base, leaving fewer remaining customers to compete over the scarcity of nVidia-based cards. In short, while having more 30-series nVidia cards out there would of course be preferable for enthusiasts like us, any new video cards entering the market would help lighten the load.

  • kyoto kid Posts: 41,857
    edited August 2021

    ...need some deep pockets for 256 GB or more of RAM (DDR4), and most likely DDR5 will be even more expensive. There's also the cost of motherboards and CPUs that can support that much; only higher-end Xeons and Threadrippers can support 256 GB or more. It may come down to the memory needed to support an APU being more expensive than a high-VRAM dedicated GPU.

    Don't see many of us "Dazzers" being able to afford that..

    Post edited by kyoto kid on
  • nonesuch00 Posts: 18,729

    winmath said:

    nonesuch00 said:

    winmath said:

    RTX cards are in stock on 8/26/21 at select Best Buy stores. One per customer. Get there early - https://www.pcgamer.com/a-massive-restock-of-nvidia-rtx-30-series-gpus-is-happening-at-best-buy-tomorrow/

    You know, I bought that new AMD GPU & 2 TB M.2 NVMe SSD out of my RTX 3000s savings this week just for you other folks who have been wishing for a new RTX GPU at retail cost, because I knew that as soon as I had spent enough money that I didn't even have enough cash left at retail price for those RTX 3000s, of course they'd get some in. You could set your clock by it.

    I did the opposite. I got a card last month at Best Buy, then bought the parts to build around it. I saved a little by repurposing the data drives from my old computer. But my computer budget for the next few years is used up.

    Keep your eye out though. I'm hoping the Best Buy sales will continue. It allows non-scalpers a chance to get the RTX cards.

    Thanks, I do have the Best Buy app installed on my phone and notifications turned on. I went and added every RTX 3000-series card to my "Saved" list in the Best Buy app, and that app never notified me that any cards had shown up at the Best Buy nearest me, so I take it they didn't get any, because they would have had to add them as stock before they could sell them, which means the cards should have shown in that app but didn't.

    I will likely buy an LHR RTX 3060 Ti on Amazon in November, from the looks of it. That will bring the money I've spent since November 2019 on the desktop I started building then to a total of about $3,000! A ridiculous amount really, but it is maxed out for what the motherboard is capable of, except of course there is no RTX 3090 in it, but that's more of a GPU capability than a motherboard capability.

  • almahiedra Posts: 1,365

    PerttiA said:

    Choppski said:

    When you mention that I need terabytes of storage space--is that for content or is that for running the program? That is, is it because it uses disk space?

    Content 

    I assume someone has already mentioned it, but in case it has not been: the minimum of 1 TB of space is recommended partly for the temporary files created during the rendering process; if I remember correctly, complex scenes can take up quite a lot of space.

    For laptops you can buy external enclosures for graphics cards. I don't know how effective they are for current graphics, but they looked pretty handy a few years ago. I haven't checked again because my dream now is to have a desktop with a Thermaltake WP100 case.

    In a calculation I saw in one of the threads of this forum there was talk of 8 TB for content. Looking at what I have, a lot of old freebies and about 1,500 products, that calculation seems to me a good approximation for a regular buyer who has a lot of hair and clothing products that come to 300 to 700 MB compressed. The textures, once decompressed, usually occupy up to four times their compressed space.

    The store currently reaches 82,000 products (some of them vaulted), of which approximately 50,000 belong to the Genesis 3/Iray era, considering that Victoria 7 is product 21750 and leaving aside the products inherited from RuntimeDNA. I have no way to estimate how many Gen 3/Iray products there are at Renderosity and other stores, but I don't think it's less than half as many. So the potential to reach 8 TB is there (and in your wallet, over the years).
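
    A quick sketch of that arithmetic (the per-product size and decompression factor are illustrative assumptions based on the rough figures above, not measured data):

        # Illustrative only: rough content-library size estimate.
        products = 1500                # library size mentioned above
        avg_compressed_gb = 0.5        # assume ~500 MB per product (300-700 MB range)
        decompression_factor = 4       # textures can expand up to ~4x when installed

        installed_tb = products * avg_compressed_gb * decompression_factor / 1024
        print(f"~{installed_tb:.1f} TB installed")   # ~2.9 TB; double the product
                                                     # count (other stores) and the
                                                     # 8 TB figure is in reach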

    However, to manage the growth of the library, you can plan on getting a QNAP NAS in the future and buying the disks you need over the years.

  • ed3D Posts: 2,475

    ...and then there are these "refurbs" (thought there was an "afraid" emoji??)... thanks

  • ArtAngel Posts: 1,942

    outrider42 said:

    Honestly you picked the absolute worst time in history to look for new hardware. That is not me exaggerating, that seriously is the situation right now. Prices are insane on everything, especially anything GPU related, and while laptops are not quite as inflated as desktops, there is no question prices should be better than they currently are. So this makes it extremely hard to offer good advice. With the current conditions just the act of finding a decent laptop that is in stock AND at a reasonable price is extremely difficult.

    BTW, there are no 3090s for laptops, and there are no plans for them, just like there were no 1080 Ti or 2080 Ti laptops. They are too power-hungry for mobile. Besides, the laptop versions are drastically pared down compared to their desktop counterparts.

    This extended inflation is going to do serious harm to these hobbies if it stays like this. The only people who can afford to pay these prices are people who make money off their GPUs, not hobbyists. While some people have OK hardware, there are also plenty of people who would like to jump into these hobbies but cannot. The barrier to entry is rising. This is a field that can only grow when people have hardware to try it out! That is why Iray and GPU rendering was a revelation for Daz back in 2015. It opened brand new doors to rendering for people. I do not believe Daz would be nearly as popular today if it still only offered CPU rendering. I can tell you right now I sure wouldn't be here. GPU changed the entire industry from the top down. But if GPUs stay absurdly difficult to buy and stay at sky-high prices, it is going to chase people away.

    BTW, there is one other option, though it is kind of bonkers. You could use a desktop GPU with an enclosure or some other contraption. This is technically mobile, but slightly dangerous. So you could try connecting an enclosure to your laptop, you'll need an extra power brick, too.

    This video doesn't show a laptop, but it could be done that way. You do not have an M.2 drive in that old laptop, though; we can be sure of that. Still, you may have other options. Of course, you still have to be able to find a GPU to plug into the thing.

    So many threads about buying a new PC. Why not make a thread, "Looking for a new PC...", and have any new posts/threads etc. go under that roof? God knows you are on top of the hardware and what is and what isn't. All of these repetitive threads should be housed under one roof. And you would be the perfect roofer ;)

  • nonesuch00 said:


    DDR5 RAM has been announced as the much faster system RAM type for the coming AM5 motherboards, and integrated GPUs being fully dependent on system RAM, when it can be expanded to 64 GB, 128 GB, 256 GB and higher in some cases, really isn't a bad thing.

    The new Apple M1 machines absolutely do not and cannot use nVidia GPUs. nVidia won't even work on driver support for eGPUs. Apple, intel, and AMD are all stepping up their research into faster and more capable integrated and discrete GPU technology and specialized AI compute units. Those things are going to become more specialized and faster, not less so.

    So yeah, I'm looking at this artificial logjam for nVidia GPUs being broken.

     

    DDR5 RAM will be much faster at selling for those crazy enough to jump on the bandwagon right away. From what I've heard, DDR5 RAM at present isn't really faster than today's top-spec DDR4 RAM. It's new, and just like DDR4, it's going to take time to mature and improve before it blows DDR4 out of the water. I would wait at least a year after it's introduced, since you'll have to buy a new motherboard that supports it, which basically equates to building/rebuilding a whole system.

  • A comment and question. I've been buying stuff from daz and other places since 2007. ALL my products still only take up about 600MB so all fit on one external drive. I cannot fathom dealing with 8GB of content. More power to you LOL

    My question: does Studio's need for disk space for temporary files explain why, with complex scenes, I often have to create the scene, save it, exit, and reopen it in order to render (because if I just render it crashes, but then when I reopen, I can render fine)? The thing is, on my current system I have a 1 TB drive but only about 200 GB free on it.

    OK, I have a second question since I am typing. I'm looking at many options, and there are some even via Best Buy--not custom made, but eons beyond what I currently have. Still, the max RAM available seems to be 32 GB even though the machine can take up to 64 GB. Is it worth it to get the machine and then later, if I find it necessary, upgrade? I assume that means buying and installing two 32 GB memory modules, replacing the two 16 GB that come installed.

  • souravpadhi89 Posts: 80
    edited September 2021
    Choppski said:

    A comment and question. I've been buying stuff from daz and other places since 2007. ALL my products still only take up about 600MB so all fit on one external drive. I cannot fathom dealing with 8GB of content. More power to you LOL

    My question: does Studio's need for disk space for temporary files explain why, with complex scenes, I often have to create the scene, save it, exit, and reopen it in order to render (because if I just render it crashes, but then when I reopen, I can render fine)? The thing is, on my current system I have a 1 TB drive but only about 200 GB free on it.

    OK, I have a second question since I am typing. I'm looking at many options, and there are some even via Best Buy--not custom made, but eons beyond what I currently have. Still, the max RAM available seems to be 32 GB even though the machine can take up to 64 GB. Is it worth it to get the machine and then later, if I find it necessary, upgrade? I assume that means buying and installing two 32 GB memory modules, replacing the two 16 GB that come installed.

    Q1. I don't understand, because I haven't faced such a situation yet. Q2. You need to check the capacity of the motherboard. Double-check that the motherboard supports at least 64 GB of RAM. If yes, then you can definitely upgrade in the future. You can also keep your current 2x16 GB RAM and buy another 2x16 GB kit if the motherboard has 4 RAM slots. Make sure that all the RAM modules are the same speed, e.g. 3200 MHz.
    Post edited by souravpadhi89 on
  • outrider42 Posts: 3,679
    edited September 2021

    magog_a4eb71ab said:


    DDR5 RAM will be much faster at selling for those crazy enough to jump on the bandwagon right away. From what I've heard, DDR5 RAM at present isn't really faster than today's top-spec DDR4 RAM. It's new, and just like DDR4, it's going to take time to mature and improve before it blows DDR4 out of the water. I would wait at least a year after it's introduced, since you'll have to buy a new motherboard that supports it, which basically equates to building/rebuilding a whole system.

    Maybe at first DDR5 will not be that much faster, but I expect that to change pretty quickly. I honestly don't have any info on why, I just have a hunch the spec will move quicker than DDR4 did. Things seem to be moving fast right now. They are already talking about DDR6 and it may be coming out in a few years. The long stagnation that seemed to hang around the Intel 4 core era is over. The arms race is back on again. The only problem is pricing.

    But DDR5 will bring a serious spec change that many of you here will get excited about more: capacity. A single DDR4 stick is limited to 32GB. But a single DDR5 stick can have up to 128GB of RAM. On a single stick!

    Now obviously we will need motherboards that can handle more memory, and these things will not be cheap. But the possibilities are there; this is a capacity beyond even many workstations, and it can be done on a desktop. You will need a Pro version of Windows to go beyond 128 GB.
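
    For scale, a rough sketch of what that per-stick difference could mean on an ordinary four-slot board (the slot count is an illustrative assumption, and actual board limits vary):

        # Illustrative capacity ceilings for a hypothetical 4-slot desktop board.
        slots = 4
        ddr4_max_per_stick_gb = 32     # per-stick figure mentioned above
        ddr5_max_per_stick_gb = 128    # per-stick figure mentioned above

        print(slots * ddr4_max_per_stick_gb)   # 128 GB with DDR4
        print(slots * ddr5_max_per_stick_gb)   # 512 GB with DDR5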

    Nvidia has been working on an ARM-based CPU for a while. It may never get released, and it might only be for workstations if it does. But this could change things a great deal. If Nvidia makes a CPU, that opens the door for an Nvidia-made APU down the road. That would bring CUDA to APUs. I stress this possibility is waaaaaay out right now, but I just wanted to throw that out there. Currently any APU you buy will not have a CUDA-capable GPU packed with it, so the GPU portion of the chip will be doing nothing for Iray. At that rate, it would make sense to skip the APU and get a CPU that just has a bunch of fast cores, or hunt down an Nvidia GPU. As for why Nvidia would do this, they have many reasons. I think their CEO has always wanted to make one, so that is one reason, LOL. Plus, with Intel jumping into the GPU market, Nvidia jumping into CPUs only seems fair. That is not a joke. As Intel ramps up GPU production, you can bet they will push PC makers to use both Intel CPU and GPU solutions in desktops and especially laptops. So it makes sense that Nvidia would want to strike back with their own all-in-one solution. But even bigger, Nvidia CPUs could encroach on Intel's main source of business, the workstations.

    APUs may become a big deal in the future. Like I said earlier, if this GPU price mess keeps going, things could grind to a halt. But one thing crypto miners don't really use is CPUs. Many mining rigs use old and crappy ones, just enough to run the thing. It is also much harder to hoard CPUs, because you need to buy basically a whole PC to use each one. That is why GPU mining is so attractive to miners; it is so easy to scale up an operation. Anyway, that leaves APUs for gaming. And guys, the PS5 and Xbox Series X are APUs. These things can be made if they really want to. I am surprised we haven't seen MS use some of their Xbox APUs in Surface tablets; it just seems like a great fit. Maybe in time, since the Xboxes sell so fast right now. At any rate, APUs might just become one of the only ways to game in the future without selling a kidney.

    Perhaps in time Daz will bring in a new render engine. I would be surprised, but you never know.

    The trouble with AMD GPUs right now goes beyond lacking CUDA. Their software stack outside of gaming is just plain bad. Nvidia pretty much stomps AMD in a lot of professional software. There are exceptions to this of course, but for content creators Nvidia covers all of the bases while AMD is only as good in select software. This is something I expect Intel GPUs to take off with. Intel's GPUs were originally intended just for professionals; the gaming is just a side for them. So I expect Intel GPUs to be pretty solid, though again the lack of CUDA will hurt them as well.

    ArtAngel said:

    So many threads about buying a new PC. Why not make a thread, "Looking for a new PC...", and have any new posts/threads etc. go under that roof? God knows you are on top of the hardware and what is and what isn't. All of these repetitive threads should be housed under one roof. And you would be the perfect roofer ;)

    You are too kind. I just like the hardware side of things. I did suggest a hardware sub forum once. I think that would make sense, but I guess they consider that falling under the technical help forum. But everybody just reads the commons. I'm guilty of that a lot too.

    Post edited by outrider42 on
  • kyoto kid Posts: 41,857
    edited September 2021

    ...well, there is a community member working on a homegrown LuxCore Render plugin for Daz, which was recently shown over on the Daz Studio forum. The alpha version seems to work pretty well and the results are rather impressive. The developer also mentioned there would be little if any conversion of Iray materials needed (3DL materials may be a little more problematic, but still doable). This would allow users to ditch CUDA, as LuxCore is OpenCL-based (Nvidia cards also support OpenCL, although AMD cards tend to support more up-to-date versions), along with the risk of older cards being forced into obsolescence by Iray updates.

    As of Q4 last year, "small" Maxwell cards have already been moved to "deprecated" status, and with Nvidia driver support moving exclusively to Windows 10 and Linux on Oct 4th, I'm not sure how that will affect support of older GPUs.

    Two very bad moves on Nvidia's part, given the artificially obscene prices GPU cards are demanding right now, which show few signs of abating.

    Post edited by kyoto kid on
  • nonesuch00 Posts: 18,729

    magog_a4eb71ab said:


    DDR5 RAM will be much faster at selling for those crazy enough to jump on the bandwagon right away. From what I've heard, DDR5 RAM at present isn't really faster than today's top-spec DDR4 RAM. It's new, and just like DDR4, it's going to take time to mature and improve before it blows DDR4 out of the water. I would wait at least a year after it's introduced, since you'll have to buy a new motherboard that supports it, which basically equates to building/rebuilding a whole system.

    Due to the costs of new tech, I'll have to wait when the faster AM5 motherboards that can run DDR5 RAM at its proper speed are introduced, because I won't be able to afford any of it. If they offer an AM4-socket 8-core or 16-core Zen 3+ CPU with an RDNA2 iGPU that will work in my Zen 2-era B450M motherboard, I'll upgrade that part next year though. Those are liable to be fairly priced then, as everyone with money will be going for the AM5-level new tech.

  • nonesuch00 Posts: 18,729

    Choppski said:

    A comment and question. I've been buying stuff from daz and other places since 2007. ALL my products still only take up about 600MB so all fit on one external drive. I cannot fathom dealing with 8GB of content. More power to you LOL

    My question: does Studio's need for disk space for temporary files explain why, with complex scenes, I often have to create the scene, save it, exit, and reopen it in order to render (because if I just render it crashes, but then when I reopen, I can render fine)? The thing is, on my current system I have a 1 TB drive but only about 200 GB free on it.

    OK, I have a second question since I am typing. I'm looking at many options, and there are some even via Best Buy--not custom made, but eons beyond what I currently have. Still, the max RAM available seems to be 32 GB even though the machine can take up to 64 GB. Is it worth it to get the machine and then later, if I find it necessary, upgrade? I assume that means buying and installing two 32 GB memory modules, replacing the two 16 GB that come installed.

    If your content is that old, then you've not bought anything new for a long time and you've bought next to no recent sets. There are single recent products that run to gigabytes all by themselves.

  • kyoto kid Posts: 41,857

    ...and a lot of that is due to 4K and 8K texture files, not so much polys.

  • Choppski Posts: 627
    edited September 2021

    nonesuch00 said:


    If your content is that old, then you've not bought anything new for a long time and you've bought next to no recent sets. There are single recent products that run to gigabytes all by themselves.

    I must have been high when I wrote that. I meant I cannot fathom 8TB of content (I thought somebody said they had that). My products take 600GB maybe a bit more. The point is I can fit it all easily on an external drive. Apologies.

    Post edited by Richard Haseltine on
  • outrider42 Posts: 3,679

    kyoto kid said:

    ...and a lot of that is due to 4K and 8K texture files, not so much polys.

    Pretty much. It is also that the shaders use a lot more textures in general. In the old days you basically had a base, bump, and normal map, and sometimes an SSS map. Now they often pack several new textures per surface, with additional settings for dual-lobe specular and more. So the textures are not only bigger, there are more of them. That adds up very quickly.
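
    For a sense of scale, a rough sketch of how uncompressed texture memory grows with resolution and map count (the map counts are illustrative assumptions, and Iray compresses textures on the card, so actual VRAM use is lower):

        # Illustrative only: uncompressed size of 8-bit RGBA textures per surface.
        def texture_mb(resolution, maps_per_surface):
            bytes_total = resolution * resolution * 4 * maps_per_surface
            return bytes_total / (1024 ** 2)

        print(texture_mb(2048, 3))   # "old days": base/bump/normal at 2K  ->   48 MB
        print(texture_mb(4096, 6))   # a modern surface with six 4K maps   ->  384 MB
        print(texture_mb(8192, 6))   # the same maps at 8K                 -> 1536 MB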

    Also, some models are loading at higher subdivision, which uses up more memory. SubD 4 really kicks up memory use over 3, and 5 cranks it way up. Some PCs cannot even handle subD 5. Each level raises the poly count by a factor of 4, which adds up extremely fast as you go up.
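
    And a quick illustration of that factor-of-4 growth per SubD level (the base polygon count is just an example figure, not any particular character):

        # Illustrative only: each SubD level multiplies the quad count by 4.
        base_polys = 16_000          # hypothetical base-resolution mesh
        for level in range(6):
            print(f"SubD {level}: {base_polys * 4 ** level:,} polys")
        # SubD 3 -> 1,024,000; SubD 4 -> 4,096,000; SubD 5 -> 16,384,000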

  • PerttiA Posts: 10,024

    outrider42 said:

    Also, some models are loading at higher subdivision, which uses up more memory. SubD 4 really kicks up memory use over 3, and 5 cranks it way up. Some PCs cannot even handle subD 5. Each level raises the poly count by a factor of 4, which adds up extremely fast as you go up.

    There are already some characters that have SubD at 5 by default... 

  • outrider42 said:


    Maybe at first DDR5 will not be that much faster, but I expect that to change pretty quickly. I honestly don't have any info on why, I just have a hunch the spec will move quicker than DDR4 did. Things seem to be moving fast right now. They are already talking about DDR6 and it may be coming out in a few years. The long stagnation that seemed to hang around the Intel 4 core era is over. The arms race is back on again. The only problem is pricing.

    But DDR5 will bring a serious spec change that many of you here will get excited about more: capacity. A single DDR4 stick is limited to 32GB. But a single DDR5 stick can have up to 128GB of RAM. On a single stick!

    Now obviously we will need motherboards that can handle more memory, and these things will not be cheap. But the possibilities are there; this is a capacity beyond even many workstations, and it can be done on a desktop. You will need a Pro version of Windows to go beyond 128 GB.

    We already have 128, 256, and 512 GB DDR4 RAM sticks (RDIMMs, LRDIMMs, etc., limited to workstation/server boards), so larger RAM sticks won't be anything new, but I wouldn't expect them to make something like that for our average desktops anytime soon unless the OS and everyday-use programs suddenly become much, much more RAM-hungry. My reasoning is they will still want to keep workstations separate. I wouldn't expect it to be faster, but I could be wrong. It has higher latency than DDR4 RAM, but that, as you said, should improve over time.

  • kyoto kid Posts: 41,857

    PerttiA said:

    outrider42 said:

    Also, some models are loading at higher subdivision, which uses up more memory. SubD 4 really kicks up memory use over 3, and 5 cranks it way up. Some PCs cannot even handle subD 5. Each level raises the poly count by a factor of 4, which adds up extremely fast as you go up.

    There are already some characters that have SubD at 5 by default... 

    ...another reason I am avoiding G8.1.

  • Is that why Niko 8 nearly crashed my system? All I did was load his basic body morphs, nothing that I thought was HD, and my memory during a spot render shot up to 98%, even with my 3DL shaders (which have no subsurface and render in a minute without his body shape).

  • I got my new computer. It's an MSI with an Nvidia RTX 2070 8 GB and 32 GB of RAM. Doing some tests, and Iray is significantly faster. I can actually do spot renders.

    Question: if the scene shows up in the Task Manager as using more than 8 GB of memory, does that mean that the program switches to CPU? I ask because a single figure with hair, a house background set, and an HDRI bumped up to 10 GB in my Task Manager. It's a more complex scene than I could ever do before, and after like 4 minutes it's the quality I used to get after hours. Just wondering about the CPU vs GPU.

    Thanks for all the suggestions.

  • Choppski said:

    I got my new computer. It's an MSI with an Nvidia RTX 2070 8 GB and 32 GB of RAM. Doing some tests, and Iray is significantly faster. I can actually do spot renders.

    Question: if the scene shows up in the Task Manager as using more than 8 GB of memory, does that mean that the program switches to CPU? I ask because a single figure with hair, a house background set, and an HDRI bumped up to 10 GB in my Task Manager. It's a more complex scene than I could ever do before, and after like 4 minutes it's the quality I used to get after hours. Just wondering about the CPU vs GPU.

    Thanks for all the suggestions.

    The working memory used by a scene in setup and the amount of RAM used for rendering are not the same - the working scene will usually be using downsampled maps and will have all the active morphs and joint deformations loaded; the render will get the full-resolution maps (and geometry) but will get the final shape, without any modifiers or weight maps to handle.

  • Richard Haseltine said:


    The working memory used by a scene in setup and the amount of RAM used for rendering are not the same - the working scene will usually be using downsampled maps and will have all the active morphs and joint deformations loaded; the render will get the full-resolution maps (and geometry) but will get the final shape, without any modifiers or weight maps to handle.

    I understand that. What I am asking is: if my GPU is 8 GB and my Task Manager shows that WHEN RENDERING more than 8 GB is being used, does that mean the render is happening via CPU rather than GPU?

    Along those lines, under the Advanced tab for Iray both CPU and GPU are selected. Should I only have GPU?

  • Choppski said:


    I understand that. What I am asking is: if my GPU is 8 GB and my Task Manager shows that WHEN RENDERING more than 8 GB is being used, does that mean the render is happening via CPU rather than GPU?

    Along those lines, under the Advanced tab for Iray both CPU and GPU are selected. Should I only have GPU?

    CPU activity of about 100% would indicate that the CPU was being used (assuming it isn't enabled for standard use); Task Manager will show GPU use in the Performance tab if you switch one of the graphs to CUDA (or Compute 0 if CUDA isn't an option).

    In order to render on the GPU, DS has to turn the scene into Iray data, so the total RAM used may well be substantially more than the GPU RAM used - allowing for that, if a total of 10 GB is used and you have an 8 GB card, I think there is a fair chance the scene has fitted and the card is being used.
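
    If you want to watch the card directly while a render runs, rather than through Task Manager, one option is to poll the nvidia-smi tool that ships with the Nvidia driver; a minimal sketch, assuming nvidia-smi is on your PATH:

        # Minimal sketch: sample GPU memory use and load every few seconds while
        # a render is running, to see whether the GPU is actually doing the work.
        import subprocess
        import time

        for _ in range(10):                      # ten samples, five seconds apart
            out = subprocess.check_output([
                "nvidia-smi",
                "--query-gpu=memory.used,utilization.gpu",
                "--format=csv,noheader",
            ]).decode().strip()
            print(out)                           # e.g. "6144 MiB, 97 %"
            time.sleep(5)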

  • kyoto kid Posts: 41,857

    ...I use MSI Afterburner to monitor rendering and keep track of fan speed/GPU temperature. On average scenes that are say 11 to 12 GB in System memory take up around 3.8 to 4 GB in VRAM (rendering at Quality setting 2).

  • Richard Haseltine said:


    CPU activity of about 100% would indicate that the CPU was being used (assuming it isn't enabled for standard use); Task Manager will show GPU use in the Performance tab if you switch one of the graphs to CUDA (or Compute 0 if CUDA isn't an option).

    In order to render on the GPU, DS has to turn the scene into Iray data, so the total RAM used may well be substantially more than the GPU RAM used - allowing for that, if a total of 10 GB is used and you have an 8 GB card, I think there is a fair chance the scene has fitted and the card is being used.

    The CPU was up near 100 percent. Should I change the setting for Iray to just the Nvidia card and not the CPU?
