$750 and $900 rigs for DAZ Studio

PadonePadone Posts: 3,481
edited May 2019 in Daz Studio Discussion

I feel this may be useful to the many newcomers wondering what specs are needed to run Daz Studio. The configuration below is just $750 and runs both Daz Studio 4.10 and 4.11 fine. You will need the Scene Optimizer addon to fit complex scenes on the GPU, but this is almost always true even for 11 GB cards. Consider that halving the texture resolution cuts VRAM use to a quarter, so if you need 8 GB with 4K textures you only need 2 GB with 2K textures.

https://pcpartpicker.com/user/Padone/saved/pkxZRB

https://www.daz3d.com/scene-optimizer
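As a quick sketch of the texture math above (illustrative only; real VRAM use also depends on compression and the rest of the scene):

```python
def texture_vram_gb(base_gb, base_res, new_res):
    # Texture memory scales with the square of resolution:
    # halving each side (4K -> 2K) quarters the pixel count,
    # and so roughly quarters the VRAM the textures need.
    return base_gb * (new_res / base_res) ** 2

print(texture_vram_gb(8, 4096, 2048))  # 2.0 -> 8 GB at 4K becomes ~2 GB at 2K
```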

 

EDIT. One may ask why not Intel. Well, the strong point of this rig is that the Vega iGPU performs about on par with a 1030, so you get a fast, responsive OpenGL viewport while the 1060 is rendering. No slowdowns ever.

EDIT. One may also ask why not a 2060. Here I was aiming at a rig powerful enough to run Daz Studio quite smoothly but at an affordable price. To me the 2060 seems unnecessary for this goal, also because the RTX features are not used by DS. The rig is capable enough to support a 2060 though, so it is a matter of choice.

 

EDIT. On second thought, following the suggestions of Takeo and DrunkMonkey, I added a second rig with overclocking capabilities for gaming and an HDD for large online asset collections. I also upgraded the CPU from a 2200G to a 2400G for better CPU rendering. This brings the budget from $750 to $900, but it is a more powerful and general-purpose rig.

https://pcpartpicker.com/user/Padone/saved/JvbNQ7

As a personal note, I'd avoid both overclocking and large online collections, and I also believe CPU rendering is quite obsolete at this point. But I guess it's fair to leave those points as a personal choice.

 

EDIT. Here's a fix to make AMD and Nvidia OpenCL work together; otherwise each driver disables the other's OpenCL.

https://community.amd.com/message/2909519

EDIT. Here's a couple of tools to help manage your SSD. See the post below for more details.

https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete

https://github.com/CyberShadow/trimcheck

Post edited by Padone on

Comments

  • Thanks for bringing this up. It looks like I have to switch from Mac to PC to get decent render times; now I have something to start with.

    Erik

  • I'm gonna preface this by saying I don't own these parts, so I can't speak from personal experience. My statements are based on the motherboard's user guide, the benchmark results of the CPU and GPU, and a little research and comparison against my own parts and testing results.
     

    For gaming this system is going to be fine.

    For 3d work this system is mediocre at best.

    The first problem I came across is that, according to the user guide pg. 33, the integrated GPU may be disabled depending on the GPU in the pci-e slot.

    Second problem is that using an igpu means less system ram.

    Third, the iGPU maxes out at 2 GB. While this sounds like a lot, it really isn't when working in 3D.

    System ram.

    16 GB may seem like a lot of system RAM, but it's not.

    Considering you're going to lose up to 2 GB to the iGPU, and another 1-2 GB just to the OS, and so on.

    It disappears rather quickly.


    Then when you go to render, you can be using 3x (or more) the amount of RAM used during scene setup.

    A 2 GB scene takes 6 GB; an 8 GB scene takes 24 GB.

    This is regardless of CPU or GPU rendering.

     

    The hard drive.

     

    500gb is fine for a gaming system.

    But again its insufficient for 3d.

    Asset libraries can get very large, very quickly.

    Just going through the rendo, sharecg, and daz freebies you could fill that drive in less than a day.

     

    The CPU.

    The benchmarks are not that impressive.

    They come in comparable to 7-year-old processors that can be purchased for $5 on the secondary market.

    Cinebench score of 585 avg.

    My 5645 and 6174 do better than that.

    So when that thing drops to CPU rendering, it's going to tank.

     

    The secondary GPU.

    A 1060 is an OK card for GPU rendering.

    The benchmarks are kinda all over the place.

    In some its performance is well below a 980 or 970.

    Those have less VRAM, though.

    So a trade off.


     

    Conclusion.

    In all honesty you can run ds on a total potato.

    As long as the GPU supports OpenGL 1.3, that's pretty much all that matters.

    I'm not going to recommend any specific parts, just some general specs.

    RAM: 32 GB minimum.

    Graphics out: 4 GB, AMD or Nvidia, doesn't matter. Not for rendering.

    GPU rendering: 6 GB GPU minimum. This is a secondary GPU that doesn't get connected to a monitor.

    Check out crypto mining equipment for adding GPUs to an existing system.

    PCIe x1-to-x16 adapters, port multipliers, breakout boards, and server power supplies to power them.

    There are some more serious solutions, such as VCAs, 16-bay GPU boxes for servers, 8-GPU mobos, and some even weirder products of the mining craze.

     

    CPU rendering: look into multi CPU servers and iray server.

     

    Just my two cents, take it for what it's worth.
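    The 3x render-RAM rule of thumb above (2 GB scene -> 6 GB, 8 GB -> 24 GB) can be sketched as a quick calculation; the multiplier is the poster's observation, not a fixed constant:

```python
def peak_render_ram_gb(setup_gb, multiplier=3.0):
    # Peak system RAM while rendering ~= setup RAM x 3 (or more),
    # per the rule of thumb quoted above.
    return setup_gb * multiplier

for setup_gb in (2, 8):
    print(f"{setup_gb} GB scene -> ~{peak_render_ram_gb(setup_gb):.0f} GB while rendering")
```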

     

  • PadonePadone Posts: 3,481
    edited March 2019
    1. The first problem I came across is that, according to the user guide pg. 33, the integrated GPU may be disabled depending on the GPU in the pci-e slot.

    2. the igpu maxes at 2gb. While this sounds like a lot, it really isn't when working in 3d.

    3. 16gb may seem like a lot of system ram, but its not.

    4. 500gb is fine for a gaming system. But again its insufficient for 3d. Asset libraries can get very large, very quickly.

    5. The CPU. The benchmarks are not that impressive. So when that thing drops to CPU rendering its going to tank.

    6. The secondary GPU. A 1060 is an OK card for GPU rendering.

    First, thank you very much for your evaluation; I do appreciate it. Keep in mind that this rig is designed to be both powerful enough and affordable, so tradeoffs are needed. Of course it is not excellent; this is a $750 rig. Now I'll reply as best I can. I numbered the points for easier reference.

     

    1. If you look at page 31 of the mobo manual, you can choose the display you want in the BIOS: integrated GPU or PCIe GPU. My suggestion is to use the Vega for the viewport and the 1060 for rendering.

    Initial Display Output
    Specifies the first initiation of the monitor display from the installed PCI Express graphics card or the onboard
    graphics.
    >>IGD Video Sets the onboard graphics as the first display.
    >>PCIe 1 Slot Sets the graphics card on the PCIEX16 slot as the first display. (Default)

    2. 2 GB is enough for the OpenGL viewport, because you don't have to use full-size textures there. You can choose the texture quality in the DS control panel. For the Iray preview the 1060 will be used. Again, using the Vega for the viewport prevents slowdowns and lockups because Windows will not poll the 1060.

    3. The rule of thumb here is RAM = 2 x VRAM minimum. 16 GB is good for a 6 GB card, and there's space for the Vega's 2 GB framebuffer. If you go for an 8 GB card or more, then I agree that 16 GB of RAM is not enough. But that is not the case with this configuration. Of course I'm talking about Iray here, since 3Delight may require more RAM, being CPU-based and lacking texture compression. For CPU rendering the mobo supports up to 64 GB anyway.

    4. This is a matter of personal preference. I agree that if you want to keep all your assets online you may need a 4 TB HDD or more, depending on your collection. Personally I believe that keeping a huge database online is asking for trouble. I keep my assets on an external backup disk and only install the assets I need for the current project. When I switch projects I simply switch the library. This keeps things reliable and fast, it's SSD-friendly, and it also avoids data loss and conflicts among assets.

    5. Again, this rig is configured for Iray, not 3Delight. CPU rendering is not what this rig is for. If you need both GPU and CPU rendering, then I agree that a Ryzen 5 2400G is better. The rig is fully capable of supporting it; no other components need to be changed. Or you could even install a Ryzen 7 2700X and add a 1030 for the viewport, since the mobo has two PCIe slots.

    6. I agree. As I said, you will need the Scene Optimizer to fit complex scenes on the GPU; then the 1060 will work fine with 6 GB, and 16 GB of RAM will be fine as a consequence. Working without the Scene Optimizer is possible, but you will be very limited with complex scenes even on 11 GB cards, since texture memory grows quadratically with resolution, not linearly.
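    The RAM = 2 x VRAM rule from point 3, sketched as a quick calculation (the 2x factor and the 2 GB Vega framebuffer are the figures from this post, not a general spec):

```python
def min_system_ram_gb(render_vram_gb, igpu_framebuffer_gb=2):
    # RAM = 2 x VRAM for the render card, plus the iGPU
    # framebuffer that is carved out of system RAM.
    return 2 * render_vram_gb + igpu_framebuffer_gb

print(min_system_ram_gb(6))  # 14 -> fits in 16 GB
print(min_system_ram_gb(8))  # 18 -> 16 GB is not enough
```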

     

    EDIT. I'd add some notes on the comparison you made with old processors, since I feel this may be useful. It is true that overall the Xeon E5645 and the Opteron 6174 are comparable in performance (about 6000-6500 on PassMark, compared to 7500 for the Ryzen, so the Ryzen is somewhat better anyway). But if you consider single-thread performance, things change a lot: you have 800-1000 vs 1800 for the Ryzen. Single-thread performance is important in DS because it doesn't use multithreading much. For example, geometry smoothing is single-threaded afaik.

    And anyway, comparing a server-class processor to a PC-class processor is not entirely fair in my opinion, because they serve different purposes and use different architectures.

    https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+3+2200G&id=3186

    https://www.cpubenchmark.net/cpu.php?cpu=AMD+Opteron+6174&id=1916

    https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5645+@+2.40GHz&id=1252

    As for the 1060 vs the 970 and 980, they are comparable on PassMark, but as you said yourself, the VRAM makes the difference.

    https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+1060&id=3548

    https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+970&id=2954

    https://www.videocardbenchmark.net/gpu.php?gpu=geforce+gtx+980&id=2953

    Post edited by Padone on
  • Ron KnightsRon Knights Posts: 1,736

    Wow, this is too complicated. Skip it.

  • edited March 2019

    Some testing I wanted to do before I responded to this post, so here goes.

    1. You're misinterpreting what is said on page 31.
    The setting "Initial Display Output" determines the GPU that will be used during POST.
    It may or may not determine which GPU is used past that.
    That is why the setting "Integrated Graphics" on page 33 states that it may automatically disable the iGPU depending on what GPU is in the PCIe slot.

    I have several systems, including servers, that let me boot off the iGPU, but when they load Windows they switch over to the discrete GPU.
    Depending on the particular system, the iGPU then becomes a passthrough for the discrete GPU.
    Meaning it doesn't matter what the iGPU's settings or specs are; it's just a pipeline for the discrete GPU to output to a monitor.

    Or it's just disabled completely.


    2. As someone who uses a 2 GB discrete GPU daily, I will agree that 2 GB is fairly sufficient.
    And turning down the settings under Interface is a good idea, regardless of the available VRAM.

    Focusing on textures alone may result in ignoring other factors:
    geometry, sub-d, instances, shader settings, light setup, etc.
    You always need to take a scene in its entirety, not just one aspect of it.

    "Windows doesn't poll the 1060" isn't true.
    While the VRAM usage may be low (checking via smi or GPU-Z), there is always some, depending on the GPU.
    From my experience this primarily depends on whether the GPU has a graphics-out port.
    Working with Teslas, I find my C-class does have higher RAM usage than my non-C.
    The C-class has a graphics-out port, whereas the others don't.
    Even without it there is still reported VRAM utilization, even when there is no direct load on the card.

    But again, this may be a moot discussion if the system is not utilizing the iGPU.

    3. That "rule of thumb" is in regards to video games.
    As I said previously, Iray will take 3x (or more) the system RAM used during scene setup when it renders.
    CPU or GPU rendering is irrelevant.

    Check it yourself.
    Load a scene in DS and open Task Manager.
    Look at how much RAM is utilized before rendering and during rendering.
    You may need to switch to Resource Monitor to get a more accurate usage level.

    I've attached a few screenshots to demonstrate what I'm talking about.

    The scene only uses 6 GB of system RAM while working, but utilizes 21 GB when rendering.
    And it still fits within the 6 GB of the C2070 I was using.

    This test did bring up some weirdness with Iray.
    If I went straight to rendering on load, it would consistently drop to CPU rendering.
    But if I dropped the render size to 720p (1280x720), let it render for an iteration or two, then canceled and closed out the render and reset it to UHD (3840x2160), it would now fit within the 6 GB of VRAM of the C2070 I was testing on.
    This is with default texture compression.


    Regarding the RAM usage of 3Delight vs Iray:

    In my experience, 3Delight has a tendency to be lower, by a couple of magnitudes on average.
    This will of course depend on scene composition.


    Regarding the 64 GB of RAM the system maxes out at, there's something you're not considering.
    Instead of just adding two more sticks at a later date, you're going to need to buy a full four sticks.
    If you start with half the max, 32 GB, in two sticks, you just need to purchase a new pair in the future.
    At a cost increase of only $60 USD for G.Skill ($150) on Newegg for the same base clock, 2666, it'll save you a bit in the future.
    A full 64 GB is running $337 for 4x16 GB.
    So buying two of the G.Skill kits will save $37.

    I've personally had this issue bite me in the butt.

    4. This is not "...a matter of personal preference"; you only included a single drive with a 500 GB capacity in the build list.

    What you do, or I do, is irrelevant to this build, as no secondary storage is included.

    Regarding the rest of your statement:

    You're going to have to clarify a bit here, as your workflow seems a bit convoluted.
    If I'm reading it correctly, you keep your assets compressed on an external drive and then, when you decide to use them, you install to the SSD.

    Correct me if I'm interpreting what you posted incorrectly.


    As far as the difference between using an external and an internal drive for assets, the failure rates are fairly comparable.
    You run the same risk of data loss with either.
    The difference is that you're adding additional failure points with external drives.

    Unless you are disconnecting the external drive from the system, it's always "online" as well.

    Another consideration is the performance hit you may be taking.
    Depending on the drive and connection type, this could be negligible compared to an internal drive, or it could be significant.

    In regards to "...conflicts among assets", sorry mate, there's no way to keep that from happening unless you test every combination of assets you have.
    Even with them on separate drives, or a split installation, conflicts can still happen.
     

    5. I never said anything about 3Delight.
    I implied Iray.
    I thought I was pretty clear when I said "...drops to CPU...".
    3Delight goes straight to CPU, while Iray generally goes to GPU, and if VRAM is exceeded it "drops" to CPU rendering.

    I'm only discussing the build as presented.

    6. You don't 'need' Scene Optimizer for anything.
    You need to know how to analyze your scene and determine whether a tool like S.O. is even going to be beneficial.

    I've gotten mixed results so far in the limited testing I've done.
    I'm still on the fence about usability.
    Most of the functionality isn't anything you can't already do; it's just slightly more efficient using the script.


    Regarding the edit you made:

    I just brought up those two particular CPUs because I use them daily and they're $5-10 on eBay with close performance, per Cinebench scores.

    Since you want to discount one class of CPU, here's a couple of sub-$5, consumer-class CPUs that provide better single-thread performance than the Xeon and Opteron:

    C2D E8400: 1248
    C2D E7400: 1144

    Their multi-core performance is crap though, being only dual cores.

    I also own both of those CPUs.

     

    "And anyway comparing a server-class processor to a [consumer] class processor is not too fair in my opinion because they serve different purposes and use different architectures"

    I'll kindly disagree.
    They serve the exact same purpose: running software.
    Their architectural differences are fairly irrelevant.
    They both have their strengths and weaknesses.
    Limiting oneself to one class of hardware, consumer vs enterprise, is not a good thing.
    You'll be missing out on options you may not have considered.

    This is part of the reason I brought up the crypto-mining parts.
    These parts open up a world of new possibilities.
    Actually get some use out of those x1 slots on your motherboard.
    Get a mobo that has 19 PCIe connections.
    Use port extenders.
    With the recent unlocking of the GPU limitation, ho boy, the possibilities.

     

    prerender2.png
    1933 x 1089 - 293K
    render1.png
    1925 x 1077 - 383K
    720.png
    1921 x 1085 - 1M
    2160.png
    1921 x 1077 - 1M
    Post edited by DrunkMonkeyProductions on
  • PadonePadone Posts: 3,481
    edited April 2019

     some testing i wanted to do before i responded this post, so here goes.

    Hi DrunkMonkey, again thank you for taking the time to evaluate this rig. I believe your considerations are useful both to me and to anyone interested.

    The news is I built this rig myself over the weekend. The only difference is the mobo, where I got a GA-A320M-S2H with only two RAM slots and one PCIe slot; that's the little brother of the GA-AB350M-DS3H. And I got a Manli 1060 instead of the Zotac. I also checked it with PassMark to be sure of performance, and everything works fine. So I know I can do reliable tests. I'll keep our numbering scheme since it's clean for anyone to follow.

    1. As a first test I can already tell you that the Vega and the 1060 work fine together. I can use the Vega for the viewport and the 1060 for rendering. So if I plug the monitor into the mobo, the BIOS doesn't disable the Vega graphics.

    I'll run tests in the next few days and let you know what I find, to reply to points 2 through 6. I absolutely agree that mining rigs using PCIe risers are an excellent option for production rendering, though I'd rather go with online render farms.

    Bye for now.

     

    EDIT. Also, I see that prices rose a little, so I updated the title accordingly.

    Post edited by Padone on
  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    My two cents :

    - CPU: you could go further down with the AMD 200GE if you want to lower the price. The problem with this processor is that it's a dual core and you will only be able to get PCIe at 4x max. Really only for very tight budgets.

    Otherwise I'd rather wait to see the new Ryzen 3300G.

    - Motherboard: the B450 chipset may have been a better choice, as it should have better memory support. Or you could also try to get a low-priced X370, which could be good if you ever plan to change the CPU.

    - Memory: you could get some 3000 MHz memory for $1 more.

    - Gfx card: a GTX 1660 would have been a better choice for a few more dollars. The GTX 1060 should have dropped in price, but that doesn't seem to be the case.

     

  • PadonePadone Posts: 3,481
    edited April 2019

    My two cents :

    Hi. The goal of this rig is to be both affordable and powerful enough to work smoothly with 4.10 and 4.11. I chose the Ryzen 2000 series because it is stable with most mobos now, while newer processors may have compatibility issues. That is, you may have to update the BIOS first using another CPU. I believe stability is an important factor when designing a rig.

    Likewise, the X370 chipset and 3000 MHz RAM are for overclocking, which is useful for games. Since the rig is intended for 3D rendering, again, stability and reliability are much more important. This is why I chose to stay away from overclocking. It also helps keep overnight rendering sessions safe because temperatures stay lower.

    Finally, the 1660 doesn't work with 4.10 and previous versions; it only works with 4.11, and it seems there are still issues, such as OptiX and the denoiser not working properly. This is why I chose a 1060 instead.

     

    EDIT. On second thought, I added overclocking capabilities to the rig so it can be used both for rendering and gaming.

    Post edited by Padone on
  • PadonePadone Posts: 3,481
    edited April 2019

     some testing i wanted to do before i responded this post, so here goes.

    2. and 3. I did some tests on memory usage, using the standard free content provided by DAZ; I just resized the textures. What I found out is that the pergola scene with the four G1F-G8F characters takes about 500 MB of OpenGL VRAM on the Vega card, while Windows itself takes 100 MB on the Vega (300 MB with IE11) and nothing at all on the 1060. So the 1060 is free from Windows polling and completely dedicated to rendering.

    While rendering I got 2.4 GB of VRAM on the 1060 and 3.7 GB of RAM on the Ryzen, while the OS itself takes about 2 GB, for a total of 5.7 GB while rendering. As you can see, in this test the VRAM/RAM ratio is 2.4/3.7. So I guess in your example there's something odd; maybe you used a lot of instances without instance optimization, or something else was bloating the RAM usage. Anyway, since this is a real example with common assets, I guess it's proof enough that for a medium scene the VRAM/RAM ratio can be 1/2. That is, RAM = 2 x VRAM works fine if you optimize the scene.

    Below are the screenshots showing the 1060 VRAM while rendering, then the Vega with the scene loaded, then the Vega and the 1060 with Windows only.

     

    EDIT. I did the same test with GPU-Z, which seems more accurate than Task Manager. Most values match fine. But I see 90 MB of VRAM allocated on the 1060 when not in use, right at system startup. The bus-interface and memory-controller loads are always zero though. So apart from a minimal VRAM allocation from Windows, it seems there's no polling.

    scene.jpg
    1920 x 1080 - 504K
    vega-scene.jpg
    1399 x 655 - 245K
    vega-0.jpg
    931 x 654 - 135K
    1060-0.jpg
    946 x 652 - 135K
    Post edited by Padone on
  • PadonePadone Posts: 3,481
    edited April 2019

     some testing i wanted to do before i responded this post, so here goes.

    4. and 5. I upgraded the CPU to a Ryzen 5 2400G, which has better performance. Though I believe it's not necessary for Iray GPU rendering, it can help 3Delight CPU rendering. I also added an HDD for keeping large asset collections online.

    As for my way of managing assets, it is quite simple. I keep all the assets on offline backup HDDs in their original zip format, as downloaded from the shop. When I start a new project, I unzip the assets I need into a new content folder, then switch the content folder in Daz Studio. It is really nothing complicated; it is SSD-friendly, and it helps avoid conflicts among assets simply because you have far fewer assets in the same library.

    Personally, I believe that keeping a huge collection of assets in the content folder is asking for trouble, especially without a backup. But I also agree that this is a personal choice, so I upgraded the rig to handle it.

    Post edited by Padone on
  • PadonePadone Posts: 3,481
    edited April 2019

    This is just another test I had to do: GPU temperature on long rendering sessions. Well, it turns out that the 1060 works at 100% load at just 60 degrees Celsius, and the fan speed is just 40%. This is without overclocking of course, but the GPU clock is 1847 MHz anyway, so not bad; it seems to run at full speed compared to the 1060 specs.

    In the case I mounted two front fans and one back fan, so nothing special, all standard 120 mm. I have to say I'm impressed with the result; the 1060 is practically freezing in there.

    temperature.jpg
    1149 x 643 - 229K
    Post edited by Padone on
  • PadonePadone Posts: 3,481
    edited April 2019

    UPDATE

    One issue I came across with this rig is that I couldn't have both Nvidia and AMD OpenCL together; OpenCL was only available for the last-installed driver. It seems the Nvidia and AMD drivers overwrite each other, thus disabling the other side's OpenCL. Finally I found the solution: adding the Khronos registry keys.

    https://community.amd.com/message/2909519

    For example, if you install the AMD driver last, then you have to add the Nvidia OpenCL keys.

    HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors
    "C:\Windows\System32\nvopencl64.dll"=dword:00000000
    HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Khronos\OpenCL\Vendors
    "C:\Windows\SysWOW64\nvopencl32.dll"=dword:00000000

    One interesting fact with dForce is that the Vega GPU seems to be only 2.5x slower than the 1060. That is, if the 1060 takes 10 seconds, the Vega takes 25 seconds to complete the job. Since the Vega scores about 1700 on PassMark while the 1060 scores 9000, I expected the Vega to be much slower.

    https://www.videocardbenchmark.net/gpu.php?gpu=Radeon+Vega+8&id=3895

    https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+1060&id=3548
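    To make the surprise explicit, here's the arithmetic with the scores quoted above (a rough sketch; PassMark measures general compute, not dForce specifically):

```python
vega_score, gtx1060_score = 1700, 9000          # PassMark scores quoted above
expected_slowdown = gtx1060_score / vega_score  # ratio the scores would predict
observed_slowdown = 25 / 10                     # dForce timing reported above
print(f"expected ~{expected_slowdown:.1f}x slower, observed {observed_slowdown:.1f}x")
```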

     

    EDIT. I can't believe it .. if you go to the AMD link above, you can't see the solution I reported here because my post is actually being "moderated" .. so it seems they really don't want AMD and Nvidia to work together LOL

    EDIT. Nope .. it was just "standard" moderation since I'm a new user there .. so you can see the whole solution now.

    Post edited by Padone on
  • JamesJABJamesJAB Posts: 1,760

    Honestly, if you asked me about a good $900 Daz Studio rig, I would say to get a used business-class computer with a big power supply and spend the rest on a GPU.

    Example:
    Ebay as of this posting - $350 - Dell Precision T5610 with 32GB RAM, Xeon E5-2637v2 (quad core 3.5Ghz)
    newegg - $330 - Geforce GTX 1070 (For your $750 build)
    newegg - $500 - Geforce RTX 2070 (For your $900 build)

  • PadonePadone Posts: 3,481
    edited April 2019
    JamesJAB said:

    Honestly, if you aske me about a good $900 Daz Studio rig, I would say to get a used business class computer with a big power supply and spend the rest on a GPU.

    It's just that I don't much like used stuff because, you know, components have a limited lifetime, so you never know. Also, a business-class PSU may not have PCIe connectors, and the case may not have room for extra fans and/or a gaming GeForce card. Then, old Xeons tend to have low single-thread scores, which is not good for DS. And a second GPU is needed for the viewport anyway, which is why I went for the Vega. Also, I got a 1060 because you need less RAM to manage it; it's all balanced. If you get an 8 GB card, then you also need 24 GB of RAM for rendering.

    But yes, recycling stuff is an option indeed.

    Post edited by Padone on
  • TheKDTheKD Posts: 2,674

    I have heard it's a lot less of an issue now, but I burned myself going the buy-a-used-business-class-PC-and-plan-to-upgrade-it route. The PSU had no extra connectors, and after I bought a new PSU, I found out the mobo had a proprietary PSU connector to force you to buy their overpriced PSU instead of an aftermarket one.

  • kyoto kidkyoto kid Posts: 40,576

    ...as I mentioned on another thread I have a used Titan X and 4 GB 750 Ti (both Maxwell).  They have been working fine for me.  The Titan is on the 24 GB system (won't fit in the other case).  Unfortunately this is the best I can get as I am on a fixed retirement income and thus do not have the resources for a newer more powerful system (and I don't want W10).  As I have an SSD for the boot drive, should virtual memory be required it will be much faster. 

  • PadonePadone Posts: 3,481
    edited May 2019

    UPDATE: very important note about SSD performance

    Since this rig uses a single SSD for everything, it is extremely important to keep it fast. You will find a lot of articles on the web about SSD write performance degrading over time. And it happens very quickly if you use the SSD for large data operations such as rendering animations.

    So here's a couple of links and a quick guide on what to do to keep the SSD fast.

    https://www.crucial.com/usa/en/ssd-used-to-be-faster-but-has-slowed-down

    https://www.crucial.com/usa/en/trim-and-ssd-performance-importance

    1) It is very important to disable SSD power-off in the Windows power plans, otherwise the garbage collector will never run. Standard plans are not optimized for SSDs, and Windows is dumb enough not to manage this important option itself. So in a typical balanced plan you will find the SSD turned off after 20 minutes; that's a killer for the garbage collector.

    2) A 10% over-provisioning in addition to the factory default is an extremely good thing to do; it will make the garbage collector much faster. You can use the Crucial Storage Executive utility for this.

    https://www.crucial.com/usa/en/support-storage-executive
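    As a rough sketch of what that 10% costs on this rig's 500 GB drive (illustrative only; Storage Executive handles the actual allocation):

```python
def overprovision_gb(drive_gb, fraction=0.10):
    # Extra space left unallocated so the garbage collector
    # always has free blocks to work with.
    return drive_gb * fraction

reserved = overprovision_gb(500)
print(f"reserve {reserved:.0f} GB, ~{500 - reserved:.0f} GB left usable")
```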

    3) If you, like me, never let the PC idle because you turn it off when it's not used, then at least once a week leave the PC powered on at the BIOS screen for a couple of hours, so the garbage collector can run. Also be sure to re-trim the SSD every now and then using the Windows disk optimize tool (SSD > Properties > Tools > Optimize). This defaults to running once a week when the PC is idle, but it will not run if you power off the PC.

    4) SSD resurrection. It may happen that you write and delete large amounts of data before running the garbage collector, to the point that the SSD gets extremely slow. If this happens, then running the garbage collector comes too late, because it will take forever, unless there's a large over-provisioning, that is. So, if for any reason you get stuck with your SSD performance, you have two options.

    The first option is to reset the SSD to factory state using Crucial Storage Executive; that's the Sanitize operation. But this can't be applied to the OS disk unless you do it from another PC, and in that case you will also have to reinstall Windows anyway.

    The second option is to use a Microsoft utility called SDelete. This tool will clear all the free space on your SSD; it's a sort of manual garbage collector. Be aware that it will take some hours to do its job, but your SSD will be back on track after that.

    https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete

    This command has many options; what you have to use to get the SSD clean is reported below. Again, beware that it will take some hours. But it's the only way when the SSD garbage collector gets stuck with too little over-provisioning and too much data to recycle.

    sdelete64 -z C:

    5) Trim check. The TRIM feature is very important to keep your SSD fast, so it is good to have a tool to check whether it is working properly.

    https://github.com/CyberShadow/trimcheck

    Post edited by Padone on
  • I can see that this thread was done in good faith, but there are a lot of mis-nomers and otherwise false information being given. Most have been taken care though.

    First and foremost, the $750 machine is significantly under powered- an integrated GPU doesn't have the computation power to provide reasonable render times. Plus, it's also sharing the system RAM and is heavily limited to a small amount of RAM. 

    The power/idle/trim command statements are incorrect. Windows 7 and up takes care of everything if the 'high performance' power option is set. SSDs don't need hard-drive power settings, as they have no mechanical parts; they also have their own internal energy modes. Windows 7 and up also detects most solid-state drives and enables the TRIM command automatically. This command runs when the system is idle with no user input, so there's no need to sit in the BIOS or whatever; users can simply let the computer sit there while in Windows. To make sure the command is enabled, there is a command-line check you can find via Google.

    If someone were going to experiment in DAZ to see if they liked it, I'd go with the $900 version (assuming it has a dedicated video card), as I started out with an 8-core CPU, 16 GB of RAM, and a dedicated video card. If it becomes a hobby, then 16 GB is the absolute minimum, 32 GB recommended, with a 6 GB or larger video card. And these days, people can easily meet these requirements and more.

    Also, keeping multiple libraries, each for a different project: that's asking for trouble. Keeping everything together cuts down on user error and makes things more efficient. I've got 2.6 TB of content in one place, with combined paths for specific content, and have never had an issue.

  • .. And these days, people can easily accomplish these requirements and more.

    You can if you are not doing Iray! Some users don't have deep pockets, or are retired. Unless you are making money off of this, I don't recommend spending so much to make it work with Iray. Use 3Delight or OpenGL, and Blender if you want a higher-quality render. This is a hobby for me that I play with in my spare time, which I don't have much of; I don't see spending so much on hardware every time a new feature comes out. I would strongly suggest getting Plaq Comix Life for Windows or Mac and using OpenGL or 3Delight for your renders to create your own comics. I don't use Iray much because my comic renders are mostly done in OpenGL, and if I need to, I dumb down renders using GIMP.

    You don't need an Nvidia Titan RTX and a 32-core Ryzen CPU to do 2D stills in DAZ Studio. If you need high-end hardware, it's because you are doing animation and each frame needs to be rendered. In that case I would recommend renders from either the Unity or Unreal game engine. If you are lazy like me you can go the iClone 7 route, but that is expensive; then again, all the work is done for you and you can buy the animations premade. That's if you have deep pockets.

  • Padone Posts: 3,481
    edited December 2019

    1. an integrated GPU doesn't have the computation power to provide reasonable render times

    2. The power/idle/trim command statements are incorrect. Windows 7 and up takes care of everything ..

    3. .. 16 GB is the absolute minimum, 32 Recommended with 6 or higher GB video card. And these days, people can easily accomplish these requirements and more.

    4. Also, keeping multiple libraries- each for a different project, that's asking for trouble.

    1. The integrated gpu is for the viewport, not for rendering. Both the $750 and $900 rigs include a dedicated nvidia card for rendering and iray preview.

    2. My suggestions about the SSD are correct. That said, if you use the high-performance plan, always leave your PC on, and it gets enough idle time between working sessions, plus Windows updates and maybe overnight rendering sessions, then the suggestions above may not apply to you. But if you generate high write traffic, as is common in production, and/or turn off your PC when it's not in use, then chances are you'll need the suggestions above to help your SSD stay in good shape.

    3. I agree it's time to update the rigs. I'd wait to add 20xx cards until the drivers are fixed, though.

    4. I do not agree. Most issues with the content library that you can find here in the forum would likely not arise with backed-up per-project libraries: from slow loading times to morph conflicts to duplicated formulas to library migrations.

    Post edited by Padone on
  • nonesuch00 Posts: 17,929
    edited December 2019

    Not sure if you can see these links, but these are the two cheapest builds I could get done for Blender (that aren't essentially obsolete already):

    $750 - The Cheopo AMD 3rd Generation Ryzen 5 3600 Gaming Build - https://pcpartpicker.com/user/Jack.Seas.Hobby/saved/QBzHBm

    $975 - Tug - The Upgraded Cheapo AMD 3rd Generation Ryzen 5 3600 3D Modeling Build - https://pcpartpicker.com/user/Jack.Seas.Hobby/saved/cqWk6h

    I started with the $750 build but then upgraded to the $975 one. You'll see that, compute-wise, they are the same computer; only the RAM and storage are expanded.

    Now, because I bought some of the parts in my build as Amazon returns, and because it was during the run-up to Black Friday rather than the actual Black Friday / Cyber Monday sale (when prices for most of the parts on my list actually went up, or they sold out at existing prices), what I paid was about $100 less in total than what you see.

    Two parts (the motherboard and the power supply) are currently sold out. Every part on my list has sold out at least once since the 1st of November 2019, so there are lots of people building and upgrading lots of PCs.

    The main thing to know, besides the CPU model shown in the build, is that I chose an AMD Radeon RX 570 M"k 8GB OC GPU. There are many similar models of Radeon RX 570 / 580 / 590 with 8 GB of RAM, and if you can get one of them for $150 or less I recommend one of those. If you can't, then you need to factor an Nvidia 1060 6GB GPU or better into the equation; and if you need to go over $200 to get such a card, you should step back, throw in the towel, and save for two or three months to get an Nvidia RTX 2060 6GB or Nvidia RTX 2070 8GB, as long as the price is less than $475. If you do that then of course you'll be set for both Blender and DAZ Studio. The Radeon gets you ready for Blender, Unity, and UE4 as much as the Nvidia card does, so it's only with DAZ Studio that Radeon support is lacking.

    Post edited by nonesuch00 on
  • Padone Posts: 3,481
    edited December 2019

    The Radeon gets you ready for Blender, Unity, and UE4 as much as the nVidia card does so it's only with DAZ Studio that Radeon support is lacking.

    AFAIK Cycles out-of-core rendering is only available for Nvidia cards, though. Also, your links are not visible; it says it's a private list.

    Post edited by Padone on
  • nonesuch00 Posts: 17,929
    Padone said:

    The Radeon gets you ready for Blender, Unity, and UE4 as much as the nVidia card does so it's only with DAZ Studio that Radeon support is lacking.

    AFAIK Cycles out-of-core rendering is only available for Nvidia cards, though. Also, your links are not visible; it says it's a private list.

    No, Radeon and Intel GPUs use OpenCL, which means Cycles works with them, so you are mistaken. Blender does offer direct support of CUDA (which Nvidia paid for) for Cycles on Nvidia hardware, but thanks to the OpenCL support that's not the only hardware architecture Cycles runs on. Eevee is also supported on Intel and Radeon GPUs, although the Intel GPU support is still very buggy.

    Finally, there is the AMD Radeon ProRender plugin for Blender, which was written by AMD staff and supports Radeon and other AMD GPUs directly. It comes with a decent-sized material library as well.

  • Padone said:

    1. an integrated GPU doesn't have the computation power to provide reasonable render times

    2. The power/idle/trim command statements are incorrect. Windows 7 and up takes care of everything ..

    3. .. 16 GB is the absolute minimum, 32 Recommended with 6 or higher GB video card. And these days, people can easily accomplish these requirements and more.

    4. Also, keeping multiple libraries- each for a different project, that's asking for trouble.

    1. The integrated gpu is for the viewport, not for rendering. Both the $750 and $900 rigs include a dedicated nvidia card for rendering and iray preview.

    2. My suggestions about the ssd are correct. Then if you use the high performance plan and always leave your pc on and it gets enough idle time among working sessions plus windows updates plus may be overnight rendering sessions then the suggestions above may not apply to you. But if you generate high write traffic as it's common in production, and/or turn off your pc when not used, then chances are that you'll need the suggestions above to help your ssd staying in good shape.

    3. I agree it's time to update the rigs. I'd wait adding 20xx cards until the drivers are fixed though.

    4. I do not agree. Most issues with the content library that you can find here in the forum would likely not arise with backed-up per project libraries. From slow loading times to morphs confilcts to duplicated formulas to library migrations.

    1.) Maybe in textured mode, etc., but not for Iray preview. An APU is simply too slow for anything Iray.

    2.) If you believe you are correct, then I don't think you understand how a solid-state drive works. Because if you really did, you would know that your suggestions are completely unnecessary, borderline incorrect. If the user switches to the high-performance plan, all issues with drive downtime are eliminated and the machine shifts into performance mode. But since solid-state drives don't have mechanical parts in the hard-drive sense, with ramp-up and spin-down times, the high-performance plan doesn't affect them; they have their own internal power plans that they manage themselves, and have for some time.

    As for 'good shape': simply running the manufacturer's software for optimizing the solid state is good enough. Switching on the TRIM command (which even Windows 7 does automatically when installing the OS) takes care of all garbage collection, even in the Windows environment, hence the need to stay in the BIOS for hours is unnecessary. If Windows couldn't handle the TRIM command, then Microsoft wouldn't have put a command into the system for enabling it. I suggest you research the TRIM command a bit more.

    3.) Agreed. I'm finding a 2060 actually cheaper than a 1080 ... but the drivers won't ever be 'fixed'; fixed would imply finality.

    4.) Disagree all you want; I feel you're making extra work for yourself that opens the door to massive user error. I'm curious though, how do you start projects if you are constantly switching out content? As for your slow loading times: if you have your system set to high performance, the drives never spin down. I have all my content stored on a 10 TB drive, and I see at most a 5-second lag when I'm switching back to a DAZ session after hours of doing something else and the drive spins back up.

    Some duplicated formulas happen when DUF files aren't coded right; trust me, I've found a few.

  • Padone Posts: 3,481
    edited December 2019

    1) The iray preview is always done by the nvidia card, even if you use the integrated card for the viewport.

    I'm not arguing anymore on the other points. We simply disagree.

    EDIT. Just to clarify about the TRIM command: it does nothing by itself. That is, it simply marks the blocks for deletion, but then the SSD garbage collector has to do the actual job, and if there's not enough idle time it can't. The deletion process is slow because the SSD must erase flash blocks before they can be rewritten; it's not the same as an HDD.
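To make that division of labor concrete, here is a toy Python model of the interaction (class, states, and method names are purely illustrative, not how real firmware works): trim only marks a block stale, and a separate garbage-collection pass must erase it before the block can be written again.

```python
class ToySSD:
    """Toy model of the TRIM / garbage-collection split."""

    def __init__(self, nblocks):
        self.state = ["free"] * nblocks   # free | live | stale

    def write(self, block):
        # Flash blocks must be erased (free) before they are rewritten.
        if self.state[block] != "free":
            raise RuntimeError("block must be erased before rewrite")
        self.state[block] = "live"

    def trim(self, block):
        # The OS tells the drive the data is no longer needed.
        # Nothing is erased yet - the block is only marked.
        self.state[block] = "stale"

    def garbage_collect(self):
        # Runs during idle time: erase stale blocks back to free.
        erased = self.state.count("stale")
        self.state = ["free" if s == "stale" else s for s in self.state]
        return erased

ssd = ToySSD(4)
ssd.write(0)
ssd.trim(0)                   # marked stale, but not yet writable
print(ssd.state[0])           # stale
print(ssd.garbage_collect())  # 1 block reclaimed during "idle" time
ssd.write(0)                  # now the block can be rewritten
```

With no idle time, garbage_collect never runs, stale blocks pile up, and every new write first has to wait for an erase; that's the slowdown being described.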

    Post edited by Padone on
  • TheKD Posts: 2,674

    I think people overstate the fragility of SSDs, really. I use the second partition of mine for games and as a scratch disk for programs like Photoshop. I make large paintings and am a huge fan of nondestructive editing, so my files get huge and the drive gets heavy daily use. I think SSDs got a bad rep that stuck from when they first came out.

  • Padone said:

    1) The iray preview is always done by the nvidia card, even if you use the integrated card for the viewport.

    I'm not arguing anymore on the other points. We simply disagree.

    EDIT. Just to clarify about the trim command. It does nothing by itself. That is, it simply marks the blocks for deletion, but then the ssd garbage collector has to do the job, and if there's not enough idle time it can't do it. The deletion process is slow because the ssd needs to write zeros on the blocks to delete them, it's not the same as a hdd.

    It's not that we 'disagree'; it's that, well, you're simply mistaken about how they work in conjunction with Windows 7/10 (because if it were up to the BIOS, all non-savvy users would be screwed). I double-checked my understanding and found multiple websites, including Tom's Hardware, that state the same concept I've explained. I'm not going to explain the workings; it's easier for you to google and read for yourself. But you're correct in one instance: a solid-state drive isn't the same as a hard drive. And while the zero-writing process is slow, it's not as slow as you're leading yourself to believe. An SSD's latency is < 1 ms; an HDD's latency is > 10 ms.

    TheKD is correct; solid-state drives aren't nearly as fragile as people believe, especially you, Padone. If you google 'solid state endurance test', you'll find that SSDs are rather resilient and capable. I believe the Samsung 850 Pro was able to endure nearly 2 petabytes of writes before it finally gave out.

     

  • edited December 2019

    Just go out and buy a Core i9, an Nvidia GeForce 2080, an SSD, 64 GB of RAM, and a liquid cooling system, and you'll all be fine; no more rendering problems. Trust me, my system runs fine after spending $2,350. That's what it takes to run the DAZ GUI.

    Post edited by Softimage_Graphic_Artist on
  • nonesuch00 Posts: 17,929
    Padone said:

    1) The iray preview is always done by the nvidia card, even if you use the integrated card for the viewport.

    I'm not arguing anymore on the other points. We simply disagree.

    EDIT. Just to clarify about the trim command. It does nothing by itself. That is, it simply marks the blocks for deletion, but then the ssd garbage collector has to do the job, and if there's not enough idle time it can't do it. The deletion process is slow because the ssd needs to write zeros on the blocks to delete them, it's not the same as a hdd.

    Windows 10 schedules the TRIM command to run daily on my machine. If the computer is off during the scheduled time, it simply does it during a bit of idle time when it is next on. And it does that on a clean install or upgrade; I did absolutely nothing to get that behavior. I'm welcome to run the TRIM command manually via Disk Properties > Tools > Optimize, but it's not needed.

  • rrward Posts: 556

    Back to the topic of what to buy: be wary of used business-class machines, as many of them can't take a two-slot-wide video card, nor do most of them have a PSU with the connectors or power to run one. Anything "small form factor" is an automatic no-go. Mid and full towers might work, but you'll want to check their internal specs and layout very carefully. Actually, I would avoid them completely.
