PSA: "Out of Memory" Error In Windows and A Free Solution

L'Adair Posts: 9,479
edited August 2017 in The Commons

I brought this up in my Kitchen thread over in the Art Studio, but I thought I'd post it here as well, so more people will see it.

While working on my latest render with LAMH presets on both a squirrel and a house cat, I had a lot of Daz Studio crashes, usually preceded by a Windows "out of memory" error. Just attempting to convert the squirrel's hair to fiber mesh with the house cat's fur already converted crashed DS. To get the effect I wanted, I had to render with the cat, delete the LAMH fur, and then prepare and spot render with the squirrel.

I was getting that out of memory error with only 62% of my RAM used. So I googled it. And what I found could be helpful to a lot of people using Daz Studio, especially those on Windows 10, which by default uses a smaller page file than previous versions. You can always add more RAM, of course, but increasing the Page File won't cost you any money.


If you get Out of Memory errors from Windows while working in DS, you can (and should, if DS is your only open program) increase your "Page File." I made screenshots using this computer, which is Win7 Pro, but the dialogs are the same or very similar. From the Control Panel, click on System, then Advanced System Settings (in the left column). The System Properties dialog will open.

[Screenshot: System Properties dialog, Advanced tab]

If you are in Windows 10 and having trouble finding the Control Panel, open your Settings from the main menu and do a search for Control Panel. (That's how I found it, anyway, in Win10Pro! Why does MS think they need to hide everything from us...? Rhetorical question...)

In System Properties, go to the Advanced tab and click on the Settings button under Performance.

[Screenshot: Performance Options dialog, Advanced tab]

In the Performance Options dialog, click on the Advanced tab. It displays your "Total paging file size for all drives" under Virtual Memory. To change the page file size, click the Change button and the Virtual Memory dialog will open.

[Screenshot: Virtual Memory dialog]

In the Virtual Memory dialog you can see all of your hard drives and their paging file sizes. As you can see from this screenshot, I have this computer set to let Windows manage the page file on my D drive, and I've never had any issues with the high paging file on this machine. On the Win 10 computer, my C drive had the settings you see entered into the Custom Size fields in the dialog above. (Well, it was 1600-something for the Maximum, anyway. Memory is a bit fuzzy, even if I did do this just last night.) I selected my second drive and set the Custom Size to the same values as my C drive.

After you're happy with the settings on your drive(s), click on OK. I don't recall rebooting, but you might have to reboot your computer.

There is a lot of information out there about why your computer reports only 60-some percent of memory used while you're getting the error. If you're like me, you don't really care about the why, just how to put a stop to it, so you don't lose any of your work in Daz Studio.
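
For anyone curious about the "why" anyway: Windows raises this error when it runs out of commit (physical RAM plus page file), which can happen well before physical RAM itself is full. If you're comfortable running a few lines of Python, the following read-only sketch uses the standard Win32 GlobalMemoryStatusEx call to show both numbers side by side; it changes nothing on your system, and it is only an illustration, not part of the page file procedure above.

    # Read-only sketch: compare physical RAM use with the Windows commit limit
    # (RAM + page file), which is what actually triggers the out-of-memory warning
    # even when physical RAM is only ~60% used. Windows only.
    import ctypes
    from ctypes import wintypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),          # % of physical RAM in use
            ("ullTotalPhys", ctypes.c_ulonglong),      # installed RAM
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),  # commit limit: RAM + page file
            ("ullAvailPageFile", ctypes.c_ulonglong),  # commit still available
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    def gb(n_bytes):
        return n_bytes / 1024**3

    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))

    print(f"Physical RAM in use: {stat.dwMemoryLoad}%")
    print(f"Installed RAM      : {gb(stat.ullTotalPhys):.1f} GB")
    print(f"Commit limit       : {gb(stat.ullTotalPageFile):.1f} GB (RAM + page file)")
    print(f"Commit available   : {gb(stat.ullAvailPageFile):.1f} GB")

When "Commit available" approaches zero while the RAM percentage still looks comfortable, you get exactly the error described above; enlarging the page file raises the commit limit.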

I hope some of you find this information helpful.

 


Comments

  • Regarding Microsoft "hiding" Control Panel on Windows 10: the truth is that they are trying to phase it out in favor of their new "Settings" application. A number of the applets in the current Windows 10 CP actually redirect you to the Settings page instead of giving you the one you asked for.

  • L'Adair Posts: 9,479

    I'm not really a fan of Win10. But it was the only OS offered by the company that built my render machine, and it does allow for a lot more RAM. My motherboard can hold up to 128GB. I paid a premium to get 32GB in two modules instead of four when I had it built. Did so again yesterday, when I bought another 32GB. Now if the nVidia cards ever go back to normal pricing, I'll be getting a second GTX 1080 for the beast.

  • jakiblue Posts: 7,281

    L'Adair, I'm not very tech-minded, so I was wondering - how do you know what SIZE to put in as the maximum? Do you just guesstimate some random figure, or do you work it out as a percentage of "space available" or something?

  • L'Adair Posts: 9,479
    jakiblue said:

    L'Adair, I'm not very tech-minded, so I was wondering - how do you know what SIZE to put in as the maximum? Do you just guesstimate some random figure, or do you work it out as a percentage of "space available" or something?

    I really don't know the answer to that. If you have more than one drive, you could set it so Windows manages the page file on two of the drives. With two drives in the mix, Windows should (according to the one article I read that made sense to me) use more of your actual RAM.

    If you're using Windows 10 and have only one drive, try setting the maximum to 1.5 to 2 times the number Windows allocated. My C drive is a 512GB SSD, and Windows seemed to think less than 1700MB was sufficient. Really, MS? Just 1% of my drive would be about 5,120MB... Maybe that's a good starting point: 1% of the drive capacity for the Maximum setting.
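
    To put rough numbers on those two heuristics (1.5 to 2 times whatever Windows allocated, or 1% of drive capacity), here is a minimal Python sketch. These are only the rules of thumb suggested in this thread, not official guidance, and the 1700MB figure is just the example from this post, so substitute whatever your own Virtual Memory dialog shows.

        # Rough page-file sizing sketch using the heuristics suggested above.
        # Not official guidance; substitute the value your Virtual Memory dialog shows.
        import shutil

        windows_allocated_mb = 1700                    # example value from this post
        drive_total_bytes, _, _ = shutil.disk_usage("C:\\")

        print(f"1.5x current allocation: {windows_allocated_mb * 1.5:,.0f} MB")
        print(f"2.0x current allocation: {windows_allocated_mb * 2.0:,.0f} MB")
        print(f"1% of drive capacity   : {drive_total_bytes * 0.01 / 1024**2:,.0f} MB")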

  • Kendall Sears Posts: 2,995

    The swapfile settings are dependent on your needs and OS.  For "normal" Windows users the rule-of-thumb is 1/2 the amount of your system RAM.  For heavy resource users 1.5x system RAM is recommended.  One could monitor average use and then allocate from there, but that is more work than many want to do.

    Kendall
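
    Worked out for a 32GB machine like the ones discussed in this thread, that rule of thumb gives the following (a trivial sketch; adjust ram_gb for your own system):

        # Kendall's rule of thumb, worked for a 32 GB machine. Adjust ram_gb for yours.
        ram_gb = 32                                              # installed system RAM
        print(f"Typical desktop use: {0.5 * ram_gb:.0f} GB page file")   # 1/2 of RAM
        print(f"Heavy resource use : {1.5 * ram_gb:.0f} GB page file")   # 1.5x RAM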

  • frank0314 Posts: 14,704

    how much system RAM do you have?

  • L'Adair Posts: 9,479
    edited August 2017

    Kendall Sears said:

    The swapfile settings are dependent on your needs and OS.  For "normal" Windows users the rule-of-thumb is 1/2 the amount of your system RAM.  For heavy resource users 1.5x system RAM is recommended.  One could monitor average use and then allocate from there, but that is more work than many want to do.

    Kendall

    Thank you, Kendall! Good to have someone who knows what he's talking about chime in.

    So on my Win10 machine with 32GB of RAM, it should have a maximum setting of 16000-ish? (No wonder the guy in the article was saying Win10 went to a smaller page file, it's about 1/10th that.) I essentially doubled the settings by using the same on the second drive, and that's still only 1/10th of the RAM.

    I've read that SSD's don't last as long as magnetic drives, the number of writes to the drive being the contributing factor. Is it possible the software recognizes the C drive is an SSD, and therefore uses a smaller page file? Do you think it might be a better idea to set the C drive to none and keep the page file on the second, magnetic drive?

  • Taoz Posts: 10,232

    If you have so much RAM that you'll probably never hit the limit, you shouldn't really need a large swap file; the way Windows memory management works, a big one will often just slow things down. The system needs a small amount of virtual memory (swap file), but not very much - 100 MB may be enough. Last time I restored my system from an image, I forgot that my swap drive had changed drive letter, and since I only have a 100 MB swap file on the system drive, that was all that was available. It worked fine until I opened a lot of tabs in IE, which used a lot of RAM; then it started complaining about low memory.

    If you want to know the minimum needed to avoid low memory warnings or system crashes, set it to e.g. 1 GB initial size and let Windows manage the size; it will then expand the file automatically whenever necessary. Check after a month of average use and see how big it is.
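
    If you go that route and want to check the size without digging through the dialogs, the third-party psutil package (pip install psutil; it isn't mentioned in this thread, but it makes the check easy) can report it. Note that on Windows psutil derives these figures from the commit limit and commit charge, so treat them as an approximation of the page file's size and use rather than an exact byte count.

        # Quick check of page-file size/usage with psutil (pip install psutil).
        import psutil

        swap = psutil.swap_memory()
        print(f"Page file total: {swap.total / 1024**3:.1f} GB")
        print(f"Page file used : {swap.used / 1024**3:.1f} GB ({swap.percent}%)")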

     

     

  • SixDs Posts: 2,384

    Personally, L'Adair, I configure my SSD to minimize the number of necessary writes for the reason you have given. So, in my case, no paging file on the boot drive (SSD). Some will claim that this is unnecessary as the SSD will probably be replaced before the NAND gives out, but that depends on how much of the SSD's capacity is in use and therefore how much overprovisioning is possible. On some of my machines I have even created a separate swap partition to hold nothing but the paging file so I know it will always be available. (Often those misleading "out of memory" error messages can be triggered if the drive in question becomes filled and there is simply not enough free space left to accommodate the specified paging file.) In fact, I have even gone to the length of having a small, fast hard drive in my main workstation dedicated entirely to the paging file, but that may be a tad over the top for most.

    As for hard drives lasting longer than SSDs, that may or may not be the case. Hard drives are subject to mechanical failure, while SSDs are not. But hard drives do not suffer from the degradation of the media from repeated writes to individual NAND cells as SSDs do.

  • Kendall Sears Posts: 2,995
    edited August 2017
    L'Adair said:

    Thank you, Kendall! Good to have someone who knows what he's talking about chime in.

    So on my Win10 machine with 32GB of RAM, it should have a maximum setting of 16000-ish? (No wonder the guy in the article was saying Win10 went to a smaller page file, it's about 1/10th that.) I essentially doubled the settings by using the same on the second drive, and that's still only 1/10th of the RAM.

    I've read that SSD's don't last as long as magnetic drives, the number of writes to the drive being the contributing factor. Is it possible the software recognizes the C drive is an SSD, and therefore uses a smaller page file? Do you think it might be a better idea to set the C drive to none and keep the page file on the second, magnetic drive?

    If your system has 32GB of RAM then you shouldn't be swapping much at all.  A swap file will still be used for out-of-scope storage, but unless you're really pushing hard you should not be swapping with any regularity.  At the RAM levels you're talking about, you should monitor the resource usage using any number of tools and then allocate based on the results.

    SSD's do have a write/erase cycle limit, and under normal cases an average user would never hit them.  Putting swap onto an SSD is not normally recommended due to the amount of cycles used AND the fact that SSD's have relatively large requirements for changing.  In order to change a single cell a large block must be stored/copied into buffer, erased, the change made, and then the whole block re-written.  How big this block is depends on the specific drive, but can be as small as 256KB and as large as 4MB.  So to change a couple of bytes you have to manipulate a very large area.

    Kendall
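
    One simple way to do that monitoring is to log RAM and page-file use at an interval and look at the peaks after a few days or weeks. Below is a minimal sketch using the third-party psutil package (pip install psutil); the file name and one-minute interval are arbitrary choices, and you stop it with Ctrl+C.

        # Minimal memory logger: append RAM and page-file usage to a CSV once a minute
        # so the page file can be sized from real peaks instead of guesses.
        import time
        import psutil

        LOG_FILE = "memory_log.csv"      # arbitrary file name
        INTERVAL_SECONDS = 60            # arbitrary sampling interval

        with open(LOG_FILE, "a") as log:
            log.write("timestamp,ram_used_gb,ram_percent,swap_used_gb,swap_percent\n")
            while True:
                ram = psutil.virtual_memory()
                swap = psutil.swap_memory()
                log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')},"
                          f"{ram.used / 1024**3:.2f},{ram.percent},"
                          f"{swap.used / 1024**3:.2f},{swap.percent}\n")
                log.flush()
                time.sleep(INTERVAL_SECONDS)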

  • L'Adair Posts: 9,479
    frank0314 said:

    how much system RAM do you have?

    I have 32GB, soon to be 64GB. I bought the beast last October specifically for 3D graphics, with the intent of adding another nVidia 1080 and more RAM. I wasn't expecting to get more RAM first, but the cryptominer issue has made buying another 1080 a bit too expensive for the time being.

    SixDs said:

    Personally, L'Adair, I configure my SSD to minimize the number of necessary writes for the reason you have given. So, in my case, no paging file on the boot drive (SSD). (snip)

    I changed the Page File last night, using only the hard drive. I'm pretty sure I made it too big, though. The machine slowed down noticeably! lol

    Kendall Sears said:

    If your system has 32GB of RAM then you shouldn't be swapping much at all.  A swap file will still be used for out-of-scope storage, but unless you're really pushing hard you should not be swapping with any regularity. (snip)

    The Page File on the SSD is how the computer was configured by the company that built it, CyberSourcePC. Prior to trying to use two animals with LAMH converted to Fiber Mesh hair, I'd only seen the Out of Memory error once or twice. When I doubled the Page File by duplicating the values on the second drive, I was able to convert both presets to Fiber Mesh and render without an error. At this point, I'm thinking I can get away with a single Page File on the hard drive that's about double what the system-managed size was for the SSD.

    I really appreciate so many posters coming in to clarify things, for me and anyone else watching this thread.

    It's been nearly two decades since I worked in electronics, and longer still since I built my own computers. The tech has grown and evolved so much since then, I feel like I know next to nothing about computers anymore. Just as well, for the most part. I'd really rather concentrate on the art these days. But every now and then, that ignorance, er... lack of keeping up, comes back to bite me.

  • Oso3D Posts: 15,084

    I increased my page file to 16 GB, and it promptly made Daz unusably sluggish. I have no idea what to set it to, so I set it back and will just have to cross my fingers. :/

     

  • Kendall Sears Posts: 2,995
    edited August 2017

    Oso3D said:

    I increased my page file to 16 GB, and it promptly made Daz unusably sluggish. I have no idea what to set it to, so I set it back and will just have to cross my fingers. :/

    Just an FYI for folks... don't just go into the settings and increase your swap size.  Doing so will almost guarantee that you end up with a fragmented swapfile and things will go south very quickly.  If you feel that you need to make a larger swapfile, FIRST you will need to remove the first one by setting the swap to 0.  After that completes, the swapfile will still exist but will have reduced its size.  At this point, set the settings to "No paging file".  THEN defragment your drive until you get fragmentation to the minimum amount you can (multiple runs after reboots may be necessary).  Then you can create a larger swapfile that is contiguous.

    A fragmented swapfile is murder on performance.

    EDIT:  I usually create a separate drive partition or drive (depending on the system needs) for my swap regardless of OS.  On my big Linux servers I have a 72GB SAS HDD allocated solely for swap.

    Kendall
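
    If you want to confirm what is actually configured before and after going through those steps, the page-file entries live in the registry and can be read without changing anything. A read-only sketch follows; the key path and value name are the standard Memory Management ones, but double-check against your own registry, and note that each entry is "path initial_MB maximum_MB", where "0 0" means a system-managed size.

        # Read-only check of configured page files from the Windows registry.
        import winreg

        KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

        for entry in paging_files:       # REG_MULTI_SZ arrives as a list of strings
            print(entry or "(empty entry)")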

  • L'Adair Posts: 9,479

    Kendall Sears said:

    Just an FYI for folks... don't just go into the settings and increase your swap size.  Doing so will almost guarantee that you end up with a fragmented swapfile and things will go south very quickly. (snip)

    That makes a lot of sense, but with one possible caveat...? I've read you should not defrag your SSD drives...

  • Kendall Sears Posts: 2,995
    edited August 2017
    L'Adair said:

    That makes a lot of sense, but with one possible caveat...? I've read you should not defrag your SSD drives...

    You should not use a "defrag program" on SSD's.  You defrag SSD's by moving the files to another drive and then bringing them back once the drive is (mostly) empty.

    Kendall

  • Kendall Sears said:

    You should not use a "defrag program" on SSD's.  You defrag SSD's by moving the files to another drive and then bringing them back once the drive is (mostly) empty.

    Is fragmentation on SSDs a significant issue? I thought they were essentially random access.

  • Kendall Sears Posts: 2,995
    edited August 2017

    Filesystems are the problem, not the hardware.  NTFS, for instance, takes a massive hit to jump to new clusters when a file is fragmented or sparse.  There are a lot of table lookups (possibly even leading all the way back to the MFT in some instances) and possibilities for interruption during the transaction.  Swapfiles need to be as contiguous as possible to minimize possible sync problems, especially in multithreaded environments.

    EDIT:  This is especially the case on writes to swap.  If the swapfile's boundary(ies) fall in blocks shared with other files, those other files may be locked during the copy/erase/modify/rewrite cycle, possibly causing an access domino effect.  (This can happen on HDDs as well with fragmentation; these conditions may lead to a "swap storm," where the OS is so busy maintaining reads/writes to swap -- which have higher priority -- that the whole system grinds to a halt.)  This becomes a larger problem when several blocks are impacted: you're looking at possibly impacting performance on multiple 4MB cells and all other files that have pieces in those cells.

    Kendall

  • Oso3D Posts: 15,084
    edited August 2017

    Well, I've possibly really effed my system, then. Sigh

    Edit: Well, looks... ok. Did a defrag, maybe it's a little off, but not grinding...

  • Oso3D Posts: 15,084

    And now everything is hanging. So, suggestions on undoing whatever damage I did from increasing the page file?

     

  • Oso3D Posts: 15,084

    Resetting page file seems to have fixed things. Whew.

    (set to zero, reboot, defrag, turn back on, reboot)

  • Kendall Sears Posts: 2,995

    Oso3D said:

    Resetting page file seems to have fixed things. Whew.

    (set to zero, reboot, defrag, turn back on, reboot)

    Glad to hear it.  Over the years I've seen countless cases where people just increased their swapfile size without realizing that it would fragment.  Usually I get called in after they've started hanging and crashing.

    Kendall

  • Oso3D Posts: 15,084

    This is why I do as little as possible with the computer stuff beyond things I use. People think I'm stupid for not, say, building my machine, but I've also not had stuff just catch fire since I stopped trying to do it myself.

    (Well, ok, that computer didn't catch fire, it just never worked again after I tried to upgrade it)

    More memory and MAYBE swapping video cards, that's it.

     

  • Kendall Sears said:

    Filesystems are the problem, not the hardware.  NTFS, for instance, takes a massive hit to jump to new clusters when a file is fragmented or sparse. (snip)

    I see, thanks for the explanation.
