Studio crashes running multiple GPUs
I am just now trying to run multiple GPUs, thanks to the glacier-like speed of Iray renders, and have discovered that more complex scenes tend to crash when running a pair of GTX 1070s. (I pulled one from another computer for test purposes.) The same scene works OK if I turn off the second card (not uninstall it, just turn it off in Iray). I was interested in maybe buying a 1070 Ti or 1080 and returning the second 1070 to the machine I swiped it from, but I'm not sure I want to spend half a kilobuck or more if it's going to be iffy. The GTX 1070s are both ZOTAC, one full length and one stubby that clears the hard drive cage.
Has anyone else run into this glitch, or is it something weird about this box? ASUS motherboard with a Ryzen processor and 16 gigs of RAM. When it works, it's a great improvement: 40 minutes versus 75 on one scene.

Comments
Quite a few people here (myself included) are running multiple GPUs without incident. I take it you're running current drivers?
Are you running ZOTAC's performance/fan controller? There's some issue (not sure if it's an NVIDIA or Microsoft issue) with monitoring software probing the driver on post-April Update Windows 10 systems. I had problems with EVGA's Precision software crashing my system and switched to Afterburner. (I'm not overclocking my GPUs, just speeding up the fans.)
Otherwise, I don't know.
Looked at the log file, and the single-GPU render was really a CPU render (and I had CPU turned off in settings). Somehow a slight change to a scene that had worked with the GPUs inflated things. The log data is confusing, though, since it seems to show about a third of a terabyte of memory consumption, but so did the successful render. I need to learn what all this stuff in the log means: lots of "failed" this and "unsupported" that.
This wasn't the only scene that crashed, though. I do mostly scenes with lots of people and props. Wonder when NVIDIA will build a card with a terabyte of RAM? Just kidding. If the prices go up like the 20 series, it'd cost about $10K.
If you haven't gotten Scene Optimizer, I would look into it. It helps me a lot to keep scenes rendering on the GPU instead of dropping to CPU due to VRAM usage. Pretty much everything sold at Daz has textures large enough to look great at extreme close-up, but that isn't needed at a distance. The farther away from the camera something is, the more I lower its textures with the tool. Setup takes a bit longer, but the speed of using both my GPUs to render more than makes up for it. If you want a feel for the idea, see the sketch below.
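This is not the actual Scene Optimizer script, just a rough illustration of the "shrink distant textures" idea, assuming Pillow is installed and you're working on a folder of copies of the texture maps (the folder path and size cap are made up):

```python
# Rough sketch of the "shrink distant textures" idea -- NOT the Scene Optimizer
# product itself. Assumes Pillow (pip install Pillow) and a folder containing
# COPIES of the texture maps you want to reduce.
from pathlib import Path
from PIL import Image

TEXTURE_DIR = Path("D:/renders/optimized_textures")  # hypothetical folder of copies
MAX_SIZE = 1024  # cap distant objects' maps at 1K; near-camera maps stay full size

for tex in TEXTURE_DIR.glob("*.jpg"):
    with Image.open(tex) as img:
        if max(img.size) <= MAX_SIZE:
            continue  # already small enough, leave it alone
        img.thumbnail((MAX_SIZE, MAX_SIZE))  # resize in place, keeps aspect ratio
        img.save(tex, quality=90)
        print(f"{tex.name}: resized to {img.size}")
```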
That sounds like something I need. I suspect I should also make billboards of most of my trees and plants for distant shots. City scenes with several people and, for instance, some of Faverel's houses work fine, but put those same folks in an outdoor scene with lots of trees and bushes and it's straight to CPU.
I also think something has a memory leak like the Titanic leaked water, since some scenes seem to only work right on the first try but either crash or go to CPU if I attempt another render. Gotta save and then restart either Studio or the whole OS.
I'm learning a lot about the pitfalls and pratfalls of using Iray with my foray into multi-GPUs.
I've been using Iray for about a year or so and just put slow renders down to only having one video card with only 1920 cores. Never thought to check what actually happened during a render; it either worked or it didn't. Now it looks like there's a lot going on, some of which probably only programmers at NVIDIA understand. There's got to be some memory swapping going on, since even simple scenes that work fine appear to have about three times as much texture memory consumption as the total RAM in the GPUs (17.8 GiB vs 5.6 GiB), but what is the limit? (Where did this gibi/mebi stuff come from? I always thought gigabytes, megabytes, etc. were supposed to be binary. Was it just because vendors lie like politicians to make their hardware look more impressive?)
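On the GiB question, the short version is that the "gibi/mebi" prefixes are the binary ones and the vendor "giga/mega" figures are decimal, so the numbers never quite line up. A quick back-of-the-envelope check, just to show the arithmetic:

```python
# GiB vs GB: vendors quote decimal gigabytes (10**9 bytes), while most software
# reports binary gibibytes (2**30 bytes), so an "8 GB" card shows up smaller.
GB = 10**9
GiB = 2**30

card_gb = 8  # an "8 GB" GTX 1070
print(f"{card_gb} GB on the box = {card_gb * GB / GiB:.2f} GiB as reported")  # ~7.45 GiB
```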
One thing for sure, lots of greenery can toss you into CPU-only mode pretty quickly. There needs to be a warning buzzer or something to tell you that you need to simplify the scene; it's easy to miss stuff in that little fast-scrolling history (what little of it I can understand). Can you turn off the fallback so you can ditch a tree or two instead of wasting a couple of hours while it renders on the CPU, which seems to take two to three times the single-GPU render time?
It's all voodoo. Even the size you are rendering at is a factor in all of it. A scene that renders fine on the GPU at 2k x 4k might fall back to CPU if you try the same scene at a larger size. I have long since made a habit of having the log open in Notepad++ when starting a render, so I can check that it hasn't fallen to CPU and I need to tone down map sizes even more, or split the scene up into more render passes.
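If you'd rather not babysit the log in Notepad++, something like this can flag suspicious lines as they appear. The log path is the usual Windows default but verify it on your install, and the search phrases are just guesses; check what your own log actually prints when a render drops to CPU and adjust:

```python
# Minimal log watcher -- an alternative to keeping the log open in Notepad++.
# Path and phrases are assumptions: point it at your actual Daz Studio log and
# change WATCH_FOR to whatever your log prints when a device drops out.
import time
from pathlib import Path

LOG_FILE = Path.home() / "AppData/Roaming/DAZ 3D/Studio4/log.txt"  # typical location, verify on your machine
WATCH_FOR = ("cpu", "out of memory", "device failed")  # adjust to your log's wording

with LOG_FILE.open("r", errors="ignore") as log:
    log.seek(0, 2)  # jump to the end of the file; only watch new lines
    while True:
        line = log.readline()
        if not line:
            time.sleep(1.0)  # nothing new yet, wait and poll again
            continue
        if any(phrase in line.lower() for phrase in WATCH_FOR):
            print("!!!", line.strip())  # something worth aborting the render over
```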
You need to keep everything inside the video RAM of the 1070. You should know that if you are running Windows 10, it uses a large chunk of VRAM, unlike Windows 7. If you have two cards and one has more VRAM, make that the primary card and run your monitor off it. Another solution, if you have three PCIe slots, is to put a cheap video card in the first slot to run Windows and use the other two slots to render with Iray; that way Windows will not steal the VRAM from your 1070s. I would suggest a 1050 or a 1060, the cheapest you can find, as the primary desktop video card.
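If you want to see how much VRAM Windows and the desktop are already holding on each card, nvidia-smi (it ships with the NVIDIA driver) will tell you. A small wrapper, assuming nvidia-smi is on your PATH; run it before and during a render to see which card fills up first:

```python
# Query per-GPU memory use via nvidia-smi (installed with the NVIDIA driver).
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.used,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # e.g. "0, GeForce GTX 1070, 812 MiB, 8192 MiB"
```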
I see that there's a lot more to this than I realized. Kind of like buying a new car and then finding out wheels are a $100,000 option. I see why Silver Dolphin has so much hardware, but I really don't have any excuse for video cards that each cost more than a 70" UHD TV.
What the heck do folks doing animation run, or are they rendering at VGA resolution or less? I'm trying for very high res (fantasy scenes, therefore all the foliage) so it looks good on a TV. I wonder if Studio 5 or whatever will allow a render farm like some of the other programs? 10-series video cards are coming down in price, and I've got some other machines that could handle them. One even has a 1070 now, a Dell XPS with an i7 and 16 gigs of RAM, but it only has one big card slot.
The worst part about having multiple cards is that VRAM doesn't add up; to use all the cards, you have to make sure you are under the weakest card's VRAM limit. Much as I love Iray, if I was doing animation, I would not be using it. I usually render at around 3-4k for the longest side, and that takes anywhere from 20 minutes to a few hours depending on the scene lighting. I miss Mantra's speed and power; if I could find a quick, easy way to get G3 or G8 into it, it's what I would still be rendering in. It can render at obscene sizes somehow, and fast as hell. I could get scenes in there with mostly no issues, but I could never get skin and hair looking anywhere near as realistic as what I get in Iray with barely any messing about.
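To put the "doesn't add up" point in numbers (made-up figures, just to show the arithmetic):

```python
# With multiple cards the scene budget is the SMALLEST card's free memory,
# not the total across all of them. Numbers below are illustrative only.
cards_free_gib = {"GTX 1070 (full length)": 7.1, "GTX 1070 (mini)": 6.4}

pooled = sum(cards_free_gib.values())  # 13.5 GiB -- NOT what you actually get
budget = min(cards_free_gib.values())  # 6.4 GiB -- what the scene must fit into
print(f"Scene must fit in {budget} GiB to keep every card rendering")
```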
I've come across a new weirdness while testing to see what happens with Iray renders gone wrong. Some scenes I've created that worked perfectly when first built, using one or both GPUs or the CPU as directed by which boxes are checked in the render settings, go straight to the CPU if they've been saved and re-loaded, with no changes made to them in any way. Since I often save scenes in case I later want to modify them and re-render with different outfits or props, or because some character's pose or expression just doesn't look right, this is a problem: one scene that rendered in 22 minutes with both GPUs took an hour and 51 minutes rendered on the CPU. Even worse, a scene that rendered in an hour and 25 minutes on two GPUs timed out, not quite done, at nearly 7 hours on the CPU.
I'm guessing this is a Studio or Windows problem rather than an Iray one. Something to do with the way the scene is saved or reloaded that causes it to look different to the render engine.
I'm testing one now that seems to render on a single GPU at HD resolution, but before, it rendered at UHD using both GPUs with no problems. Very strange. Anyone know of a RAM cleaner for GPUs?
Something is definitely happening to memory, or something else, at startup. The scene that went to CPU when loaded and run right after starting Studio rendered OK at Ultra HD after I first did a lower-resolution render of it. Not only that, but I then loaded a second scene (without restarting Studio) that was exhibiting the same annoying tendency, and it worked at UHD.
It appears that after a successful render, the memory usage (or, I suppose, possibly some unknown configuration settings) is reset to where it needs to be, and the ensuing renders, even at max resolution, work properly.
Still a truckload of error messages in the successful renders but they run in less than half a day and look OK.