Iray: minimum card spec that's worked for you
Tobor
Posts: 2,300
So by now we have a pretty good database growing of current and modestly current nVidia cards that are able to handle Iray. I'm putting together some older machines for a photography class -- using D|S and Iray to teach virtual photography -- and I've collected up some old donated cards for testing.
Though they are nVidia cards with CUDA cores, few of them work, even with the latest 2015 driver available to them. Almost all of these are 5+ years old.
The main issue is insufficient VRAM. The card needs enough VRAM to hold the video display (these are single-card machines), and that doesn't leave much for the Iray scene. I wasn't surprised to see a 512MB card fail, but I thought a 1GB card could handle a test scene of just a camera headlamp and a square primitive. The log is only partially helpful: it does say that a card at compute capability level "1.2" does not support Iray, but I'm not sure everything that entails.
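To make the VRAM squeeze concrete, here is a back-of-the-envelope sketch of what's left for the scene on a 1GB single-card machine once the display takes its cut. All the numbers (resolution, driver overhead) are assumptions for illustration, not measured figures:

```python
# Rough sketch: VRAM left for an Iray scene on a 1 GB card that also
# drives the display. Numbers are illustrative assumptions, not measurements.

def display_vram_mb(width, height, bytes_per_pixel=4, buffers=2):
    """Approximate VRAM used just to drive a double-buffered desktop."""
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

card_vram_mb = 1024                       # a 1 GB card
display_mb = display_vram_mb(1920, 1080)  # ~16 MB for a 1080p desktop
driver_overhead_mb = 200                  # guess: driver + CUDA context overhead
left_for_scene = card_vram_mb - display_mb - driver_overhead_mb
print(round(left_for_scene), "MB left for the scene")
```

Even before textures and geometry, a big chunk of a small card is spoken for, which is why a second cheap card just for the monitor helps.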
It got me thinking what specs nVidia cards with CUDA cores have proven to work in a GPU-only test render, even if it's just a simple scene. Would be interested in your findings.

Comments
The oldest/lowest card that I know that works is a 1 GB GT 440. It has 96 CUDA cores and it can handle small scenes. It surprised me at what it can handle...and how fast it can be with the small/simple scenes.
In the case of the GeForce 220, the D|S logs show it's behind the times with a compute capability level 1.2; Iray needs 2.0+. Unfortunately, this isn't something they note on their specs page. Maybe another resource somewhere has this. Would be handy for those small scenes with Iray Draw Style if some of these piddly cards could be used.
Oops, found it:
https://developer.nvidia.com/cuda-legacy-gpus
The compute capability is listed here for a bunch of legacy cards. Except for just a couple, all are under the 2.0 minimum, and are therefore unusable for Iray. Pretty much sews that one up.
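The cutoff is easy to express as a lookup. A minimal sketch of the check, using compute-capability values as listed on NVIDIA's legacy-GPUs page (treat the table entries as illustrative and look your own card up there):

```python
# Sketch of the compute-capability cutoff Iray effectively applies.
# Values below are per NVIDIA's CUDA legacy-GPUs page; verify your own card.

COMPUTE_CAPABILITY = {
    "GeForce GT 220": 1.2,   # noted in the D|S log as unsupported
    "GeForce GTS 250": 1.1,  # too old for Iray
    "GeForce GT 440": 2.1,   # Fermi: works
    "GeForce GT 740": 3.0,   # Kepler: works
}

IRAY_MINIMUM = 2.0  # the "2.0+" requirement from the logs

def iray_capable(card):
    return COMPUTE_CAPABILITY[card] >= IRAY_MINIMUM

for card, cc in COMPUTE_CAPABILITY.items():
    print(card, cc, "->", "OK" if iray_capable(card) else "unsupported")
```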
My GTS 250, 1GB with 128 CUDA cores, is a doorstop when I try it. Yes, I'm up to date with drivers.
I've used a GT 740 for rendering - 384 cores, 4 GB Vram, priced at $100+. Works reasonably well, but don't try to do anything else on the system while rendering; the card is too busy doing the render to do trivial things like update the monitor screen for some other application.
I've used a GT 525M, 96 cores, 2GB DDR3
Mine is one of the oldest, Gt-545. 144 cuda cores, 1GB.
When a tiny scene fits, and I mean teeny tiny, it's fine. Oh, had to update to current drivers before it would work. Got it end of 2011.
(My CPU is an i5 2320 3.00GHz, 4 cores. Not fast enough for my patience limit, and I'm heading back to mostly 3Delight rendering. If I were into photoreal I'd tough it out, but I'm not, so I won't.)
I just got through installing a Titan X... I don't think I could use anything else after this. Holy CRAP!
My 2GB Radeon R270 handles Iray just fine... albeit slower than a similar NVIDIA card with CUDA cores, but still faster than I had expected.
Wait, an ATI card? I thought Iray is compatible only with NVIDIA cards?
My 1Gb GT 430 with 96 CUDA cores handles small scenes without any trouble, it can take a while but it gets there in the end.
People, the best card for budget-oriented buyers is the NVIDIA Tesla M2090 with 6GB of GDDR5 (the lowest price I have seen is $65 on eBay). This is not a video card; it has no connections to hook up your monitor. What it does have is 512 CUDA cores and 6GB of VRAM to do your Iray renders.
If you need one cheap card to do everything, I would suggest an NVIDIA GTX 780 6GB edition. I got mine refurbished from Newegg for about $360. This card has 2300 CUDA cores and is the best value for your dollar on the market for Iray renders, bar none. I have two of these 780s, and for under the price of a Titan X (3072 CUDA cores and 12GB of VRAM), I have 4600 CUDA cores doing Iray renders for me. Yes, the Titan X is more powerful, but at over $1000 it is too much money for hobby use. The other choice is an NVIDIA GTX 980 with 6GB of video RAM at about $680.
One warning: anyone going into Iray rendering will also need a good power supply for all the extra video cards you will be installing in your system.
As you probably already know, but for the sake of clarity for those who might not: two 780s with 2300 CUDA cores and 6GB VRAM each come out to 4600 CUDA cores but still only 6GB of usable VRAM. Iray will utilize and combine the cores but is incapable of combining the VRAM. If you want 12GB of VRAM you need a single card with 12GB; you can't pool VRAM across cards.
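The pooling rule above can be sketched in a few lines: cores add up across cards, but the scene must fit in each card's memory, so the usable VRAM is the smallest card's, not the sum. Card names and core counts are taken from the post above:

```python
# Minimal sketch of how Iray pools multi-GPU resources:
# CUDA cores combine across cards, but the scene must fit in EACH
# card's VRAM, so usable VRAM is the minimum, not the sum.

cards = [
    {"name": "GTX 780 6GB #1", "cores": 2300, "vram_gb": 6},
    {"name": "GTX 780 6GB #2", "cores": 2300, "vram_gb": 6},
]

total_cores = sum(c["cores"] for c in cards)    # cores combine
usable_vram = min(c["vram_gb"] for c in cards)  # VRAM does not
print(total_cores, "cores,", usable_vram, "GB usable VRAM")
```

A card whose VRAM is too small for the scene simply drops out of the render, which is why mixing a big card with a tiny one doesn't help the tiny one.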
This is correct. Iray is incapable of using ATI cards.
Incorrect; ATI users can use Iray, but the final render gets done by the processor instead of the graphics card... with a hex-core AMD FX6350 at 4GHz per core and 16GB of RAM, this is easy.
Interesting. So your card appears as one of the options you can use? My ATI card doesn't show up as one.
Tesla cards should not, according to nVidia, be mixed with GeForce due to driver conflicts - Tesla and Quadros can mix. That means the saving from the M2090 is more than likely offset by the cost of the basic Quadro display card.
I have an ATI Radeon HD 5770 with 1024MB GDDR5, and it's not an option. You can have an ATI card in your system and render with Iray, but that card is not used at any point in the render; otherwise I would see the Radeon HD 5770 in my list of compatible hardware. Iray is using the CPU and system RAM exclusively, not the card. What I said was "Iray is incapable of using ATI cards," not "you can't use Iray if you have an ATI card," which is apparently how you interpreted my statement.
I see. His comments are confusing, then. He should have phrased them differently. Here I was thinking he was using the ATI card to render.
I have an OEM-provided nVidia GeForce GTX 745 with 4GB VRAM and 384 CUDA cores. (Only available in HP and Dell computers, I think.) I tried a couple of times to use Iray in both GPU-only mode and hybrid mode, and both ways the system becomes essentially unusable until it's done rendering. Since I kind of need to use it while it's rendering, I wind up using CPU-only mode. Still pretty fast, all things considered. More complicated scenes may take a while.
You're running into the same thing I did - I had a GT 740 - 384 cores, 4 GB, as my only video card - and when it got busy the screen went massively non-responsive. As a result, I tend to recommend two (or more) cards, with a cheap one to drive the monitor(s) and not used for rendering.
...for the slight difference in cost, one would be better off going with the Maxwell 980 Ti instead of a Kepler 780 SC (Amazon price for a 6GB 780 SC is $631; eBay, $605). For about the same price (or less), one could do even better with a 6GB Titan (2688 cores), which I've seen on eBay for between $500 and $650.
I wonder if the issues reported for the 740 are due to a setting. You'd think all CUDA-based cards would max out during an Iray render, regardless of the number of cores or available VRAM.
What else is common in these cards? DDR3 as opposed to DDR5 VRAM? Installed in PCIe 2.0 x16 slots instead of 3.0?
I had meant CUDA-capable cards, of course. My old Quadro had a couple of cores but only 512MB of RAM, and I run my machine at 2K resolution. So obviously it couldn't do anything other than say "no thanks" and let the Xeons take over.
True, but an ATI video card user can use their ATI card to run their monitors and use the Tesla to do their Iray renders. 6GB of VRAM and 512 CUDA cores for under a hundred dollars is impossible to beat.