x4 PCIe 4.0 bottleneck for RTX A4000, how relevant is it?

Dear All,

I'm considering upgrading my older HP Z440 Intel rig to something AMD AM5-based. I have an RTX A4000, which I lurve. Relatively low-power, quiet, fast enough for me. The motherboard I'm looking to build my new rig on (ASRock B650M PG Riptide) has two physical x16 PCIe 4.0 slots, but only one is wired with 16 lanes; the second has 4.

My question is whether my understanding of Daz Iray rendering is correct. As I understand it, a scene is pushed to the GPU and loaded into the GPU's memory. If the scene fits, there will be minimal back and forth over the PCIe bus afterwards, just the results of each render pass. This would suggest that the performance penalty of sticking an additional RTX A4000 in the four-lane slot would be less punitive than in, say, a video-editing or gaming scenario, where lots of data crosses the PCIe bus at any given moment.

Is my (primitive) understanding correct?

Thanks.

Comments

  • Richard Haseltine Posts: 96,909

    privatepixels99 said:

    Dear All,

    I'm considering upgrading my older HP Z440 Intel rig to something AMD AM5-based. I have an RTX A4000, which I lurve. Relatively low-power, quiet, fast enough for me. The motherboard I'm looking to build my new rig on (ASRock B650M PG Riptide) has two physical x16 PCIe 4.0 slots, but only one is wired with 16 lanes; the second has 4.

    My question is whether my understanding of Daz Iray rendering is correct. As I understand it, a scene is pushed to the GPU and loaded into the GPU's memory. If the scene fits, there will be minimal back and forth over the PCIe bus afterwards, just the results of each render pass. This would suggest that the performance penalty of sticking an additional RTX A4000 in the four-lane slot would be less punitive than in, say, a video-editing or gaming scenario, where lots of data crosses the PCIe bus at any given moment.

    Is my (primitive) understanding correct?

    Thanks.

    Yes, I believe so: you would see an impact in the initial data transfer, but once the render started it should make very little difference (certainly not enough to lose the benefit of having two cards working on the render instead of one).
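
    To put rough numbers on that, here's a quick back-of-envelope model in plain Python. Nothing here is Iray-specific: the scene size, per-pass compute time, and framebuffer size are all made-up illustrative figures, and the bandwidths are theoretical PCIe 4.0 rates.

    ```python
    # Toy model of a progressive render: the scene is uploaded once, then
    # each pass only sends a framebuffer-sized result back over the bus.
    # All figures below are illustrative assumptions, not measured Iray data.

    SCENE_GB = 14.0      # one-time upload (hypothetical scene size)
    FRAME_GB = 0.033     # ~33 MB readback per pass (1080p float RGBA)
    PASS_S   = 2.0       # assumed GPU compute time per pass
    PASSES   = 500

    def total_seconds(bus_gb_per_s):
        upload = SCENE_GB / bus_gb_per_s
        passes = PASSES * (PASS_S + FRAME_GB / bus_gb_per_s)
        return upload + passes

    for slot, bandwidth in [("PCIe 4.0 x16", 32.0), ("PCIe 4.0 x4", 8.0)]:
        print(f"{slot}: {total_seconds(bandwidth) / 60:.2f} min")
    ```

    Under those assumptions the x4 card finishes a whisker behind the x16 card: roughly three seconds slower over a render lasting about a quarter of an hour.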

  • TimberWolf Posts: 236

    PCIe 4.0 can transfer a theoretical maximum of 2 GB/s per lane. I doubt you'd see that maximum in everyday work, but it won't be too far off. A 14 GB scene transferred through your slower slot (x4) would take less than 2 seconds to be pushed from RAM to VRAM. Even on a PCIe 3.0 board it would be under 4 or 5 seconds. With one card in the x16 slot the transfer would be almost instantaneous, but I doubt you'll miss 2 seconds!
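
    If you want to plug in your own scene sizes, the arithmetic is trivial to script. This uses the theoretical 2 GB/s-per-lane PCIe 4.0 figure from above; real transfers will land somewhat below it.

    ```python
    # Theoretical best-case PCIe upload times for a given scene size.
    # PCIe 4.0 moves roughly 2 GB/s per lane; PCIe 3.0 roughly half that.

    SCENE_GB = 14.0

    slots = {
        "PCIe 4.0 x16": 16 * 2.0,   # GB/s
        "PCIe 4.0 x4":   4 * 2.0,
        "PCIe 3.0 x4":   4 * 1.0,
    }

    for slot, gb_per_s in slots.items():
        print(f"{slot}: {SCENE_GB / gb_per_s:.2f} s to upload {SCENE_GB:.0f} GB")
    ```

    That works out to about 0.44 s, 1.75 s, and 3.5 s respectively: best-case figures, but the right order of magnitude.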

    You will notice essentially zero difference using a second card in a slower slot, or even a previous-generation motherboard. Unless you are a droid, in which case all bets are off.

    The better your processor, the faster the scene will be assembled in system memory before it's drop-kicked over to the GPU. That's where you'll see a performance increase, one which will more than negate the eye-blink time loss of adding another card.

    Just be aware that although the A-series cards operate at lower temperatures than their gaming equivalents, the blowers do need good airflow and can't be butted right up against each other. Check that your motherboard's slots have enough clearance for a case fan to blow through them. Yours are single-slot cards, so this hopefully shouldn't be an issue; ours are dual-slot and it took a while to pin down a motherboard that would work.

  • Jay Versluis Posts: 245

    I agree with TimberWolf 100%.

    Something else to keep in mind with regard to file transfers: even the fastest slot takes a while to move lots of smaller files. Textures are large, but a scene is usually composed of many other smaller files too (lights, geometry, etc.), so you'll never get that onto the GPU in under 10 seconds, regardless of architecture. Even a sphere without textures takes a few seconds to get going.
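
    A toy model makes the point: once a fixed per-transfer overhead is in play, bus width stops being the limiting factor. The file count, average size, and 1 ms overhead below are illustrative guesses, not measured Iray values.

    ```python
    # Why many small uploads don't scale with bus width: each transfer pays
    # a fixed setup cost before bandwidth even enters the picture.
    # All constants are illustrative assumptions, not measured figures.

    FILES      = 5000     # lights, geometry, shaders, small maps...
    AVG_MB     = 2.0      # assumed average asset size
    OVERHEAD_S = 0.001    # assumed fixed cost per transfer (1 ms)

    for slot, gb_per_s in [("PCIe 4.0 x16", 32.0), ("PCIe 4.0 x4", 8.0)]:
        bandwidth_time = FILES * AVG_MB / 1024 / gb_per_s
        overhead_time  = FILES * OVERHEAD_S
        print(f"{slot}: {bandwidth_time + overhead_time:.1f} s "
              f"({overhead_time:.1f} s of that is fixed overhead)")
    ```

    With those assumptions the fixed overhead dwarfs the raw transfer time on both slots, so quadrupling the lane count barely moves the total.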

  • Wow! Daz community royalty chiming in on my question, I'm honoured :)

    Thanks for your input, much appreciated.

  • crosswind Posts: 4,777

    Jay and TimberWolf, long time no see! ;) I also agree with you, BTW.

  • TimberWolf said:

    Just be aware that although the A-series cards operate at lower temperatures than their gaming equivalents, the blowers do need good airflow and can't be butted right up against each other. Check that your motherboard's slots have enough clearance for a case fan to blow through them. Yours are single-slot cards, so this hopefully shouldn't be an issue; ours are dual-slot and it took a while to pin down a motherboard that would work.

    Just a bit of a correction here.

    Blower-style GPUs are designed to be butted up against each other. The reason is that the fan pulls air in at the rear of the card and blows it out the front, rather than pulling from the side and dumping most of the heat into the case.

    The problem with using 'professional' GPUs in consumer cases (the motherboard is basically irrelevant) has more to do with case design (specifically airflow characteristics) and intake fans. In servers and workstations, we generally use much higher-RPM intake fans than consumer builds can achieve (6,000-15,000 RPM vs 500-2,000 RPM, respectively), use ducting to direct airflow (assisted by the GPU's support plate in many cases), often have fans closer to the area they are intended to cool (in my T5810 the intake fans sit just a support plate's width, plus a couple of millimetres of ducting, from the back of my GPUs), and don't have obstructions such as drive cages.

  • TimberWolf Posts: 236

    You are absolutely correct, but I suspect very few people here (including my little company, which uses Studio professionally) have a proper rack setup. If you jam two Quadro-series cards up against each other on a consumer board in a consumer case they do, in my experience anyway, get hotter than you might wish. There are a few motherboard manufacturers (Asus and MSI spring to mind) which offer boards with fairly substantial spacing between the slots. For most people I wouldn't say the motherboard was irrelevant: that spacing can be an issue, even with blowers. I take your point, though, so if you are one of those people with a properly ducted server case, commercial-grade high-RPM fans, and no hard drives, then you can safely ignore what I wrote.

    Otherwise, you might want to look at it if you want to run a couple of cards, and *especially* if you want to run a couple of gaming cards with side-mounted fans, as DM points out.

  • crosswind Posts: 4,777

    Agreed... I still have my Gigabyte TRX40 AORUS XTREME motherboard, which used to hold 4 cards for me... Though I had built a pretty good cooling/ventilation setup inside a big Corsair full tower, it still ran hot in the case when rendering. I ended up dropping to 3 cards: two in the PCIe slots and one mounted vertically.
