Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • System Configuration
    System/Motherboard: DoradoOC-AMP (Z490 / PCIe 3)
    CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
    GPU: Nvidia RTX 3090
    System Memory: HyperX XMP RGB 32 GB DDR4 @ 3200 MHz
    OS Drive: WD_BLACK SSD 1 TB PCIe NVMe TLC (M.2 2280)
    Asset Drive: Seagate IronWolf 10 TB (SATA 6 Gbit/s, 7,200 rpm)
    Operating System: Windows 10 64-bit 20H2
    Nvidia Drivers Version: 471.96
    Daz Studio Version: 4.15.0.30 Public Build

    Benchmark Results

    2021-09-10 19:04:44.381 Total Rendering Time: 1 minutes 36.43 seconds
    2021-09-10 19:05:09.890 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-09-10 19:05:09.890 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.470s init, 92.906s render

    Iteration Rate: (1800 / 92.906) = 19.37 iterations per second
    Loading Time: (1*60 + 36.43) - 92.906 = 3.524 seconds
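
    For anyone checking the arithmetic, here is a minimal Python sketch of how those two figures follow from the log excerpt above (the values are copied from that log; this is just an illustration, not an official Daz tool):

    import re

    # Values taken from the Daz Studio log excerpt above.
    total_render_time_s = 1 * 60 + 36.43   # "Total Rendering Time: 1 minutes 36.43 seconds"
    log_line = ("CUDA device 0 (NVIDIA GeForce RTX 3090): "
                "1800 iterations, 1.470s init, 92.906s render")

    # Pull the iteration count and per-device render time out of the log line.
    match = re.search(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render", log_line)
    iterations = int(match.group(1))
    device_render_s = float(match.group(2))

    iteration_rate = iterations / device_render_s          # 1800 / 92.906 ≈ 19.37 it/s
    loading_time = total_render_time_s - device_render_s   # 96.43 - 92.906 ≈ 3.52 s

    print(f"Iteration Rate: {iteration_rate:.2f} iterations per second")
    print(f"Loading Time:   {loading_time:.3f} seconds")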

  • Mark_e593e0a5 Posts: 1,105
    edited September 10

    System Configuration
    System/Motherboard: Apple iMac Pro
    CPU: Intel(R) Xeon(R) W-2150B CPU @ 3.00GHz
    GPU: Gigabyte Aorus RTX3090 Gaming Box (Thunderbolt 3 eGPU)
    System Memory: Built-in 32 GB 2666 MHz DDR4 ECC
    OS Drive: Apple SSD AP2048
    Asset Drive: Network Drive - NAS Synology DS920+
    Operating System: Apple Bootcamp, Windows 10 Pro 21H1, Build 19043.1165
    Nvidia Drivers Version: Studio 471.68
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: default

    Benchmark Results
    Total Rendering Time: 1 minutes 40.93 seconds
    CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 2.164s init, 95.411s render
    Iteration Rate: (1800 / 95.411) = 18.87 iterations per second
    Loading Time: 5.519 seconds

  • KCMustang Posts: 31
    edited September 14

    System Configuration
    System/Motherboard: Infinity W5-11R7N Laptop PCIe 4
    CPU: Intel(R) Core(TM) i7-11800H CPU @ 2.30GHz
    GPU: Nvidia GeForce RTX3070 Laptop 8GB Max-P
    System Memory: 64 GB 3200 MHz DDR4
    OS Drive: Samsung SSD 980 Pro 1TB
    Asset Drive: Same
    Operating System: Microsoft Windows 10 Home (x64) Build 19042.1165
    Nvidia Drivers Version: 471.96
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: N/A

    Benchmark Results
    Total Rendering Time: 2 minutes 41.27 seconds
    CUDA device 0 (NVIDIA GeForce RTX 3070 Laptop GPU):      1800 iterations, 2.189s init, 157.186s render
    Iteration Rate: 11.16 iterations per second
    Loading Time: 4.084 seconds

     

  • KCMustang Posts: 31
    edited September 14

    And I ran this on my 4 year-old laptop out of curiosity:

    System Configuration
    System/Motherboard: Acer Aspire F5-573G (laptop)
    CPU: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
    GPU: Nvidia GeForce 940MX 4GB
    System Memory: 16 GB 2133 MHz DDR4
    OS Drive: Crucial M4 SSD 128GB
    Asset Drive:  Same
    Operating System: Microsoft Windows 10 Home (x64) Build 19042.1165
    Nvidia Drivers Version: 471.11
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: N/A

    Benchmark Results
    Total Rendering Time: 1 hours 4 minutes 49.37 seconds
    CUDA device 0 (NVIDIA GeForce 940MX): 1800 iterations, 7.883s init, 3878.309s render
    Iteration Rate: 0.463 iterations per second
    Loading Time: 11.061 seconds

     

  • RayDAnt Posts: 914

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    System Configuration
    System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    CPU: Intel 8700K @ stock (MCE enabled)
    GPU: Nvidia Titan RTX @ stock (custom watercooled)
    GPU: Nvidia RTX A5000 @ stock
    System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000 MHz
    OS Drive: Samsung 980 Pro 512GB NVMe SSD
    Asset Drive: Sandisk Extreme Portable SSD 1TB
    Operating System: Windows 10 Pro version 21H1 build 19043
    Nvidia Drivers Version: 471.96 GRD
    Daz Studio Version: 4.15.0.30 64-bit
    Optix Prime Acceleration: N/A


    Titan RTX Solo Results

    Benchmark Results: Titan RTX (WDDM driver mode - used as display device)
    Total Rendering Time: 3 minutes 19.48 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 1.901s init, 196.068s render
    Iteration Rate: 9.180 iterations per second
    Loading Time: 3.412 seconds

    Benchmark Results: Titan RTX (WDDM driver mode) 
    Total Rendering Time: 3 minutes 17.94 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 2.058s init, 193.673s render
    Iteration Rate: 9.294 iterations per second
    Loading Time: 4.267 seconds

    Benchmark Results: Titan RTX (TCC driver mode)
    Total Rendering Time: 3 minutes 15.32 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 2.615s init, 190.427s render
    Iteration Rate: 9.452 iterations per second
    Loading Time: 4.893 seconds

     

    RTX A5000 Solo Results

    Benchmark Results: RTX A5000 (WDDM driver mode - used as display device)
    Total Rendering Time: 1 minutes 53.69 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 1.941s init, 110.225s render
    Iteration Rate: 16.330 iterations per second
    Loading Time: 3.465 seconds

    Benchmark Results: RTX A5000 (WDDM driver mode) 
    Total Rendering Time: 1 minutes 52.44 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 1.993s init, 108.904s render
    Iteration Rate: 16.528 iterations per second
    Loading Time: 3.536 seconds

    Benchmark Results: RTX A5000 (TCC driver mode)
    Total Rendering Time: 1 minutes 49.37 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 2.068s init, 105.757s render
    Iteration Rate: 17.020 iterations per second
    Loading Time: 3.613 seconds

     

    Titan RTX + RTX A5000 Combo Results:

    Benchmark Results: Titan RTX + RTX A5000 (both WDDM driver mode)
    Total Rendering Time: 1 minutes 15.7 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1160 iterations, 1.950s init, 71.427s render
    CUDA device 1 (NVIDIA TITAN RTX): 640 iterations, 2.065s init, 71.389s render
    Iteration Rate: 25.201 iterations per second
    Loading Time: 4.273 seconds

    Benchmark Results: Titan RTX + RTX A5000 (both TCC driver mode)
    Total Rendering Time: 1 minutes 12.96 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1160 iterations, 1.968s init, 69.293s render
    CUDA device 1 (NVIDIA TITAN RTX): 640 iterations, 1.977s init, 69.366s render
    Iteration Rate: 25.949 iterations per second
    Loading Time: 3.594 seconds
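
    For reference, the combined iteration rates above appear to follow the same convention as the single-GPU runs - total iterations divided by the render time of the slower device. A quick sanity check (not an official formula), using the WDDM combo figures quoted above:

    # Values copied from the WDDM combo run above.
    iterations_total = 1160 + 640              # RTX A5000 + Titan RTX
    slowest_render_s = max(71.427, 71.389)     # the render finishes when the slower device does
    print(iterations_total / slowest_render_s) # ≈ 25.20, matching the reported 25.201 iterations per second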


    Conclusions:

    Interesting to note that the Titan RTX looks to be almost a perfect performance match for the A4000 these days...

    As indicated, all the above benchmarks were done with the A5000 in its default air-cooled configuration. I'm actually quite happy with how well the blower design performs even in the restrictive environment the Tower 900 presents for air-cooling (it simply isn't designed for that). But I do plan to convert it over to watercooling eventually, since there is clearly some additional performance to be had out of it. The way to tell whether operating temperature is adversely affecting a GPU's performance is to watch its frequency stats during a render and note whether the clock immediately shoots up to a high number and stays there (no thermal limiting) or fluctuates over time.
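
    If anyone wants to log exactly that while a benchmark renders, here is a minimal Python sketch using nvidia-smi's query mode (assuming nvidia-smi is on the PATH; the once-per-second, 60-sample loop is just an example):

    import subprocess, time

    # Poll SM clock and core temperature once per second (60 samples) during a render.
    # A clock that jumps straight to its boost value and stays flat suggests no thermal
    # limiting; a clock that sags while the temperature climbs suggests throttling.
    for _ in range(60):
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,clocks.sm,temperature.gpu",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout.strip())
        time.sleep(1)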

    It's also worth mentioning imo that these cards are each operating over a mere 8 lanes of PCI-E 3.0. So if there were any lingering doubts about how little latest-gen/high-PCI-E-lane-count motherboards matter for Iray-type rendering systems... you really are better off putting the money into more RAM or a faster GPU instead.

  • outrider42 Posts: 2,964

    Unless the card is running on the high side, I really wouldn't bother water cooling it; I think any performance gain would be pretty small in the scope of things. 

    It is pretty cool to see just how much faster the A5000 is over the last generation Titan. It is not even close. For those wondering, the A5000 is similar to a 3080, with just a few fewer CUDA cores, but 24GB of VRAM.

    The A5000 has 8192 CUDA cores, while the 3080 has 8704. 

  • Hey, congratz on your RTX A5000 plunge!

    Huh, good to know that PCI3 vs 4 has basically no diff.
    When I get re-focussed in next month or two, will test that out too.
    Will be interesting to see if PCI4 will make a diff at all in this next hardware lifecycle.

    Will for sure check GPU frequency when I test then as well.
    Will keep your comment in mind.

    If you do add water, would be interested to hear what you decided on and what you did.
    I'm still firmly of the view that air is the safest route, but very open to suggestions.

  • chrislb Posts: 58

    RayDAnt said:

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    The A5000 is a 230 watt card according to Nvidia.  

    https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a5000/nvidia-rtx-a5000-datasheet.pdf

    With a waterblock, I don't think you will see any significant improvement in render times with water cooling unless your air cooler temperatures are rather high.  The main benefit might be noise reduction.  It appears that the A5000's default fan curve keeps the GPU around 78C.  You can drop those temperatures with a higher fan speed and more noise.  Dropping the GPU temps to 45C with water cooling might get you up to an additional 85-100 MHz in GPU clock speed with the default VBIOS on the card.

    It may be possible to use certain versions of MSI Afterburner to raise the power limit a little along with increasing the GPU clock speed.  However, I doubt that even with that method you will be getting much more than 250 watts power draw at the peak.
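
    Before reaching for Afterburner, one quick way to see how close the card already sits to its limits is to query the driver directly; a minimal Python sketch (assuming nvidia-smi is available on the PATH):

    import subprocess

    # Report current draw, the enforced power limit, and the maximum limit the VBIOS
    # allows, for every GPU in the system.
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,power.draw,power.limit,power.max_limit",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)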

    With the A5000 having its 8 pin power connector attached by a pigtail, would it cause any waterblock clearance issues?

  • RayDAnt Posts: 914
    edited September 21

    outrider42 said:

    Unless the card is running on the high side, I really wouldn't bother water cooling it; I think any performance gain would be pretty small in the scope of things. 

    Most likely. However, since I have so much watercooling headroom in my system already (right now it's the single Titan RTX being cooled by an entire one of these) and the cost of an additional waterblock + fittings is almost negligible at this point, I'll most likely try it out.

     

    It is pretty cool to see just how much faster the A5000 is over the last generation Titan. It is not even close.

    And at a considerably smaller power budget too - 230 vs. 280 watts. Which might not sound like all that much of a difference by itself. But when you're talking about having two or more of these in a system together... that can easily be the difference between needing a new power supply or not (my 750 watt EVGA G2 is still going strong with this setup.)

     

    Saxa -- SD said:

    Hey, congratz on your RTX A5000 plunge!

    Huh, good to know that PCI3 vs 4 has basically no diff.
    When I get re-focussed in next month or two, will test that out too.
    Will be interesting to see if PCI4 will make a diff at all in this next hardware lifecycle.

    Barring major changes to the way Iray handles data transfer during the rendering process (preloading all render data to GPU memory), the chances of it making any sort of difference are basically zero. Unless Nvidia were to bring back memory pooling between multiple GPUs via the PCI-E bus (it is a little-known fact that prior to the introduction of NVLink with the GP100, Nvidia GPUs already had a system for memory pooling: GPUDirect P2P.) But the chances of that are pretty small imo, for the obvious reasons...

     

    If you do add water, would be interested to hear what you decided on and what you did.
    I'm still firmly of the view that air is the safest route, but very open to suggestions.

    Everyone's gonna have their own comfort level when it comes to intentionally bringing water near/in your computing equipment. For me, it's being able to have the waterloops themselves located below (rather than above or sandwiched in between, as you often see these days) the core components of a PC that makes me comfortable with it. Which is why I am such a big fan of the Tower 900 case specifically for watercooling - since it allows you to do just that (you can see the pre-A5000 version of my implementation of it here.)

     

    chrislb said:

    RayDAnt said:

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    The A5000 is a 230 watt card according to Nvidia.  

    https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a5000/nvidia-rtx-a5000-datasheet.pdf

    With a waterblock, I don't think you will see any significant improvement in render times with water cooling unless your air cooler temperatures are rather high.  The main benefit might be noise reduction.  It appears that the A5000's default fan curve keeps the GPU around 78C.  You can drop those temperatures with a higher fan speed and more noise. 

    Can confirm that the A5000 likes to keep its core temp in the lower to mid 70s (with clock speeds hovering in the 1650-1850 MHz range) with the default cooling setup. In my particular case, increased computing noise is a major issue (the system doubles as an audio production PC in a home recording studio), so turning up fan speeds is only a last resort.

     

    Dropping the GPU temps to 45C with water cooling might get you up to an additional 85-100 MHz in GPU clock speed with the default VBIOS on the card.

    It may be possible to use certain versions of MSI Afterburner to raise the power limit a little along with increasing the GPU clock speed.  However, I doubt that even with that method you will be getting much more than 250 watts power draw at the peak.

    As a rule, I don't mess with voltages on my PC components (other than turning them down when undervolting is possible), for both component longevity and power efficiency reasons. But I'm more than willing to spend the extra buck necessary to get them the best cooling/power delivery subsystems possible so that they perform the best they can while still maintaining spec. Especially in systems where the end goal is to have 2+ high end GPUs in them (where thermal interaction between GPUs starts to become a concern).

     

    With the A5000 having its 8 pin power connector attached by a pigtail, would it cause any waterblock clearance issues?

    Since the Tower 900's vertical layout puts GPUs in a hanging downward (from the rear panel IO end) position, I don't see it as much of a concern in my specific case. But it is something I will be evaluating closely if/when I make the change to watercooling.

     

    ETA: That's another thing I'm really appreciating about this A5000 right now: all that performance from just a single 8-pin power connector.

  • outrider42 Posts: 2,964

    There is one note about the A series (formerly Quadro) cards: they only have DisplayPort. The gaming models have both DisplayPort and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have DisplayPorts, so you would need to use a DisplayPort to HDMI adapter. Doing this will not give you all of the HDMI 2.1 features, including Gsync/Freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the DisplayPort is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series via an adapter.

  • RayDAnt Posts: 914

    outrider42 said:

    There is one note about the A series (formerly Quadro) cards: they only have DisplayPort. The gaming models have both DisplayPort and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have DisplayPorts, so you would need to use a DisplayPort to HDMI adapter. Doing this will not give you all of the HDMI 2.1 features, including Gsync/Freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the DisplayPort is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series via an adapter.

    This did occur to me as a potential limitation as I was first plugging the card in - it's been a long time since I've seen a GPU without at least one HDMI port.

  • outrider42 Posts: 2,964
    edited September 24

    RayDAnt said:

    outrider42 said:

    There is one note about the A series (formerly Quadro) cards: they only have DisplayPort. The gaming models have both DisplayPort and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have DisplayPorts, so you would need to use a DisplayPort to HDMI adapter. Doing this will not give you all of the HDMI 2.1 features, including Gsync/Freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the DisplayPort is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series via an adapter.

    This did occur to me as a potential limitation as I was first plugging the card in - it's been a long time since I've seen a GPU without at least one HDMI port.

    I hadn't really thought of it either, until just recently. I was thinking seriously about getting the A4000. It has the core count of a 3070 Ti but with 16GB of VRAM. Not only that, but the A4000 is just a single-slot card and only 140 watts on a 6-pin, which is pretty incredible for such a high end GPU. I believe it might be the most powerful single-slot GPU right now. Plus, at around $1200 it is actually decently priced. It is not marked up much over its MSRP (though being Quadro class those are already high), but considering the VRAM and single-slot size it really isn't a bad deal.

    However, I realized that it only offers DisplayPorts, and this gave me pause. I do not have an OLED yet, but my plan is to buy the 42" LG OLED when it launches. Yep, a 42" OLED is coming soon, guys! It may still be a bit big for a monitor; my current one is just 32". But I don't like to sit directly on top of my screen, I tend to sit pretty far back. So the 42" should be fine, and OLED has so many advantages over LCD-based screens. The 42" model should be around $1000. That is way cheaper than the dedicated OLED PC monitors LG is producing, which will have DisplayPort...but cost $4000. The $4000 OLEDs are a hard pass, but a $1000 42" OLED TV with great gaming features and that OLED performance is hard to pass up. Hardware Unboxed reviewed the 48" LG OLED TV as a gaming monitor and it performed, well, like you would expect an OLED to. It trounced the competition in many categories.

    **Just a small edit to add that the A series OLEDs (not to be confused with the A series GPUs) are an exception. The LG OLED A series lacks the HDMI 2.1 features and also has weaker processors powering it. These are cheaper OLEDs, but they don't perform nearly as well. I am talking about the C series OLEDs here; these have full HDMI 2.1 and better processing. So if this post has anybody considering OLED TVs for their monitors, I want you guys to be aware of these differences!**

    Anyway, it turns out the A4000 would not fit into those OLED plans. I have to admit this really bums me out, because I thought the A4000 would be a great fit for me. If I didn't play games, it could still work, but alas, I do play games.

    So I am kind of torn. I do need the extra VRAM. I think I may go after the A4000 anyway and buy a gaming GPU with HDMI 2.1 down the road for when I do get the OLED and use the GPUs together.

    BTW, since the A4000 is a single-slot card, it is possible to cram 4 of these babies into a board that would normally only fit 2 GPUs. Another plus is that 16GB is a more proper amount of VRAM for a computer with 64GB of RAM. The reason I bring this up is that a pair of A4000s should be able to beat the 3090 or A6000, while using the SAME amount of space. The A4000s would use less power than the 3090 as well. And to top it off, a pair of A4000s would even cost less than the 3090's current street price. This makes the A4000 a very compelling product, if 16GB is a good fit. The 16GB caveat matters because the A4000 sadly does NOT support NVLink, which I find rather disgusting to be honest. There are only 3 Ampere GPUs that support NVLink: the 3090, A6000 and A5000. That is kind of messed up. Nvidia has been ridiculously stingy with VRAM this generation, with the 3060 being the only exception.

     

  • nonesuch00 Posts: 15,619

    My TV broke last month after 7 years (it is a Sceptre, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak brightness (nits) is not as high on OLED vs LED, but that is only words in articles; what does the difference look like in person?

  • As my time allows, am checking this and that. Thanks RayDAnt for the photo of your Tower 900 & watercooling setup. Been revisiting the info I chose back then, and will write back when I have things more tidy.

  • outrider42 Posts: 2,964

    nonesuch00 said:

    My TV broke last month after 7 years (it is a Sceptre, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak brightness (nits) is not as high on OLED vs LED, but that is only words in articles; what does the difference look like in person?

    We probably shouldn't discuss the screen tech too much in this thread; I mainly wanted to point out the A series' lack of HDMI. However, I am a fan of OLED. I have seen many, many screens over the years, and the OLEDs always stand out to me. The perfect blacks really do it for me, and while the best LEDs have come close, they still are not there, and all of the things they do to try and control LED backlights only add to the complexity and thus the potential problems such screens can have. OLED isn't perfect though, and there is a possibility of burn-in depending on the content you have on screen. There is no such thing as a perfect display. All I can say is that if you have a local big-screen store around with good demonstrations, check them out. I'm not talking about Best Buy or Costco, because the store lighting is just too bright to properly show how these things look. You can still look at them this way, though, because they will have OLED and probably QLED nearly side by side. My Costco has OLEDs by the front door so you can't miss them. They do this because they know the picture is eye-catching.

    I can point you to the Hardware Unboxed review of the LG 48" C1 model. This is a gaming focused review, but it covers a lot of ground and shows calibrated versus non calibrated results. It also compares performance directly to other gaming monitors. Since it is about gaming, you will not find content production monitors discussed very much.

    And keep in mind you need a GPU with HDMI to use these as a monitor, and specifically HDMI 2.1 to fully support what the screen can do, much of which involves gaming features. My 1080 Ti, for example, does not have 2.1, so I would not be able to use Gsync on these TVs.

  • nonesuch00 Posts: 15,619

    outrider42 said:

    nonesuch00 said:

    My TV broke last month after 7 years (it is a Sceptre, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak brightness (nits) is not as high on OLED vs LED, but that is only words in articles; what does the difference look like in person?

    We probably shouldn't discuss the screen tech too much in this thread; I mainly wanted to point out the A series' lack of HDMI. However, I am a fan of OLED. I have seen many, many screens over the years, and the OLEDs always stand out to me. The perfect blacks really do it for me, and while the best LEDs have come close, they still are not there, and all of the things they do to try and control LED backlights only add to the complexity and thus the potential problems such screens can have. OLED isn't perfect though, and there is a possibility of burn-in depending on the content you have on screen. There is no such thing as a perfect display. All I can say is that if you have a local big-screen store around with good demonstrations, check them out. I'm not talking about Best Buy or Costco, because the store lighting is just too bright to properly show how these things look. You can still look at them this way, though, because they will have OLED and probably QLED nearly side by side. My Costco has OLEDs by the front door so you can't miss them. They do this because they know the picture is eye-catching.

    I can point you to the Hardware Unboxed review of the LG 48" C1 model. This is a gaming focused review, but it covers a lot of ground and shows calibrated versus non calibrated results. It also compares performance directly to other gaming monitors. Since it is about gaming, you will not find content production monitors discussed very much.

    And keep in mind you need a GPU with HDMI to use these as a monitor, and specifically HDMI 2.1 to fully support what the screen can do, much of which involves gaming features. My 1080 Ti, for example, does not have 2.1, so I would not be able to use Gsync on these TVs.

    Cool, thanks. So I need to go to a big city with a Costco; the local Walmart will not have OLEDs, I don't think. I am mainly interested in the difference between QLED and OLED. I came within a hair of buying an OLED 4K laptop recently, but decided at such prices I would wait and buy a 4K OLED laptop with an RTX 4000 series Ada Lovelace GPU in another year and a half instead. Which is just as well, as all this stuff adds up to a lot of money.
