Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
I vote for 64 GB too.
Oh good, although the 64 GB is probably somewhat more important than the speed of the RAM.
I run all SSDs except for backups which still get written to 8TB and 10TB spinning hard drives.
Most new computers/motherboards now have Thunderbolt built in. That means you will have new options. There are "hubs" and "docks". If you're just wanting more ports or a camera card reader, get a hub. If you want to plug in a monitor or Thunderbolt audio interface, prepare to pay more and use the word "dock" in your Amazon searches. I have both and love them.
Noisy is a comparative term. If one or more of the VRMs in your system happen to generate enough noise to be audible above whatever other sources of noise originate there, then those VRMs are - in fact - noisy. All VRMs generate noise by virtue of their function. Whether or not that noise is audible depends entirely on what the noise floor is in the environment.
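To put the noise-floor point in numbers: incoherent sound sources add on a log scale, so a source well below the ambient floor barely moves the total. A minimal sketch (the 20 dB floor and 10 dB VRM figures are purely illustrative):

```python
import math

def combined_spl(levels_db):
    """Sum incoherent sound sources given their SPLs in dB."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Hypothetical numbers: a 20 dB room noise floor and a 10 dB VRM source.
floor, vrm = 20.0, 10.0
total = combined_spl([floor, vrm])

# The VRM raises the total by well under half a dB here, so it is
# effectively masked; in a quieter room the same source would dominate.
print(round(total - floor, 2))
```

The takeaway matches the post: whether a VRM is "noisy" depends on the floor, not on the VRM alone.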
To be clear, my particular concerns over VRM noise stem from the fact that 3D rendering isn't the primary function of my system. This is first and foremost an audio capture/realtime audio production workstation which lives inside a home recording studio. Meaning that, when needed, it has to be able to "silently" coexist under low to moderate workloads within spitting distance of thousand-dollar microphones in the closest thing to an anechoic chamber possible. So it goes without saying that my familiarity with/concerns about computer-generated environmental noise are a bit more comprehensive than most people's.
In an acoustically isolated (and consequently zero airflow) case with internally mounted pumps set to harmonically unsympathetic RPMs paired with externally mounted radiators + fans at or near zero RPM, the loudest part of the computer is you moving your mouse around. Which is how it should be.
? Very few mobos have Thunderbolt. There are two.
No, VRMs do not inherently make noise at all. Coil whine only occurs when the frequency of the signal passing through the coil is being changed frequently or the coil itself is defective (the sounds have different causes but sound the same). A CPU VRM should only change frequency at boot, going from zero to the operating frequency, and when the CPU boosts and then returns to normal. Since those aren't continual, if you report continual coil whine from the CPU VRMs that means defective components. GPU VRMs can change frequency more often, since the load on the GPU is more dynamic and frequency changes in the GPU are more common and shorter, but generally if the computer is idle and there is coil whine from the GPU it means defective coils.
If you're running an audio capture system next to a computer you're always going to pick up sound. There's no getting around that. The PSU has far larger coils and a transformer that steps wall voltage down. Since that wall supply's frequency is not well controlled, i.e. at all, you always get some coil whine from the transformer and likely at least some from the inductors as a result. The case itself is experiencing dynamic temperature changes which cause it to expand and contract, producing all sorts of sounds. Why you have studio-level audio gear right next to a PC I have no idea, but an anechoic chamber only prevents echoes, not the actual sound in the first place.
If you could build an acoustically isolated case that might be true but it isn't possible. The Thermaltake case has grommets from the back of the case, and the PSU, to the front. Those pass sound and air very well as well as allowing cable pass through. That back chamber is not soundproofed, it is actually very heavily ventilated to let air from the rads escape as well as inlet fans to bring in cool air to pass over the rads.
While I find the idea of canceling the noise of one pump with another interesting, in practice I don't see how that could be done. The motherboard fan controls simply do not allow for that fine a level of control. A multitude of issues would affect the two headers and cables, resulting in slight variations from the settings input, which would not result in less noise.
It is also a very bad idea to build a WC system and set any fan curves to zero. The loop will eventually saturate, and with no airflow over the rad fins it will catastrophically overheat, resulting, if you're lucky, in a shutdown.
I build computers for a living and have built systems for people who did audio recording and the real solution is to remove the PC from the recording chamber. But realistically unless you're in a soundproofed room the ambient noise level will far exceed anything coming from a properly built and functioning PC, excluding things like all fans set to 100%.
I just checked on my work PC. The ambient noise level 36 inches from the case doesn't change at all with the PC on or off. And my office is soundproofed since my door leads into the server room which might as well be a jet engine. We require ear protection when working in there.
Yeah, that's what I thought too until I built a zero fan noise open air system and plopped a Titan RTX into it. What I discovered is that these things generate audible (when you don't have a traditional case surrounding them to deflect primary reflections) levels of noise any time they are powered on. And I'm not talking about coil whine - at least not in the form it traditionally manifests (as, well, a noticeable whining sound coming from poorly constructed inductors whose frequency/volume is linked to workload intensity.) I'm talking about the fixed high frequency chirping (for lack of a better term) that these cards generate even at idle, that comes as a result of even a healthy SMPS-based VRM doing its job. This is a type/level of electronics noise that even the cheapest of closed cases easily masks, but that an open test bench/case design in a properly sound dampened room immediately makes clear.
Not with this particular case design/cooling combination. For all intents and purposes, under low to moderate workloads specifically, this system is sub-audible (both in terms of decibel level AND spectral frequency signature) at 18 inches. Watercooling and all.
This PC IS a piece of studio level audio gear. It serves as the central realtime sound engine for a room's worth of MIDI based acoustical instrument modeling controllers like this one (hence the need for proximity) while simultaneously serving as the recording hub for acoustical instruments/voices also being captured in that space in realtime. Hence the hyperfocus in design on mitigating baseline electronics noise.
Depending on how well you set it up internally and place it in your room (tempered glass windows at waist height, backside facing a wall) the Tower 900 has no direct paths for sound from internal components to travel into the room perpendicular to it (ie. the most common zone for microphone placement.) Which is actually all the sound isolation you need to effectively eliminate the electronics noises I'm talking about (again, this is only regarding the inherent levels of noise for low to moderate intensity workloads. Eg. rendering a scene in Iray is a whole different matter.)
I have no idea what you're talking about here. When I said "pumps set to harmonically unsympathetic RPMs" I was talking about whatever speed generates the least amount of sympathetic vibration noise audible from outside the case that also exhibits sufficient waterflow for effective cooling (which in this case happened to be a PWM dutycycle of 25%.) Not somehow magically canceling noise by tuning the pumps against each other.
All fans/pump speeds in this machine are temperature controlled via multiple Corsair Commander Pros and are set to go to minimum/zero RPMs whenever possible. Which with this design is pretty much any time you aren't actively rendering something. That's how effective a system built around two 560mm radiator waterloops can be.
For kicks, after completing the full build I decided to see what the worst case scenario for cooling would be while running an intensive GPU workload (in this case Furmark) in a poorly cooled room (ambient temp: 30c) over an extended period of time with cooling set to absolute minimum (fans: 0 RPM, pump: 1200 RPM - the lowest setting possible.) My Titan RTX reached a peak steady state of 49c (19c above ambient) with zero system instability. Even with no directed airflow, a big enough radiator exposed to open air will dissipate enough heat to keep a component at a steady-state temperature via convection.
I may have missed a comment somewhere in this thread on this topic, but I was under the impression that if you had two GPUs only the amount of VRAM in the smallest card could be used. You'd still get the speed benefit of stacking, but if your scene is more than the 6 GB of the 1660, will your render get pushed back onto the CPU?
No, Iray uses or drops cards independently - I wasn't intending to use the 6GB card for rendering (except perhaps as a test), but if I did then a scene that fitted into the available memory on the 1650 would use both cards, one that fitted into the 2080Ti but not the 1650 would use just the 2080Ti, and one that didn't fit onto either would drop to CPU.
I was planning to keep the existing machine as a working unit, and pass it on to my mother (she currently has my previous machine, which is stuck on Windows 7 without adjustments). Of course I may face resistance over the changes to the UI, slight as they are compared to the move from XP to 7.
OK, thanks. It's not urgent in that there are enough ports on the motherboard for now, just not a lot to spare (I do have an old four-way unpowered hub somewhere, though I suspect it was purely USB 1).
This is indeed false. Iray implements multiple GPUs by loading the same identical scene data to each GPU and then telling each of them separately to calculate different sets of iterations on that data for the final image. In order for any specific graphics card to participate, it needs to be able to hold all of that scene data. So in situations where one card out of multiple doesn't have enough RAM for a scene, it will simply sit the process out while the rest continue on as usual.
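The per-card admission rule described above can be sketched in a few lines (card names and memory sizes are illustrative examples, not Iray's actual API):

```python
def participating_gpus(scene_size_gb, gpus):
    """Return the GPUs whose VRAM can hold the entire scene.

    Every card gets the full scene or sits the render out; there is
    no pooling of memory across cards.
    """
    return [name for name, vram_gb in gpus if vram_gb >= scene_size_gb]

# Hypothetical two-card setup like the one discussed in the thread.
gpus = [("RTX 2080 Ti", 11), ("GTX 1650", 4)]

print(participating_gpus(3, gpus))   # both cards fit the scene
print(participating_gpus(8, gpus))   # only the 2080 Ti participates
print(participating_gpus(16, gpus))  # neither fits: render falls back to CPU
```

The speed benefit of "stacking" comes from each participating card working on different iterations of the same fully loaded scene.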
I have no idea what sound you're detecting, but it isn't coming from the VRMs. That almost sounds like cap noise. A capacitor vibrates very slightly with each cycle. If the cycle frequency was inside the audible range I guess you'd hear it. Since every VRM has a cap and they should be switching pretty fast, I'm guessing that is the actual source of the sound, but I've never actually heard it. That shouldn't be a reason to WC a system, though; just use a closed case. I have open benches for testing system setups but they really aren't good for daily use.
As to passive airflow and your loops: I doubt you ever saturated the loop. A 560mm rad plus res plus your long tube runs means a lot of thermal mass. It would take a long time to saturate the loops (and it would happen at different times for the two loops). The Titan would downclock as its heat increased and should only shut down in a situation where the pump stops moving water at all. It's the CPU where this could be a problem, and Furmark is a GPU power virus. It puts essentially no load on the CPU.
The actual cost and labor efficient way to solve your issue is simply longer cable runs to the PC. Not spending a couple of grand on WC and a show room case.
Yep, that fits perfectly with the specific kind of noise I'm talking about.
Speaking from personal experience, open systems are GREAT for daily use. They just have certain drawbacks (like a total lack of electronics noise dampening) that make them ill-suited to specific situations like mine. High-end custom watercooling just so happens to be one feasible workaround.
Absolutely. Hence why all my temperature benchmarking is based on steady state (rather than something arbitrary like time) as a limiting factor. I have a professional background in academic research and data collection methods relating to computing systems. My numbers and methodologies are absolutely solid.
Except in a case (such as mine) where the computer needs to be in close physical proximity to the operator. Who in turn also simultaneously functions as the operator of incredibly noise sensitive equipment.
Hard as it may be for some to believe (including myself as of less than two years ago), custom watercooling actually has its uses. Especially to PC builder types who consider a new system to be a 10+ year investment, since being able to loosely couple your system's chassis and cooling system to your current hardware is a huge boon to piecemeal future upgradability.
No. You've pretty well established that you did WC because you chose not to use a cheaper and more efficient solution. Custom loops are not appropriate for long term installations where you intend to upgrade piecemeal. The CPU socket will certainly change, making the block obsolete. The GPU block is specific not just to that generation and version but to the specific model. Changing the blocks will result in different run lengths, meaning you'll need new tubes. Upgrading most anything else requires a full drain of the loop, which is a PITA. The pumps won't last 10 years if the rig sees any reasonable amount of use. The rads likely won't last that long either (the rad channels are very small and clog really easily with the debris from bacterial growth and corrosion).
Open test benches have the following issues that make them inappropriate for long term use: major dust accumulation, the danger of something falling into the rig and shorting the system out, and components exposed to pets, etc. The test benches are light, making them prone to shifting around, yet have a larger footprint since they are horizontal, not vertical.
CPU/GPU blocks are the only components in a watercooling loop potentially subject to obsolescence during the typical PC builder's lifetime. And even then, that's only if you don't go with generic block designs (since you will always be able to place a flat-bottomed piece of copper over a silicon chip.)
Speaking from personal experience, this is categorically false if you use flexible tubing. Which is the only sort of tubing you should ever use in a system that is meant to be upgrade friendly.
Speaking from personal experience, this is also categorically false if you design your loops with drains in the right places and lay them out to cascade properly. The same regarding filling it back up again.
D5 pumps (the only kind you should ever use in a serious watercooling build) have no internal moving parts and a typical minimum service life of 50,000 hours (about 5.7 years of continuous operation.) That's roughly on par with what the best Noctua fans offer (6 years under warranty.) Premature pump failure (assuming you avoid pitfalls like exotic cooling substances) is a non-issue when it comes to the relative merits/demerits of water/air cooling systems.
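For reference, the arithmetic behind that service-life figure, under both continuous operation and an assumed 8-hours/day workstation duty cycle (the duty cycle is an illustrative assumption, not a manufacturer number):

```python
# Convert the quoted 50,000-hour pump rating into years of service.
RATED_HOURS = 50_000
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a (non-leap) year

continuous_years = RATED_HOURS / HOURS_PER_YEAR  # pump never switched off
workstation_years = RATED_HOURS / (8 * 365)      # assumed 8 hours/day usage

print(round(continuous_years, 1))   # ~5.7 years running 24/7
print(round(workstation_years, 1))  # ~17.1 years at 8 hours/day
```

A system that isn't on around the clock stretches the rated hours considerably, which is part of why rated lifetime alone doesn't settle the water-vs-air argument.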
Again, assuming you use the right kind of coolant (distilled water and a non-sedimenting biocide/metal reactivity inhibiting additive combo like what Mayhems offers with their Plus products) and clean your loops regularly, this is not something that you reasonably need to worry about.
D5s, like all pumps, have at least one moving part. The D5 uses a rotating sphere as the impeller. It uses a magnetic field to move it, which means the coil can, and will, fail, and the impeller can jam or otherwise fail. 50k hours is MTBF: half the units fail before that and half after. Also, as anyone paying any attention to this knows, the WC industry has had a recent rash of bad components, all of which ultimately trace back to the 3 OEMs for most of these parts. It may be rated to 50k but I'd bet that is a gross overestimate of recent component lifetimes.
Yes, you always have to worry about sediment. Distilled water is non-corrosive for a few days to weeks once it's in the loop. It picks up ions from the tubes and the metal; that's why mixed-metal loops corrode so fast. But even single-metal loops corrode, just not as fast. Biocides don't last indefinitely, and the stuff that comes off plastic tubes is great bacteria chow.
As to draining a loop, since you're either discounting what it actually involves or don't know: even with a well-placed drain and fill port, you have to fully drain the loop, which always means tipping the case around to get the water out of the rads and to the drain port. Since you said 560mm rads, that means you placed the inlets and outlets of your rads above or below the pump; there's no way two 560s went in the top of the case. Which means either fully inverting the case to drain or at least turning it 3/4 over. Then to refill you have to get water into the res to prime the pump and then keep cycling the power on and off to pump the water into the loop and rads to get all the air out. With two loops with the length of runs you claim, and the absolute fact that some of the rads are below the pump, that means a lot of time, I'm guessing around an hour for each loop at best. So for two loops of the size you've claimed you're looking at an all-day project. And that assumes you don't flush the loops, which you absolutely should do.
As to flex tubing stretching to fit new blocks, LOL. Nvidia keeps changing the length of their reference cards, and most GPU blocks are made for the reference (FE) cards. That means the ports keep changing position, as they are in the center of the card. You might get a few mm of play, but getting a cm or more? As to generic CPU blocks? I've never heard of such a thing. There are blocks that have the mounting hardware for AMD and Intel's desktop mounting systems, but there is no guarantee those will last; as a matter of fact, the old AMD mounting system went out the window with AM4. That's why when Ryzen came out you could buy an off-the-shelf cooler or block and have to send away to the manufacturer for the correct mounting hardware. With the ongoing increases in the number of pins per socket, larger packages are pretty much inevitable. When that happens, every existing cooler and block suddenly won't cover the package, which means they won't work no matter if you can mount a bracket that fits the mobo holes.
You say you've built a single custom loop rig. I've built several; I started WC'ing my rigs back in 2002 when serious OCs were possible. Once CPUs got to the point where even an aggressive OC on water netted you at best a 3 to 5% boost in performance I quit. It just wasn't worth the hassle.
Which is why you always plug a D5's included RPM sensor lead to a fan port on your motherboard (or other similarly capable device) so that the system can safely shut down in case of failure. Just like with any fan-based air cooling solution out there. This isn't a point of differentiation between water and air cooled systems.
None of these things are something you need to worry about if you regularly clean your loops and add a drop or two of biocide/corrosion inhibitors to them every so often.
Here is how the bottom of the back is plumbed:
And here is an example of how each of the FOUR low-profile screw on drain fittings (two per loop - one at each low point) function:
Draining the entirety of one of these loops takes less than ten minutes and requires absolutely no tilting/tipping of the case whatsoever. Likewise, refilling takes a similarly brief amount of time (although some tilting of the case is admittedly required.)
Why on earth would someone attempt to stretch flexible tubing? Take another look at the first photo above and notice how loose the tubing runs there are. That is called slack, and it's there on purpose so that you don't have to worry about component distance tolerances at the front of the case while performing maintenance/upgrades. No one sees the back inside of the case during normal operation. You just tuck the excess 2 inches or so of tubing back through the grommets when you're done.
Which would mean that you'd be facing exactly the same dilemma with an existing cooler in an air-cooled system. This also isn't a point of differentiation between water- and air-cooled systems.
In that case, I honestly can't fault you for being jaded against watercooling (being an early adopter in watercooling is akin in my mind to being an early adopter in nuclear power production...) But I will say this: evidently a LOT has changed (for the better) in the watercooling sphere since your last foray into it, because most of the points of trouble you've brought up about it are demonstrably no longer relevant.
Presently there are only TWO really good reasons for not getting into custom watercooling where high performance computing systems are concerned:
1. Expense.
2. Lack of time/interest/ability to properly plan out a system before building it.
Virtually every other reason people cite against it just boils down to them working from outdated information.
Not wishing to sour my own thread (and realising I did make a negative comment to start it), but the water-cooling discussion might be better elsewhere.
My final word on this: it is physically impossible to drain all the water out of a loop using a fitting placed there. You may get most, but as you'll find out if you do tip the case next time, you won't have gotten it all. It is also impossible to do a system flush with drains placed where you put them, which is vital to proper maintenance.
I get that you got some information from someone, but you were steered wrong. I have a lot of experience with this, and every single claim you've made has been at best incorrect or specific to a very niche use case. Why would anyone use a show case for daily use? You even claimed you started with a cube case because you have an issue with GPU sag, when there are lots of better cases with GPU supports built in or available as add-ons. Then you bought not a test bench, as you claim, but a showroom case. Then you claimed VRMs were the reason, which they simply weren't. Once I thoroughly worked my way through explaining that it simply wasn't true unless your mobo was grossly defective, you then claimed it was a sound that was only an issue with the new case being open. So you seem to be claiming that you left the panels off before building the custom loops, in a case expressly meant for and designed for custom loops. Which brings up the obvious question: why did you get 2 bad cases for your use and, instead of getting one that actually made sense, drop (based on the fittings, pumps and radiators you've shown or stated in the loops) $500 on custom loops? If it was because you're a hobbyist, then great, just own it. But claiming that this makes sense for anyone but someone doing it just to do it? Stop already.
I'm finding the water cooling information useful. It's difficult to find detailed discussions on the subject, and it seems to me that WC is the direction I will be going with all of my future builds. Knowing how difficult it is to get this right the first time and how expensive it can be to fry a system or a finger or two, I am hesitant to move forward with WC even though I know I will eventually need to get in there and make whatever mistakes I have coming. Details are important and I've learned a lot here. Perhaps a WC thread would be well visited and populated with posts. But someone with real knowledge would need to start it and direct it.
Thank you to everyone contributing to this thread. I am in the market for a new machine and the info has really helped me.
Sorry about that. Based on your overall build design goals watercooling seemed like something worth bringing up based on my recent personal experience with it. I'll refrain from posting any more about it in this thread once I clarify one last thing:
Here is a list of all the component parts of my system's GPU water loop along with their liquid capacities (individually measured prior to assembly):
Total loop volume: 1.029 liter
Here is 1 liter of distilled water about to go into the empty loop for flushing purposes:
And here is how much distilled water it took under 10 minutes to remove from that same loop several hours later via simultaneous use of both drain ports and an open reservoir top port as an air vent (and absolutely no tipping):
Well designed custom watercooling loops are both excellent in function and relatively easy to maintain.
I was intending to come on and say "ordered the parts, thank you" - but I've found another query. The Corsair Obsidian 750D case has two USB 3.0 ports, with connectors, but the Aorus 570 Pro has only USB 3.2 Gen1 Header headers (plus a type C and USB 2). The information I've found searching seems to suggest that the USB 3.0 connectors will not work with the USB 3.2 Gen 1 headers. Is that correct? If it is, are there adapters that would get round that (I couldn't find any at Scan, unless I skipped straight past them)?
You can get the cables you need from Amazon or Monoprice. But double check; I'd be really shocked if a board came with USB 3.1 Gen 2 headers but no USB 3.0 headers.
Ah, looking at the specs on the Gigabyte/Aorus site for USB it says:
which is, according to https://www.tomshardware.com/news/usb-3.0-usb-3.1-becomes-usb-3.2,38699.html , the same as USB 3.0. So the answer should be yes, no problem. Thank you, now I can try ordering the parts tomorrow (barring any more panics).
Ok, internal headers are what you connect the front panel USB ports to. Back panel means the rear I/O on the motherboard. In this case it's saying that you get 5 USB 3.1 Gen 2 Type-A ports on the back of your computer. If you look for pics of the motherboard there should be pics of the rear I/O, and you should find 5 Type-A ports and 1 Type-C.
You also have 4 USB 3.2 Gen 1 headers and one USB 3.2 Gen 2 Type-C header, which means you could have a case with 4 USB 3 Type-A ports and one USB 3 Type-C.
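For anyone else tripped up by the naming, the USB-IF renames behind this confusion boil down to a simple lookup (a sketch of the mapping discussed in the Tom's Hardware article linked earlier in the thread):

```python
# Original USB spec names mapped to their current 3.2-era labels.
USB_RENAMES = {
    "USB 3.0":       "USB 3.2 Gen 1",  # 5 Gbps
    "USB 3.1 Gen 1": "USB 3.2 Gen 1",  # 5 Gbps (same thing, renamed twice)
    "USB 3.1 Gen 2": "USB 3.2 Gen 2",  # 10 Gbps
}

print(USB_RENAMES["USB 3.0"])  # a USB 3.0 connector fits a 3.2 Gen 1 header
```

In other words, the case's "USB 3.0" front-panel connectors and the board's "USB 3.2 Gen 1" headers are the same electrical interface under different names.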
Well, Iray and ProRender are supposed to be based on PBR concepts and so should be very similar already. ProRender does have a material portability packaging scheme that could be used instead, but yes, it would be work. However, work is what is supposed to be done to make more sales. The idea that DAZ Studio is going to sit on Iray & 3DL statically for the rest of the DAZ 3D business's existence is a nice one, but not a realistic one for long-term business success.
Yes, it was the connectors for the case I was asking about - the naming does seem unnecessarily confusing, changing 3.0 to 3.2 Gen 1, but I have it clear now.
Thanks, I'll add that to my final check list.
I hope to place the order today, barring any more confusion. Thank you all for your advice.
No matter what CPU and motherboard one chooses the first thing that should always be done after installing the OS is update the drivers, and based on what is fixed in the newer BIOS versions, update the BIOS.
Ryzen has had some issues with the CPU microcode, but it really isn't that major. RDRAND should never be used as the sole source for an RNG since it is a known quantity and therefore cuts one step out of the process of breaking the security. Any security or encryption package broken by RDRAND not working was, by definition, insecure. What should be done instead is using a software-based RNG which uses its own seed and algorithm. I know a big deal has been made out of this in places like Reddit, but I work with servers, and all those Rome servers at Google, Amazon and Twitter having some sort of massive security issue would be all over my news feed, and it simply isn't, because that is just an instruction that good programs never call. In reality there are some Linux distros that fail (because open source), and that's pretty much it.
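As a concrete illustration of the software-side alternative: in Python, for example, the standard `secrets` module draws from the OS CSPRNG (which mixes many entropy sources rather than depending on any single CPU instruction). A minimal sketch; the key and nonce sizes here are just illustrative:

```python
import secrets

# Pull entropy from the OS CSPRNG instead of calling a hardware
# instruction like RDRAND directly.
key = secrets.token_bytes(32)  # 256 bits of key material
nonce = secrets.token_hex(12)  # 12 random bytes as 24 hex characters

print(len(key), len(nonce))
```

An application built this way keeps working (and stays secure) even on CPUs where the hardware RNG instruction misbehaves.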
But that should not be a reason not to update your motherboard's BIOS when you get the machine up and running. Many mobos on the shelf have the very earliest BIOS the manufacturer developed off the AGESA code they got from AMD.
As to Intel vs. AMD: unless you are a professional gamer who needs every possible frame, there is zero reason to buy Intel. The performance is at best equal part to part at much higher prices, and in the vast majority of productivity programs Ryzen 3000 chips just crush their Intel equivalents at a lower price. My datacenter is under exclusive contract with Intel until the end of the year, but we have orders in for Rome CPUs and motherboards and will have 20 racks up and running on Jan. 1 for software validation and reliability testing. If our results match the rest of the industry, we will not likely buy any Xeons until at least the next generation of Xeons hits. The 8280 CPU is more than twice the cost of the 7742, which outclasses it in every way: performance, performance per dollar, performance per watt and TCO. The 9282 is still $3k more than the 7742, has 8 fewer cores and an absolutely eye-watering power draw which drives the TCO into the stratosphere.
@Richard Haseltine
Can I be nosy and ask you to post your final specs? Curious. Not asking for final prices.
I switched to a Corsair HX 1200 as the PSU, doubled the RAM and moved it up to 3200, and downgraded the display card from the 1660Ti to the 1660 Super. The rest stayed the same. Now I have to wait as one of the parts was out of stock.
Thank you.