Comments
@hphoenix
Thanks for your reply, it sounds more rational. However,
I don't agree on one point about voltage, and I have no idea where you got that info from or whether you just assume it works that way, but it doesn't, so you have to be corrected.
The limit on a standard Titan X is 250W whether it is superclocked or not. There is only 10% of room for extra power, so the card can't use more than 275W at its 110% power setting. If your scenario were true, you would need 1300W to run just 3 Titan X, and where is the power for the rest of the system? I ran 2 Titan X SC for months on only an 800W PSU without any issues. EVGA recommends 1200W for 3 Titan X plus the system, so they can't consume more power than that; I went ahead with 1300W because my CPU uses 140W and I wanted a little room (a quick tally of these numbers is at the end of this post).

There are BIOSes out there that allow 1.3v, but those are mostly used for benchmarks under very good cooling conditions like liquid nitrogen, not by the regular consumer. The air-cooled superclocked Titan X can't, by design, get over 1.2v. I have 2 original air-cooled Titan X SC that run at the maximum allowed 1.2v and 1 original Hydro, and there is no difference in voltage, power usage or memory speed between them. When I upgraded the air-cooled ones with the Hydro kit I asked the EVGA technician whether I had to flash the BIOS to get the best out of them because of the voltage limitation, and he said no: as long as the GPU stays cool I can turn the boost higher without worrying about voltage adjustments.

So you see, there is no voltage adjustment, and they usually hit at most about 1.16v on boost; I have never even seen 1.2v. The real range is 0.87v at idle to 1.18v at max, not 3.0v, which would be crazy for such a big GPU anyway. There are fans who push things to the limit and flash the BIOS past the allowed limits, but that's another story.
There are also 2 scenarios, gaming and rendering. With gaming the card will run at its maximum power peaks; with rendering there is still at least 40% of room, as the Titan X never used more than 70% of its power while rendering, and when you use the caustic and architectural samplers the power usage drops even lower, to only 50-60%, so in that case it doesn't even need the extra juice.
But again, a lot of your info comes from the gamer scene and not from actual rendering use, and those are two very different things. If your facts were right I could not have run my 2 Titan X SC on 800W for months last year; it would have been over the moment I started playing games, as I would not have had enough power left to even run my CPU, not to mention the rest of the rig, since the 2 cards would be consuming 700W at higher voltage settings. Well, I have played The Witcher since it came free with my cards, but the voltage never goes over 1.19v, whether on the original Hydro SC, the Air SC or the standard card at 1457MHz.
So reading about it and actually seeing it in action are 2 different scenarios.
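If it helps anyone reading along, here is a quick back-of-envelope tally of the numbers above; the 75W for the rest of the rig is just a rough guess of mine, the other figures are the ones I already quoted:

```
// Back-of-envelope PSU budget for 3 Titan X at the 110% power target.
// The 140W CPU figure is my own chip and 75W for the rest of the rig
// is a rough guess; substitute your own parts.
#include <iostream>

int main() {
    const double cardLimitW   = 250.0;  // stock Titan X power limit
    const double powerTarget  = 1.10;   // maximum slider setting (+10%)
    const double perCardMaxW  = cardLimitW * powerTarget;  // 275W per card
    const double cpuW         = 140.0;  // my i7 under load
    const double restOfRigW   = 75.0;   // board, drives, fans (guess)

    const double threeCardsW  = 3 * perCardMaxW;            // 825W worst case
    const double systemTotalW = threeCardsW + cpuW + restOfRigW;

    std::cout << "3 cards, worst case: " << threeCardsW  << " W\n";
    std::cout << "Whole system:        " << systemTotalW << " W\n";
    std::cout << "Headroom on 1300W:   " << 1300 - systemTotalW << " W\n";
}
```

That lands around 1040W for the whole box, which is why EVGA's 1200W recommendation (or my 1300W) leaves plenty of room.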
Ah, that explains a LOT about why you disagree with my assessments in relation to the Titan-X. I wasn't aware of that particular peculiarity with that particular model's voltage/current handling, but what you've stated here clarifies a few things. We're BOTH a bit off. I'll explain:
The Titan-X evidently is designed to run at a much lower voltage at load, which means it's using current-loading to provide the boost that forces the clocks to transition faster when clocking is turned up. All cards do this to some extent (simply increasing the voltage will cause it, since power is volts x amps), but relying on current-loading lets the core run more stably at full load while keeping the voltage more consistent (gate logic is pretty tied to voltage, so modifying it too much throws things out of whack when you are also drawing too much power....or too little, depending on which direction it gets modified.) That's pretty intelligent, and it also helps keep the thermal design within parameters. I dare say it lets the card throttle more gracefully as total power draw starts to hit limits, unlike big voltage offsets, though it also means it will tend to throttle more often, if in smaller amounts. It would also make the supply rails more sensitive to current limits (hence why they require a fairly high current rating on the power supply's +12v rail.)
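To put a rough number on that +12v rail point, here's a quick sketch; it assumes the 275W per-card cap discussed above and ignores PSU/VRM efficiency losses:

```
// Rough +12V rail current estimate for three cards at their 275W power cap.
// Ignores conversion losses; real draw at the wall will be a bit higher.
#include <iostream>

int main() {
    const double railVolts    = 12.0;
    const double perCardWatts = 275.0;                     // 250W * 110% target
    const double perCardAmps  = perCardWatts / railVolts;  // ~23A per card
    std::cout << "Per card:    " << perCardAmps     << " A on +12V\n";
    std::cout << "Three cards: " << 3 * perCardAmps << " A on +12V\n";
}
```

Call it roughly 23A per card, so close to 70A of +12v capacity for three of them before the rest of the system even enters the picture.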
Various benchmarking and testing sites have shown the typical 100% load power consumption of the baseline Titan-X at around 250W, which is consistent with the specs. That's not TDP, that's measured line wattage, and regardless of thermal dissipation, line wattage is true power consumption. And it goes up consistently with increasing clocks. But if throttling occurs in small amounts, we'd expect to see power consumption and thermal dissipation top out at some particular value (as opposed to seeing the card stutter or get too hot.) Without increasing the voltage, though, it's harder for overclocking to get the rise times on the clock signal to hit their marks within the period needed for a given clock speed. (I'll assume I don't need to go into the whole instantaneous power vs. rise/fall times explanation here.) It sounds like the Titan-X was built for consistency and stability, as opposed to just raw clocking. So while it does give some great performance (thanks to the number of cores and such), it also keeps itself very stable through several clever on-board limiting systems.
And I didn't know that (I've never owned a Titan series card, so I've never been able to experiment with one.) That's some good engineering, though it does limit the card to below theoretical performance limits. But for consistent high-throughput, it's the better solution. It also explains why you never 'see' the card hitting more than 70% - 80% of max power rating. Such drains would be so quickly throttled as to prevent spiking of current, as well as thermal build-up.
Gaming cards are built more on the concept of "we'll only see full load for brief periods", so they allow spiking on the assumption that any heat build-up will be dissipated during the low-load points in between. The fan will only ramp up a little at first, so if the spike doesn't last long, everything levels out and nothing gets too far from base specs. They aren't really built for long periods of full load (like in rendering), which is why one has to be more careful when using such cards for that kind of loading (using custom fan curves, or just setting the fan at a rather high fixed speed.)
Personally, I think I like the Titan-X design better.....more reliable and stable, if not able to hit the raw spike boosts that a gaming card can, and using those extra cores to compensate.
If they DO make a Pascal version of the Titan-X using that same kind of design philosophy, it'll be one killer card......
That would be interesting...
The thing with the fan...it always seems like the first couple driver versions are needed to settle things down. I can't remember the last time that a major Nvidia release didn't have something that was driver related. And it seems like an easy fix...just manually control the fan. Wow, a real hardship there...
Yeah, they always tend to release before they've really thoroughly tested. That's the software/hardware biz.
But it's a little more complicated than just 'manually control the fan'. This is more a firmware issue. It's like the card is deliberately dragging behind in ramping up the fan with increased load, and is instead only ramping up the fan based on thermal sensing. That's dangerous, as heat build-up at full load can happen pretty quickly, and suddenly the fan has to punch to max AND the GPU throttles its frequency down to compensate. Temps drop, the fan slows down gently, then it happens again....and again....and that produces a lot of wear and tear on the fan and the thermal connections. It should be slowly ramping up the fan as the GPU load increases, rather than waiting for temps, and only going beyond a certain point (say, 60% max fan tach) if the thermal detection goes above a limit (say, 80C). That way, if your load is consistent, your fan is consistent, and your temp is consistent. Less wear on the card that way. (A rough sketch of the kind of curve I mean is below.)
ETA: Also, more consistent noise levels, which are less distracting than changing noise levels. I'm sure they'll release a firmware update to fix the issue soon™....
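For what it's worth, here's a minimal sketch of the kind of curve I'm describing. It's purely illustrative: the fanDutyPercent function and the 30%/60%/80C numbers are just made-up examples to show the shape, not anything from the actual firmware.

```
// Illustrative fan policy: ramp with GPU load first, and only push past a
// "quiet ceiling" when the temperature sensor actually demands it.
// Not real firmware, just the shape of the curve described above.
#include <algorithm>
#include <cstdio>

double fanDutyPercent(double loadPercent, double tempC) {
    const double minDuty      = 30.0;  // idle floor
    const double quietCeiling = 60.0;  // max duty reachable on load alone
    const double tempLimitC   = 80.0;  // above this, temperature takes over

    // Load-proportional part: 0% load -> 30% fan, 100% load -> 60% fan.
    double duty = minDuty + (quietCeiling - minDuty) * (loadPercent / 100.0);

    // Thermal override: scale toward 100% duty between 80C and 95C.
    if (tempC > tempLimitC) {
        double overshoot = std::min((tempC - tempLimitC) / 15.0, 1.0);
        duty = std::max(duty, quietCeiling + (100.0 - quietCeiling) * overshoot);
    }
    return std::min(duty, 100.0);
}

int main() {
    std::printf("100%% load, 70C -> %.0f%% fan\n", fanDutyPercent(100, 70)); // 60%
    std::printf("100%% load, 85C -> %.0f%% fan\n", fanDutyPercent(100, 85)); // ~73%
}
```

The point of the shape is that a steady load gives a steady fan speed, and the temperature term only kicks in as a safety net instead of being the whole control loop.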
Finally we reach the top hahahah, I have nothing to add.
But I never go beyond 100% on the voltage, as I am not doing benchmarks or chasing super fast FPS, and I usually run all 3 at 1347MHz so they stay at the same speed level for Iray or other GPU-dependent software. For the other stuff I will have to run SLI anyway for optimal speed.
And btw, I just finished setting up my new rig with my new Asus X99-Deluxe/USB 3.1 MoBo and an i7-5960X, and it is running smooth. The CPU temps are higher than on the i7-5820K, by around 7C, but maybe that's due to the 8 cores; I need to fine tune everything. Asus provides the smoothest setup, and as much as I like Gigabyte hardware, their BIOS and software just suck big time... well, I am a happy camper now.
Yes indeed, who knows what EVGA is cooking up with the gaming version and 250W.
Also, I just got a message from the EVGA team that the GTX 1080 is on sale for $699: http://www.evga.com/products/Product.aspx?pn=08G-P4-6180-KR
I always buy all my cards there; the support is fantastic and I have never had trouble with anything I purchased from them. Stay away from Amazon for PC hardware, the stuff gets passed from one hand to another. I returned 3 Wacom Cintiq monitors and 3 Nikon cameras because none of them were new: fat fingerprints and hair in the lenses... ewww. Unless it is sold directly by Amazon and sealed I don't bother, and those bad cookies will just rotate back around again soon.
Hi Jura, I got my i7-5960X to 4.9GHz; I almost got 5GHz, but right at the end of the stress test that turned out to be the limit. I guess I'm another silicon lottery winner hehe: 48% extra, with a max temp of 43C. That will be a huge boost in rendering and, with the 16 threads, in other programs compared to the standard 3GHz, but maybe I'll drop a little lower to 4.7GHz.
And btw, I should have purchased the 16GB memory sticks; the X99-Deluxe already supports them after the last BIOS update, and there was nothing about it in the manual, damn. The memory runs nicely on Profile 2, over 3000MHz, so 600MHz faster.
Hi Cath
That's a very nice OC. I know a few friends who have the i7-5960X and they struggled to go beyond 4.6GHz; most of them are running 4.4GHz. I would check what vCore you are running while you are rendering; you don't want to go beyond 1.35v on the 5960X (some people feed those chips 1.45v), as the chip will degrade faster than with a lower voltage... If you have all the power saving options enabled and you are still getting 4.9GHz, then I would stress it with OCCT, which is free software for stressing the CPU. Don't use the older Prime95 on your CPU; just check whether the stress test uses AVX.

At idle, yes, I agree it will always be hotter with such a bump in speed. Under load you don't want to go beyond 75C for prolonged periods; my previous builds usually sat in the low 60s, and my current Xeon sits at 55C with a slight BCLK OC (105.5MHz), which gains me 2954MHz when I render.

If you can use 16GB RAM modules, then I would use them, but this depends; if you are already running 64GB then I'm not sure it's worth it. You need to decide, Cath.

In my case I would be happy with 64GB, as I currently use 32GB and that's not enough for me. When I use 3DS MAX with very high poly models (I'm usually adding modifiers like Subdivide, Smooth, TurboSmooth and a few others) I usually run into issues, so I will need to add more RAM soon.

Thanks, Jura
...so as the article concludes, and since I am not into overclocking the CPU, sticking with an 8 or 10 core Xeon might still be the better deal for the cost.
...the next Generation Tesla compute card is Pascal architecture (GP100 with 16 GB of HBM 2), so it would make sense for the Quadro line to follow suit. Most likely the new flagship Quadro GPU (replacing the M6000) will have 32 GB of HBM 2 using the GP102 processor (the current M6000 was upgraded last year to 24 GB of GDDR5)
As I have read things, Volta development seems to be primarily oriented towards high speed compute applications for the next generation of supercomputers, not graphics processing.
The silver lining in all of this is that it could result in a drop in price not only for Maxwell Titan GPUs but for Maxwell Quadro GPUs as well. For myself, compared to what I currently have, that would be a major boost in performance. As I have mentioned, 12 or 24 GB of video memory means a better chance a scene will render completely on the GPU (pretty much a "given" with 24 GB) instead of dumping to the much slower CPU mode. For me, that is a "speed bonus".
Thanks Jura, I purchased the 64GB as 8x8GB because the manual stated the board doesn't support 16GB modules; then I updated the BIOS from April to support my new processor and read in the notes that it now supports 16GB modules. I don't like to buy small modules; I have tons of them at home, but they are DDR3 and not usable anymore.
Regarding the CPU, seriously, it was worth the upgrade, it is fantastic. I went down to 4.6GHz to preserve it, as I am going to use it a lot. The power saving mode doesn't want to turn ON anyway, but I prefer the lower speed for quiet mode anyway, and that turns the clock down to 1.2GHz.
I don't think the Titan X will drop much in price soon, due to its higher memory within the GTX family. I just tested Iray with my new rig and the processor performs fantastically: the delay is 3 times less than before, and in most cases instant compared to 20 seconds before, so it looks like a huge improvement. I am going to monitor it now to see how many cores are used for that process, 4 or more. Also, in photoreal mode the Iray viewport spins so fast with just the CPU, I guess it uses all the available power; I did not expect that, it is faster than a GTX 760 lol.
When using the 3 cards the viewport spin did not improve and is as before, but when you load a new asset it is instant, no need to wait as before with the i7-5820K. The conclusion here is that a good card alone is not enough; you need a good balance between your GPU and CPU to work with Iray smoothly and fast. Well, for now my mission is accomplished.
Yeah! The deal is not good for the price. Plus it is new, and you should never buy the brand new one, as a lot of errors and things are not yet worked out, plus the price is crazy; maybe January 2017 will be a good time to get the new Intel for less too. If you are going to render with the threads then great, but the Iray viewport will use only 1 core to render the pixel samples between rotating and rendering; after you stop it switches to the GPU, and the slower that core is, the slower things work no matter how fast the video cards are, so my cheaper CPU will do it faster than that one. The moment you use CPU-only rendering, though, Iray will use all the extra threads. Loading objects into the scene also only uses as many cores as you have cards, so it is not how many cores your CPU has but how fast each core is, and the i7-6950X runs at only 3GHz, so I am not sure what kind of deal you would be getting for the money; I see no deal at all since you don't overclock. But you see, I run my CPU now at only 1.6GHz, as I switched the profile, neither at standard speed nor overclocked, and the moment I open DS or Photoshop or ZBrush I go into turbo extreme mode for faster processing. That is how you save the life of a CPU; running it at base clock all the time will make it last exactly as long as mine. The math is simple. But wait, I could overclock just the 4 cores needed for Iray and leave the other 12 at very low speed for an even better life span... everything is possible.
A Xeon may be good for other kinds of rendering, but with Iray it is slower, so you have to figure out which program you use the most and buy the right processor for that; that will be the best deal. But wait at least 6 months after release, and that goes for all electronic stuff in general: they release first and fix things later based on consumer complaints. lol
I am learning every time that getting greedy on the important stuff and looking for cheap alternatives will cost you double anyway, and in the end you are going to lose...
...again, the purpose of a CPU with more threads in my design is rendering in Carrara and 3DL, which are both ray trace render engines. Crikey, I can get dual 10 core Xeons for less than the cost of a single Broadwell-E i7, have twice the CPU cores, and change in my pocket.
About support regarding GTX 1080 and rendering with Iray
Siggraph is around 24-28 July, so 2 more months anyway before we can see any performance numbers in Iray.
A lot of reviews have been dropping for a lot of 3rd-party cards, and it seems like the 1080 is hitting some sort of limit at around 2050 MHz or so. That puts the card at around 10.5 TFLOPS.
It sounds like a lot, but that 2050 is only around 18% higher than the boost clock. Perhaps it's a voltage wall of sorts, but who knows?
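For the curious, the back-of-the-napkin math behind those figures, assuming the published 2560 CUDA cores and counting an FMA as 2 FLOPs per core per clock:

```
// Rough single-precision throughput estimate: cores * 2 FLOPs (FMA) * clock.
#include <cstdio>

int main() {
    const double cudaCores   = 2560;   // published GTX 1080 core count
    const double boostGHz    = 1.733;  // advertised boost clock
    const double observedGHz = 2.05;   // the ~2050 MHz ceiling seen in reviews

    std::printf("At boost:    %.1f TFLOPS\n", cudaCores * 2 * boostGHz    / 1000.0);
    std::printf("At 2050 MHz: %.1f TFLOPS\n", cudaCores * 2 * observedGHz / 1000.0);
}
```

That works out to roughly 8.9 TFLOPS at the stock boost clock versus about 10.5 TFLOPS at 2050 MHz, which is where the ~18% figure comes from.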
I went looking to verify some of this. That 2050 MHz 'limit' is an issue with both the fan control and the VBIOS limits. Third-parties (at least MSI and eVGA) have already announced 'enhanced' VBIOS for their cards which removes the voltage limits, and enhanced cooling does bump that 2050 MHz limit a bit. However, we knew that much over 2100MHz was going to be questionable, given the FE cards being shown at 2144MHz at the opening were probably running a custom VBIOS and had their fans running full 100% locked (which keeps the yoyo throttling issue from happening.....I'm sure nVidia, as well as third-parties, will release an updated BIOS with better fan/throttle settings to fix the yoyo issue we're seeing.)
KingPin/Tin have already shown a screenshot of a 1080 GTX running on LN2 at 2.8GHz core clock. No idea how long it stayed there, but it did make it long enough to get the screenshot. Chip temp was reading -102C.
But don't expect to get anywhere NEAR that without LN2. Water cooling is definitely good though, and even air cooled the chip OC's pretty well.
Well, getting higher clocks is not always a guarantee with higher voltages given process variability and all that. Anything above the BIOS voltage is just playing with the silicon lottery, so we'll see how far the third-party manufacturers can push their cards.
As an aside, a part of me cringes whenever I see people playing with LN2, especially on consumer components...
The original VBIOS locks the voltage at a max of 1.25v, and convincing it to go anywhere near that is tricky. It mainly runs at factory OC settings of around 1.06v - 1.08v; stock is around 0.98v. But pushing it higher doesn't help if the card can't draw enough current, which is why many of the third-party cards have two power connectors instead of just the one on the FE. Of course, the crazy LN2 overclockers will actually modify the circuit board and completely bypass the voltage and current regulation circuits. Good way to wreck a $700 graphics card. Of course, Kingpin/Tin are, I think, 'subsidized' by eVGA, so they probably get several cards to 'push the envelope' with for eVGA bragging rights. So if they happen to fry a couple along the way, no big deal.....but they've been doing this for a long time, so they have probably gotten pretty good at it. Not something for someone to just 'play around with'.....
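For reference, a rough sketch of the nominal connector budgets involved (75W from the PCIe slot, 150W per 8-pin and 75W per 6-pin are the spec figures; boards can exceed them in practice, so treat this as the paper budget only):

```
// Nominal board power budget by connector configuration, per the spec limits:
// 75W from the PCIe slot, 150W per 8-pin connector, 75W per 6-pin connector.
#include <cstdio>

int main() {
    const int slotW     = 75;
    const int sixPinW   = 75;
    const int eightPinW = 150;

    std::printf("FE (1x 8-pin):          %d W\n", slotW + eightPinW);            // 225W
    std::printf("Custom (8-pin + 6-pin): %d W\n", slotW + eightPinW + sixPinW);  // 300W
    std::printf("Custom (2x 8-pin):      %d W\n", slotW + 2 * eightPinW);        // 375W
}
```

So the second connector is less about the stock card needing it and more about giving the overclocked boards room to feed the extra current without leaning on the slot.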
I still think that once the driver/BIOS issues are resolved and Iray is updated to work with it, the 1080/1070 cards are going to be quite the 'go-to choice' for Iray.....
..still waiting for all the dust to settle.
Of course if I win the Megabucks lotto, all bets are off and I will go with dual 24 GB Quadro M6000s. ;-)
The thing to keep in mind is that the 1080 and 1070 will be compatible with the new version of CUDA coming out in August. I believe if you have 2 or more cards it will use the combined VRAM of multiple cards. I read somewhere that CUDA 8 will not work with older Nvidia cards. I will wait for the dust to settle and upgrade my 980 Tis to 1080s. 3 1080s would give me 24 GB of VRAM, and at that point I would never have to worry about the size of my scenes again.
To have 3 or 4 1070s or 1080s in SLI you have to ask Nvidia for a code to use them, and even then no more than 2 are used together; the others are used instead for other operations like audio or monitor output. Weird, really, no one understands why they did this.
Probably because the percentage of consumers who game with more than 3 or 4 cards is lower than a single percentage point. For workstation purposes, I can see multiple cards being used but the 1070/1080 aren't marketed as workstation cards.
I may splurge and get a 1070 for my monitors because I'm using a Nvidia GT 640 2gb to run my monitors and 3x Nvidia 780's 6gb editions to do my Iray renders so a 1070 with 8gb of video ram will be an improvement when cuda is sorted out. This card will also make the other programs I use like Iclone run better.
CUDA 6 which introduces shared memory will NOT work on anything other than the 1080 and the 1070 at this time.
https://en.wikipedia.org/wiki/CUDA
3 x 1080 will still give you only 8 GB of VRAM, and from that you also need memory for the rendering itself, so around 6-7 GB is left for the scene at full HD render size. Plus it will take away around 24 GB from your system RAM, so make sure you have enough.
all this also assumes Nvidia gets it right by August, delivers a working set of tools and they actually do what they are supposed to without issues. Until that happens all performance gains are a matter of speculation and conjecture.
Plus it won't apply to rendering with Iray, as that process works differently; that is the reason each scene needs to be loaded separately onto each GPU rather than into shared memory, unless they rewrite the engine, and I don't see that happening just for the 1080 and 1070, as those are not workstation cards and are officially not recommended for rendering. For the same reason they are not even ready for Iray after 3 years of development, as that is the last thing they worry about.
As others have noted, the Unified Memory Model in CUDA 6 does NOT mean you add up the memory. The full scene still has to fit in the memory of any single card.
The Unified Memory Model is simply an abstraction layer between the host program and the CUDA layer code. Prior to CUDA 6, developers had to manually create the memory block allocations in both main memory, and on the CUDA devices, then copy it across. "Unified Memory" is a simple (and very marketing friendly) way of saying the library code now does this for you.
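To make that concrete, here's a minimal before/after sketch of what the abstraction buys you. The scale kernel is just a made-up example and error checking is omitted for brevity; note that either way the whole buffer still has to fit on a single device.

```
// Minimal sketch: explicit copies (pre-CUDA 6 style) vs. Unified Memory.
// Either way the full buffer lives on one device; nothing is pooled across cards.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void scale(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // --- Old way: separate host/device buffers, manual copies ---
    float* h = (float*)std::malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    float* d;
    cudaMalloc((void**)&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d);
    std::free(h);

    // --- CUDA 6 Unified Memory: one pointer, the runtime handles the migration ---
    float* u;
    cudaMallocManaged((void**)&u, bytes);
    for (int i = 0; i < n; ++i) u[i] = 1.0f;     // host writes the same pointer
    scale<<<(n + 255) / 256, 256>>>(u, n, 2.0f); // device reads the same pointer
    cudaDeviceSynchronize();                     // wait before the CPU reads it back
    std::printf("u[0] = %f\n", u[0]);            // 2.0
    cudaFree(u);
    return 0;
}
```

Same amount of data moved in both halves; the managed version just spares the developer the bookkeeping, which is exactly why it doesn't magically add the memory of multiple cards together.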