Comments
Things will never be clear, not at the speed technology moves. The moment you buy your new system, it's already old the next day; things move too fast.
You think you're getting a bargain today; tomorrow you're already losing money on it. Bad investment. If Pascal were really 2x faster than my cards, I would seriously freak out and have them listed for sale already.
But two years is enough for the cards to pay for themselves in my case, and I won't need any upgrades next year for sure. Since I now have 4, I will set them to a lower core clock, around 1200 MHz per card, for the right balance, and I am good. I don't mind waiting a second and a half for Genesis to render, lol.
Mission accomplished.
It all depends on your budget. You can already get a super fast system for $1000 plus 1 or 2 cards extra; I would not go lower than that, so $1500 is the lowest budget you can really profit from. Around $2000 to $2500 is already super fast, top speed with 2 cards that will render Genesis in 6 seconds.
I went big because I need it for work.
But hold off on the upgrade until the end of the month to see how Pascal performs in Iray, as that is still not clear; you may get better speed for less.
...yeah, that price is good (less than what I paid for my 1 GB GTX 460 when I built my system). 6 GB of memory, however, means sticking to simpler scenes, a lot of multi-pass/layering, or a lot of texture editing, unless I was working in Octane.
I actually felt the 980 Ti was somewhat of a disappointment for the price, as in the year prior to its release all signs were pointing to it having 8 GB. In any event, it would require a new motherboard with PCIe 3.0 to take advantage of the GPU's speed, which also means a new CPU (as LGA 1366 is obsolete) and new memory. Basically a new system, so still a budget breaker for me.
In the last few days and weeks the leaks and rumors from different sites have increased.
Take them with caution, but they might still help those who are on the fence to stay patient just a bit longer.
- - -
- The GTX 1060 is expected to be officially announced in the middle of July 2016. There are supposed to be 3 GB and 6 GB VRAM versions.
-> The 6 GB VRAM version may be slower in rendering than the 1080 but will offer a good amount of VRAM to get started.
http://www.dvhardware.net/article64706.html
- The first iteration of a Pascal Titan might be ready sooner than many expected and is now rumored to be announced around Gamescom in Cologne, Germany – August 17-21, 2016.
http://vrworld.com/2016/07/05/nvidia-gp100-titan-faster-geforce-1080/
- - -
The final version of the CUDA 8 toolkit is still expected to be released around SIGGRAPH 2016, July 24-28.
https://developer.nvidia.com/cuda-toolkit
OctaneRender developers confirmed that the currently available CUDA 8 release candidate is still not stable enough to include even basic support for Pascal-based GPUs.
Compare:
https://render.otoy.com/forum/viewtopic.php?f=9&t=54601&start=20#p280269
- - -
-> My recommendation stays the same:
Just wait it out until Iray and OctaneRender officially support Pascal cards.
To be safe, do not preorder any cards, and wait for the first benchmark results to compare against last-generation cards.
- - -
Well, I have gotten my ASUS GTX 1080 STRIX graphics card. So once we get the Iray updates, I'll be able to post some (albeit single-card... can't afford a second 1080 yet) DS Iray benchmarks for it. I'll probably use the Iray 'benchmark' scene that has its own thread here somewhere, so we'll have some good comparison data, as well as a few other 'base' scenes that will give us a good set of comparison values.
Of course, I'm still getting that machine set up. Still need to install DS on it, download content, etc. And of course update with the driver changes as they happen. On 1080 support, nVidia (and all the AIBs as well) are putting out lots of fixes pretty quickly. It's never fast enough, of course, but it is encouraging.
Congrats! That was a nice choice! I am curious to see the DS results with your 1080. Is yours the card with the 1936 MHz boost clock and 300 W? It seems even better than the upcoming new EVGA card.
I am wondering what they will do with the Pascal Titan if the 1080 already runs at that speed: over 2.5 GHz at 400 W, or just below 1700 MHz with more cores and VRAM?
If the rumors are true, it is going to be interesting; as always, there is no smoke without fire. We'll see.
...ah, finally some word about moving to HBM2. However, I thought the HBM configuration would mean a smaller form factor, similar to the AMD HBM1 GPU released last year. 12 inches? That's a huge card, and it will be even bulkier with liquid cooling. They may need to come up with a "Titan sized" full tower case to house it all.
I also didn't think they'd release NVLink for general use. From what I've read, it was mainly for pure compute systems and servers.
The 6700 series? That means only a quad core (for now); dual-channel memory, as the LGA 1151 socket boards don't support tri- or quad-channel configurations; and slower memory than the generation 5 i7s support. Seems a bit of a step backwards.
Hopefully I can find a couple of those $600 Titan Xs before they're gone.
Thanks! It's a pretty sweet card. It isn't the OC version, just the standard ROG STRIX, but since the only difference is the default clocking (no difference in the chips, no binning advantage) I didn't bother waiting. I've tried it at +120 core and +300 memory in GPU Tweak II, with a power target of 120% and a temp limit of 75 C. I ran multiple Fire Strike benchmarks and consistently got between 17,500 and 18,000. I adjusted the fan curve so it runs (slowly) all the time and ramps up a bit earlier. It never got above 65 C and never throttled the GPU.
(Edit: It runs by default at base 1080 clocks (1600/1733 & 10000), but with the OCing I did, it was running around 1950 boost.)
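For anyone puzzling over how a +120 offset lands you near 1950 MHz: here's a rough arithmetic sketch using the clocks quoted above. GPU Boost is far more dynamic than this, so treat it as illustration only.

```python
# Rough arithmetic for the offsets quoted above; GPU Boost behavior
# is simplified here, so this is an illustration, not a calculator.
rated_base, rated_boost = 1600, 1733  # MHz, stock clocks from the post
core_offset = 120                     # MHz, set in GPU Tweak II

shifted_boost = rated_boost + core_offset
print(f"Shifted rated boost: {shifted_boost} MHz")  # 1853
# The observed ~1950 MHz sits above that because GPU Boost keeps
# adding bins while the 120% power target and 75 C limit leave headroom.
```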
Now if they can just get the supply issues resolved so prices can recover to what they SHOULD be, and iron out the last of the driver issues, I think the 1080/1070 GTX cards are going to be a big boost. The Titan P will probably run similar clocks but with more memory and cores... unless they hold out and do the Titan P with GP100, or wait for Volta. I'm honestly not sure which way they'll go. From a business perspective, they'll want to milk the 1000 series for as much as they can before they drop a new 'top dog'. The 1080 Ti will probably come out either at the end of the year (for the Xmas rush) or in the spring (end of fiscal year, to boost stock profit numbers). Current rumors have the 1060 due out end of July to mid-August.
Got tired of waiting. Got a 980 Ti for just over $400 USD.
That is pretty good! Now if only you could run Iray; damn, that makes it all harder. When you wait, time seems to stand still. Just out of pure curiosity.
Anyway, it seems the future is going to be very bright, faster than expected. I have 24 months until my next upgrade anyway, and with over 12,000 cores I can't complain for now.
Ah, and by the way!
Would you be so kind as to check whether texture compression works under the render settings? I asked yesterday in the forum but nobody seemed interested in replying. It is important, as I am doing memory tests and everything seems to render at full resolution no matter the settings (public build 4.9.2.70).
Thanks in advance.
Nice catch, congrats! I saw a new one today for $423, $60 less than 2 weeks ago when I purchased one for my brother's birthday, for his Adobe Premiere workstation.
I was thinking I was going to run 2 Pascals and 2 Titans, but I could not wait any longer for the Hybrid versions, so I went with 4 Titan X Hybrids and my anxiety is over, lol. I am like someone with OCD: when I focus on something I can't let it go, it consumes my mind, and I was tired of waiting already.
Cath, how can I tell texture compression is working? I use 512 for medium and 1024 for high.
You can check the eyelashes on Genesis: use 512 and 4000 and see if the quality changes. If they get fuzzy at the lower resolution, it's working; if not, it isn't. That is the easy way. Alternatively, you can load a very detailed 4000x4000 texture; after lowering the compression threshold it should get fuzzy and lose sharpness. I tested yesterday, but even when set to 1 nothing changed.
I'd check, but I'm still running 4.8. Still waiting on them to resolve some of the DB issues before I'll risk the upgrade. I'm really hoping the latest beta fixes most of the issues, because I REALLY want the Iray and other fixes in 4.9.
I wouldn't trust the 'appearance' of the image as far as texture compression in Iray goes. Better to actually do two renders (separated by a restart of DS) with the two different settings (one with the 'low' and 'high' compression thresholds set to more than the dimensions of the textures, and one with both set to less). Use something like MSI Afterburner to watch the change in peak GPU memory usage (check it when the scene finishes loading, then again while rendering, and subtract the lower from the upper) to get the usage of the scene. If compression is working, you should see about a 10% - 30% reduction (depending on the scene size and how many textures of that size are present).
If you don't see it taking less memory, then the compression isn't kicking in. Visual artifacts might be something else entirely.
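If anyone wants to automate the two readings instead of eyeballing Afterburner, here is a minimal sketch. It assumes nvidia-smi is on your PATH (it ships with the NVIDIA driver) and only reads the first GPU:

```python
# Minimal sketch of the memory test described above: one reading after
# the scene loads, one at render peak; the difference is the
# render-time footprint for the current compression setting.
import subprocess

def gpu_memory_used_mib(gpu=0):
    """Current memory usage of one GPU in MiB, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--id={gpu}",
         "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip())

before = gpu_memory_used_mib()
input("Scene loaded. Start the render, press Enter at peak usage...")
after = gpu_memory_used_mib()
print(f"Render footprint: {after - before} MiB")
# Repeat with the other compression thresholds (after restarting DS)
# and compare; working compression should show roughly 10-30% less.
```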
...with 12,000 cores, how good is the response in the viewport in Iray view mode?
I know rendering itself should be pretty fast.
For now I can only dream
(and keep playing the Megabucks Lotto) 
That is when I noticed there is NO difference in memory usage when I ran some tests, but the compression does affect the quality of the textures, making them fuzzy at a very low setting; as usual, eyelashes and straight lines are the first to be affected by texture compression.
The response is the same as OpenGL, but I see that the matte function of the ground affects rotation when using more than 6,000 cores, so having an actual 3D scene with its own floor plane increases the responsiveness when using more than 6,000 cores.
...yeah even with just two Titan-Xs in your vid clip from last year, I was very impressed with the refresh rate.
For me it's move the camera or adjust the lighting, wait about a minute or two, and finally it refreshes. Makes for very, very long setup sessions (or a lot of test renders). At least working with 3DL I have a better indication of what is going on in OpenGL, particularly when setting up lighting.
The CPU core speed is a big deal here, not just the GPU. The moment you rotate or change things, the CPU takes over; the moment you stop, it switches to the GPU, so it's kind of a hybrid. When I set the CPU cores to a very low speed, everything slows down as well, no matter how many GPU cores you are using. That is what I was talking about with the balance between GPU and CPU: any change to the scene needs to be sent to the GPU. Only rotating the camera needs less CPU power, but changing a morph or pose has to generate new geometry that must be updated on the cards. So the faster your CPU cores, the less delay you will have.
...Thanks. Definitely sticking to the 3.5 GHz hex-core CPU for the concept build, then.
Anything around 3.5 GHz and above is already better, from my testing. I wish my old i7-4790K supported more lanes; at 5 GHz I would be fine, but sadly only 2 cards are possible with this CPU. And don't obsess over PCIe x8 vs x16: the difference is so minimal you can't even notice it. Even in games it is like half a second of difference according to real game benchmarks, so investing in a CPU with more lanes than you need is a waste of budget.
If your CPU supports 16 or 28 lanes, you are good with 2 cards; if you want 3 or 4 cards, you need 40 lanes. A lot of modern motherboards use PCIe lanes for other things too, like M.2, USB 3, or WiFi cards, so pay attention (see the lane arithmetic sketched below).
Also, if you disable all the SATA ports you don't use in the BIOS, you will free up lanes that your PCIe cards can profit from.
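To make the lane math concrete, here's a toy budget. The device names and lane counts are examples, not a parts list; substitute your own CPU's figure from its spec sheet:

```python
# Toy PCIe lane budget for the advice above. Numbers are examples;
# check your CPU/chipset documentation for the real lane counts.
cpu_lanes = 28  # e.g. a 28-lane HEDT chip; a 4790K exposes only 16

devices = {
    "GPU 1 (x8 is plenty for rendering)": 8,
    "GPU 2": 8,
    "M.2 NVMe SSD": 4,
    "USB 3.1 add-in card": 4,
}

used = sum(devices.values())
print(f"Lanes used: {used} of {cpu_lanes}")
if used > cpu_lanes:
    print("Over budget: slots will drop to x4 or devices share lanes.")
else:
    print(f"{cpu_lanes - used} lanes to spare.")
```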
Interesting, hm. I just ran a test with 4000/4000 on the texture compression settings, and while the memory usage didn't change, I noticed a lot of jumping in the GPU/memory controller load when the settings were at 512/1024. At 4000, the usage stayed high more consistently, with fewer jumps...
I reinstalled DS and things are working again. The memory jumps because rendering needs a lot of VRAM no matter the compression. The compression is less aggressive than before, which is why we can't see the difference as much as in the early builds.
- - -
The GTX 1060 was officially announced today.
6 GB of VRAM
~$249 for partner cards
$299 for the Founders Edition
http://www.geforce.com/hardware/10series/geforce-gtx-1060
https://twitter.com/NVIDIAGeForce
- - -
Third-party opinion:
http://www.anandtech.com/show/10474/nvidia-announces-geforce-gtx-1060-july-19
Available to buy on July 19th.
- - -
Hmm. The announced specs for the 1060 GTX Founders Edition:
1280 CUDA cores
Base core clock 1506 MHz
Boost core clock 1708 MHz
6 GB GDDR5 on a 192-bit bus at 8 Gbps per pin
Power consumption 120 W max
Looks pretty good for the price, and that memory config works out to 192 GB/s of bandwidth (quick check below). A pretty decent upgrade from the 960 GTX.
(Though I'm waiting for a 1050 or 1040 to use as a 'monitors only' card. Something around $100.)
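The "8 Gbps" in spec sheets is the per-pin data rate, not the total bandwidth; multiplying by the bus width gives the real figure. A quick check:

```python
# Quick check on the 1060 memory spec quoted above:
# bandwidth = per-pin data rate x bus width / 8 bits per byte.
data_rate_gbps = 8      # GDDR5 per-pin speed
bus_width_bits = 192    # memory bus width

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 192 GB/s
```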
Dang it! I was hovering over Cmd+V, ready to run back here and paste this link.
Oh well. I'm still waiting for the GP102/1080 Ti with the 3840 cores.
Still no CUDA 8, or did that happen?
I saw it this morning. A good one on a modest budget, with a nice amount of VRAM for those who don't want to break the bank and still want to render with Iray instead of using the CPU. The speed will not be crazy fast, but better than any CPU or 4 GB card.
Guys, the 4th Titan X Hybrid I purchased was a fraud, a fake. It was a stock 1-year-old Titan X with its BIOS manually flashed to Superclocked (voiding the warranty), fitted with an EVGA Hydro kit and listed as an original Titan X Hydro. Amazon customer service has a protection plan, so I am guaranteed to get all my money back; the process has already started. When I saw the card today I knew something was odd about it compared to my original Hydro, and I contacted EVGA for more advice and we found out the truth.
People are buying new boxes on eBay and restocking old parts. They have no brain, or are they hoping some idiot orders it and never registers the card for warranty, or what? How stupid.
Well, I wasted a week. I've already ordered a new one, so another 5 days of waiting to finalize my rig.
So if you see anything below $750, used, new, or open box, make sure you ask the seller about the BIOS version.
The BIOS version for the Titan X Hybrid is 84.00.45.00.90 (the one with the metal shroud and LED; only the previous black one has some issues, which is why EVGA upgraded it). For the Titan X Superclocked, the BIOS version is 84.00.1F.00.90.
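If the card is already in hand (or the seller will run one command for you), nvidia-smi can read the VBIOS directly. A small sketch, using the two version strings from the post above; verify them with EVGA before relying on them:

```python
# Read each installed GPU's VBIOS version and compare it against the
# versions quoted above. Assumes nvidia-smi is on the PATH.
import subprocess

KNOWN = {
    "84.00.45.00.90": "Titan X Hybrid",
    "84.00.1F.00.90": "Titan X Superclocked",
}

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=vbios_version", "--format=csv,noheader"])
for i, version in enumerate(out.decode().strip().splitlines()):
    version = version.strip()
    label = KNOWN.get(version, "unknown -- ask the seller / EVGA")
    print(f"GPU {i}: VBIOS {version} -> {label}")
```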
But I guess everyone is waiting for the Titan P now. What was the core count again, over 3800 CUDA cores?
It's all speculation on the Titan P. No announcements have been made. IF they go over 3800 cores, they might as well do what I thought they would and use the GP100 chip instead of the GP104; the full GP100 has 3840 cores.
I expect they'll announce the 1080 Ti before they announce a Pascal-based Titan card. So we'll just have to wait and see...