Comments
All the reliable early press and rumors say September, maybe, for consumer GPUs... the keynote was about Ampere and data centers especially.
Yeah, after seeing the Nvidia GTC there is nothing that will interest us right now.
No, you did not miss anything; there was nothing that will interest us. We will have to wait until later in the year to find out.
Nvidia usually launches their new microarchitectures for datacenters before the consumer cards. Consumer cards are 3 to 6 months away at least.
And yet, they did tease us with something that looked very much like a consumer card sitting on the counter next to Jensen's pepper mills.
Don't try to get a clear idea of what the next-gen cards will have until they are officially announced.
It can be interesting to read about, but it is NOT worth making any decisions based on what is speculation at best and often just guesswork.
I pick up my card tomorrow... either way I can always upgrade again later, probably on a new system by then. It is interesting how our hobby and gaming keep going towards heavier hardware while everything else is going to light phone/tablet computing.
...yeah I learned that back in 2014 with all the hype around the 980 Ti having 8 GB.
On the other side, I expected the RTX Titan to maybe have its VRAM boosted to 16 GB like the Titan V, and was pleasantly surprised when it was released with 24 GB and full NVLink compatibility. Now that is worth putting away my zlotys for.
Or two systems ago, when I went with AMD because I believed the verbal snake oil that Reality was going to produce fast render times, only to have to swap it out for a 980 when we got Iray. I also have a laptop with a 1070.
I've finally gotten some clarification from the Nvidia rep.
The A100 has 40 GB of VRAM, compared to the Volta V100 at 32 GB (there was no Turing equivalent) or the P100, which had 16 GB. Make of that what you will.
The DGX A100 may have just broken AI research. If it really is half the price of the previous DGX and something like 5x more powerful, we'll get a few racks of them and dump the other stuff in the dumpster (I'm not kidding; if you like cheap used server parts, keep an eye out over the next year). The space and TCO savings this promises could completely change how the whole AI industry does HW (and there is an enormous amount of datacenter HW doing AI of various sorts right now). The specs on these are amazing, BTW, but not really relevant here.
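A quick back-of-the-envelope on why we would dump the old racks (a rough sketch that just takes the "half the price, ~5x the performance" figures at face value; none of these numbers are official):

```python
# Rough perf-per-dollar comparison, using only the claimed keynote figures.
old_dgx_price = 1.0   # normalize the previous DGX to 1 unit of cost
old_dgx_perf = 1.0    # and 1 unit of performance

new_dgx_price = 0.5   # "half the price" claim
new_dgx_perf = 5.0    # "something like 5x more powerful" claim

improvement = (new_dgx_perf / new_dgx_price) / (old_dgx_perf / old_dgx_price)
print(f"~{improvement:.0f}x more performance per dollar")  # ~10x, if the claims hold
```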
600 GB/s NVLink! Currently NVLink tops out at 200 GB/s. No idea what, if any, of that will come down to consumer cards though.
Absolutely no ETA on the A100 or a consumer card announcement. No Quadro or consumer card specs.
On NVcache, the rep had no idea what it was. Unless I missed it on some slide, there was no mention of it anywhere in the videos. There is a thing called NV cache (non-volatile cache), which is just flash memory on hybrid drives and the like.
If anyone is holding off a purchase waiting for Ampere, you could be in for a long wait.
Exactly in time to build my next system in a few years. Though I am hoping it goes more the lighter tablet-component route by then.
The crux of the rendering issue for me is twofold:
1. To be able to render a scene without it dropping to CPU due to exceeding the VRAM.
2. To be able to render a short (circa 100-frame) animation, at reasonable resolution, in a reasonable time. I don't call overnight a reasonable time, mainly because I often render a clip and find there are things that need tweaking.
I get the impression that Ampere will not provide me with more VRAM at a similar price point to my 1070 so I have to hope for other technology such as out-of-core. You and the other tech-heads can argue about whether that is possible or not but it is on my wishlist. Perhaps they can improve the denoiser but at the moment I don't use it because, even for animations, it spoils the look of the image (wet skin becomes polished plastic). Perhaps there is a better compression algorithm. I don't know enough (anything) about this technology so it is all a wishlist.
So, other avenues are possibly available to address my two points. I might move my scene over to Blender and use Eevee to render an animation. The Diffeomorphic plugin is being discussed elsewhere on the forum, but I've just learned that, while it is doing a fine job converting materials to render in Cycles, it doesn't cater for Eevee. That's a disappointment and a possible show-stopper because, as far as I am aware, Cycles is no quicker than Iray. Another alternative being discussed is Google Filament, and this is, perhaps, the best hope for animation within DAZ Studio even before Ampere. Again, I know nothing of the technology beyond what I read on these pages.
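If I do end up going that route, the Eevee side at least looks easy to script once the scene is in Blender. A minimal sketch (assuming Blender 2.8x, where Eevee lives; the frame range and output path are just placeholders):

```python
import bpy  # only available inside Blender's bundled Python

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'  # switch from Cycles to Eevee

# A short clip of roughly 100 frames, as discussed above.
scene.frame_start = 1
scene.frame_end = 100

scene.render.resolution_x = 1280       # modest resolution to keep times down
scene.render.resolution_y = 720
scene.render.filepath = "//renders/clip_"  # placeholder output path

bpy.ops.render.render(animation=True)  # render the whole frame range
```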
I think a 100 frame animation in any PBR is always going to take a lot of HW or a lot of time. By the time the HW gets to the point where one mid range consumer card can render a 1080p clip in a few minutes everyone will be on 4k and so on. If this is strictly a hobby then there are going to be limits on what can be accomplished. If you make money off this then there are serious options for getting render times down.
4x Quadro 8000s should make shortish work of any animation of the length and scale you are talking about, and that would give you 96 GB to work with. If $25-30k is too rich for you, and it is I'm sure, you could look at a pair of 2070 Supers and NVLink. That's between 14 and 16 GB, and substantially more than double the CUDA of your 1070, at about $1100.
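Rough numbers behind that claim, as a back-of-the-envelope sketch (the spec figures are from memory, so treat them as assumptions and double-check before buying anything):

```python
# Approximate spec-sheet figures (assumptions, not official quotes).
gtx_1070_cores = 1920
rtx_2070s_cores = 2560
rtx_2070s_vram_gb = 8

pair_cores = 2 * rtx_2070s_cores            # 5120 CUDA cores across the pair
core_ratio = pair_cores / gtx_1070_cores    # roughly 2.7x the 1070's CUDA count

# With NVLink memory pooling in Iray the pair shares VRAM, but some data is
# still duplicated per card, so the usable pool lands below the full 16 GB.
pooled_low, pooled_high = 14, 2 * rtx_2070s_vram_gb

print(f"{core_ratio:.1f}x the CUDA cores, roughly {pooled_low}-{pooled_high} GB usable")
```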
I'm happy to wait; I saved up for a 1080 Ti, then a 2080 Ti, then an RTX Titan, but ended up spending the cash (and some) on moving house. So although I have the cash saved again, well, I always thought the new RTXs were a rip-off; my opinion of course, as many seem happy with them.
Hell, I might even go over to Blender, as all I've really got to sort now is the strand-based hair. I'll then likely go for an AMD card, but who knows; I'll make my choice when I have useful and reliable data.
I still believe there may be a compromise route: trade some quality for speed without suffering the Iray denoiser problems. After all, game engines have to render in real time, so some of that technology might be applicable. Who knows what Filament might bring?
Of course $25k is too rich for me - I had to abandon my annual holiday to buy the 1070 and I'll probably have to do the same to upgrade. I also have to move home soon, which can be expensive. I'd be interested to compare the twin 2070 Supers against a new Ampere though - especially when the prices start to fall on the 20xx range.
I tried to stress that this was a professional event and that no gaming cards would be announced, but that there could be hints.
We didn't get a lot of hints. But yes, the new DGX is $200K. That sounds like a lot, but it is HALF the price of the last DGX and offers so much more performance. It's insane. Now consider this: Nvidia just cut the price in half, and I believe that bodes well for us in the consumer market. I am not saying Nvidia is going to cut consumer prices in half, LOL, but as I said already, the prices will certainly not be going up this generation. We might get lucky and see some small drops.
Also, after the keynote, Jensen gave us the information that Ampere will be used as the basis for their entire GPU line, meaning that YES, gamers will be getting Ampere. We can confirm this now. Obviously there are changes in going from what's in the DGX to a 3080 Ti, but the core architecture is going to be the same. Keep an eye out for any Quadro reveals; I was hoping we would get new Quadros here, but that did not happen. The Quadros are largely equivalent to what the gaming cards will be.
The NVLink used in the DGX is not anything at all like the NVLink used in Quadro or gaming cards, so it's not going to be nearly as fast there. But I would expect it to be faster than the Turing NVLinks were.
Frankly, I would be switching render engines for animation. Unless Iray gets a drastic change, you would be so much better off using something else for animation. Have you seen the Unreal 5 demo? This demo is exciting not just because it looks nice, but because of how they did it. Listen closely to this video.
Some of you may think I am crazy for even bringing up a video game engine and comparing it to Iray. But I've been telling you guys for years that game engines are rapidly progressing, and Unreal 5 represents a whole new level of design.
The biggest breakthrough: they can drop raw, super-high-poly geometry and 8K+ textures into Unreal 5, and the engine is capable of smartly using them without blowing up the VRAM budget and killing performance. They do not have to optimize them for performance and VRAM! This one single change is such a huge breakthrough. It is going to change everything. And all of this was running on a PS5. The PS5 is going to be fast, but it's not going to beat a 2080 Ti; it's more like a 2070 or 2070 Super in GPU power. It's going to have about 12 GB of VRAM, so roughly the 2080 Ti in capacity. And here it is with Unreal 5 rendering billions of triangles and massive 8K textures that a supercomputer might choke on, all at 30 frames per second at about 1440p (there was some dynamic resolution scaling). That is 1800 frames per minute. Maybe a 2080 Ti could bump it up to 4K resolution. This is a modern game engine.
Oh, and because the geometry is so dense, they do not even use normal maps. They are not needed anymore. That is how good the geometry is in this demo.
Does anybody here believe that Iray can do this? I don't think Iray could perform like this if you used one of the new $200K Ampere DGX boxes, LOL. Not even in interactive mode. Iray is rapidly becoming a dinosaur.
So instead of begging for more VRAM, why can't we have the render software handle this data more intelligently? This is what we should be asking for.
I agree but the reason I mentioned Eevee and Blender is because I know a little more about how to get a scene into Blender than I do into Unreal. So my question is: does Unreal have animation keyframing so that I could make my own animations or is it limited to a set of animation blocks similar to AniMate aniblocks? I'll watch the video with interest, however.
You can already compromise quality to gain some speed. Just render at a lower resolution. Render your animations at 720p or lower. You can also use something like Scene Optimizer to reduce texture sizes, which will speed up rendering.
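If you don't have Scene Optimizer, you can get a similar effect by batch-downscaling copies of the texture files yourself. A rough sketch with Pillow (the folder names are made up; always work on copies, never the originals in your content library):

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

src = Path("textures_full")   # copies of the original textures (placeholder path)
dst = Path("textures_half")
dst.mkdir(exist_ok=True)

for tex in src.glob("*.jpg"):
    img = Image.open(tex)
    # Halving each dimension cuts pixel count (and VRAM cost) to about a quarter.
    smaller = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    smaller.save(dst / tex.name, quality=90)
```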
But Iray is a PBR renderer. To go to the tricks game engines use would break that. As suggested above, you might want to look at Unreal.
Ok, just watched that Unreal 5 video and it is seriously impressive.
I do all that already. The best I can manage is 3 minutes per frame but that often depends upon how close to the camera my characters are (often quite close).
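Just to spell out why even that rate hurts for clips (simple arithmetic using my own numbers):

```python
frames = 100              # the ~100-frame clip mentioned earlier
minutes_per_frame = 3     # my best case per frame

hours_per_pass = frames * minutes_per_frame / 60
print(hours_per_pass)     # 5.0 hours for a single pass

# Two or three tweak-and-rerender passes pushes that to 10-15 hours.
```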
...I saw this yesterday and, while I was extremely impressed with the "cinematic" quality of both the mesh-generation and lighting engines, it all came down to the appearance of the characters, as it usually does with a game engine; I feel they fall short of matching the detail of the setting. She looked like a "game character" inserted into a highly photoreal world. A year or so ago I watched a demo of the Unity engine, and the scene (an interior setting in a room of a house) looked fantastic... until a character was inserted. For some reason, the skin and hair textures in particular just seemed sub-par compared to the appearance of the surroundings. Though I realise these engines can render quickly, even with full GI, while not placing such a drain on system resources as, say, Iray does, this one point has made me reluctant to work with them, as much of my illustration work is character based.
So yeah, still putting those zlotys away for that Turing RTX Titan.
Those are some *REALLY* good points. I may have missed the real big story of today. Thanks for pointing these things out, I glossed over it because the Daz JCMs don't work. Maybe I'll have to use Alembic for everything and give UE5 a try...
3 minutes a frame is too slow for you? What would you consider acceptable? You're running on older hardware that was mid-tier when it came out. I'm thrilled when I get a render inside of an hour, and yes, I render overnight and then sometimes have to go back and do the render over. So what if I have to add the scene back into RenderQueue 2 or 3 times? As long as it gets done eventually.
In this instance, that was probably more a "design choice", similar to games like The Last Guardian which use the same "cartoon character in a real backdrop" kind of thing. It's also a tech demo of lighting and bajillion-triangle environments, so the character wasn't likely the main focus. Being a real-time game (and not an animation), the animations aren't sequenced (well, not entirely), and there are various game things going on (to help visibility it does help to make the character stand out a bit from the environment) which would be non-issues in an animation project.
UE3 could do SSS and fairly "realistic" skin materials, and UE4's were even better. So it would be weird for UE5 not to be better still.
This is really exciting. Besides the huge increase in geometry and texture detail, is there a major difference in render quality? It looks like they are using voxel-based ray tracing according to this (which is already in UE4?).
UE5 has a mid-2021 release date?! Now I wish I hadn't seen it at all...
Yes.
...I used to do classic animation on hand-painted vinyl cels and hand-drawn/coloured card stock. If I could get down to 10-12 min per frame I'd really be "cooking" (and that's just creating the cels). Generally I would start with a "hand flip" pencil test to check motion and translate that to the finished cels, working over a light table. After all that was done, there was getting everything on the stand properly registered (particularly in the case of overlays and multiple planes) and actually shooting the sequence two frames per image, and you had to get it right the first time, as mistakes could be very expensive. Patience was one of the important qualities; without it, you ended up wasting a lot of time and money.
Yeah, Iray changed the game, as it requires a lot of GPU horsepower to make rendering "cost effective" time-wise. Putting a lot of strain on components during long render processes is a reasonable concern, which is why for a while I went back to 3DL, as I could render a very detailed scene with multiple characters, props, transmaps, reflections, etc. at a very high quality (using new tools and scripts made available) in a fraction of the time the same scene optimised for Iray took when I was still rendering on the CPU. Were it not for the drive crash 18 months ago (which took the test scene and all the settings with it), I feel I was only a short way from achieving near photoreal quality.
If it's anything like what AMD was also talking about doing with their GPUs, they mean streaming data into and out of the GPU VRAM, I'm pretty sure.
UE4 & UE5 will be able to do realistic characters.
Did you notice how fluidly the character moved? Super!