Nvidia Ampere Cards Supposed Specs and More.
If anyone is holding off a purchase waiting for Ampere, you could be in for a long wait.
The crux of the rendering issue for me is twofold:
1. To be able to render a scene without it dropping to CPU due to exceeding the VRAM.
2. To be able to render a short (circa 100-frame) animation, at a reasonable resolution, in a reasonable time. I don't call overnight a reasonable time - mainly because I often render a clip and find there are things that need tweaking.
I get the impression that Ampere will not provide me with more VRAM at a similar price point to my 1070 so I have to hope for other technology such as out-of-core. You and the other tech-heads can argue about whether that is possible or not but it is on my wishlist. Perhaps they can improve the denoiser but at the moment I don't use it because, even for animations, it spoils the look of the image (wet skin becomes polished plastic). Perhaps there is a better compression algorithm. I don't know enough (anything) about this technology so it is all a wishlist.
So, other avenues are possibly available to address my two points. I might move my scene over to Blender and use Eevee to render an animation. The Diffeomorphic plugin is being discussed elsewhere on the forum but I've just learned that, while it is doing a fine job converting materials to render in Cycles, it doesn't cater for Eevee. That's a disappointment and a possible show-stopper because, as far as I am aware, Cycles is no quicker than IRay. Another alternative being discussed is Google Filament and this is, perhaps, the best hope for animation within DAZ Studio even before Ampere. Again, I know nothing of the technology beyond what I read on these pages.
I think a 100-frame animation in any PBR engine is always going to take a lot of hardware or a lot of time. By the time the hardware gets to the point where one mid-range consumer card can render a 1080p clip in a few minutes, everyone will be on 4K, and so on. If this is strictly a hobby then there are going to be limits on what can be accomplished. If you make money off this then there are serious options for getting render times down.
4x Quadro RTX 8000s should make shortish work of any animation of the length and scale you are talking about, and that would give you 96 GB to work with. If $25-30k is too rich for you (it is, I'm sure), you could look at a pair of 2070 Supers and NVLink. That's between 14 and 16 GB, and substantially more than double the CUDA cores of your 1070, at about $1,100.
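For a rough sense of how those options stack up against a 1070, here is a back-of-the-envelope sketch using published CUDA core counts and the approximate prices quoted above. Note this is illustrative only: real Iray scaling depends on the scene, and NVLink memory pooling is not a simple doubling.

```python
# Rough comparison of the options discussed above.
# Core counts are published specs; prices are the approximate
# figures from this thread, not current market prices.
# The 14 GB for the 2070 Super pair assumes NVLink pooling as discussed.
cards = {
    "GTX 1070 (current)":  {"cuda_cores": 1920,     "vram_gb": 8,  "price_usd": 0},
    "2x RTX 2070 Super":   {"cuda_cores": 2 * 2560, "vram_gb": 14, "price_usd": 1100},
    "4x Quadro RTX 8000":  {"cuda_cores": 4 * 4608, "vram_gb": 96, "price_usd": 25000},
}

baseline = cards["GTX 1070 (current)"]["cuda_cores"]
for name, c in cards.items():
    ratio = c["cuda_cores"] / baseline
    print(f"{name}: {c['cuda_cores']} CUDA cores "
          f"({ratio:.1f}x a 1070), {c['vram_gb']} GB usable")
```

The 2070 Super pair works out to 5120 cores, about 2.7x the 1070's 1920 - which is where "substantially more than double the CUDA" comes from.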
I still believe there may be a compromise route. Compromise some quality for speed without suffering the IRay denoiser problems. After all, game engines have to render real time so some of that technology might be applicable. Who knows what Filament might bring?
Of course $25k is too rich for me - I had to abandon my annual holiday to buy the 1070 and I'll probably have to do the same to upgrade. I also have to move home soon, which can be expensive. I'd be interested to compare the twin 2070 Supers against a new Ampere though - especially when the prices start to fall on the 20xx range.
Game render engines pre-render stuff too, so they aren't always compromising, just hiding the workarounds.
So I went through the 8 GTC videos and... there wasn't any guidance about desktop GPUs to be found. Did I miss something?
I tried to stress that this was a professional event, and that no gaming cards would be announced. But that there could be hints.
We didn't get a lot of hints. But yes, the new DGX is $200K. That sounds like a lot, but it is HALF the price of the last DGX, and offers so much more performance. It's insane. Now consider this: Nvidia just cut the price in half, and I believe that bodes well for us in the consumer market. I am not saying Nvidia is going to cut the prices in half, LOL, but as I said already, the prices will certainly not be going up this generation. We might get lucky and see some small drops.
Also, after the keynote, Jensen gave us the information that Ampere will be used as the basis for their entire GPU line, meaning that YES, gamers will be getting Ampere. We can confirm this now. Obviously there are changes in going from what's in the DGX to a 3080 Ti, but the core arch is going to be the same. Watch out for any Quadro reveals - I was hoping we would get new Quadros, but that did not happen. The Quadros are largely equivalent to what the gaming cards will be.
The NVLink used in the DGX is not anything at all like the NVLink used in Quadro or gaming cards. So it's not going to be nearly as fast. But I would expect it to be faster than the Turing NVLinks were.
Frankly, I would be switching render engines for animation. Unless Iray gets a drastic change, you would be so much better off using something else for animation. Have you seen the Unreal 5 demo? This demo is exciting not just because it looks nice, but because of how they did it. Listen closely to this video.
Some of you may think I am crazy for even bringing up a video game engine and comparing it to Iray. But I've been telling you guys for years that game engines are rapidly progressing, and Unreal 5 represents a whole new level of design.
The biggest breakthrough: they can drop RAW super-high-poly geometry and 8K+ textures into Unreal 5 and the engine is capable of smartly using them without blowing up the VRAM budget and killing performance. They do not have to optimize them for performance and VRAM! This one single change is such a huge breakthrough. It is going to change everything. And all of this was running on a PS5. The PS5 is going to be fast, but it's not going to beat a 2080 Ti. It's more like a 2070 or 2070 Super in GPU power. It's going to have about 12GB of VRAM, so roughly the 2080 Ti in capacity. And here it is with Unreal 5 rendering billions of triangles and massive 8K textures that a supercomputer might choke on, all at 30 frames per second at about 1440p (there was some dynamic resolution scaling). That is 1800 frames per minute. Maybe a 2080 Ti could bump it up to 4K resolution. This is a modern game engine.
Oh, and because the geometry is so dense, they do not even use normal maps. They are not needed anymore. That is how good the geometry is in this demo.
Does anybody here believe that Iray can do this? I don't think Iray could perform like this if you used one of the new $200K Ampere DGX boxes, LOL. Not even in interactive mode. Iray is rapidly becoming a dinosaur.
So instead of begging for more VRAM, why can't we have the render software handle this data more intelligently? This is what we should be asking for.
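To put the gap in concrete numbers, here is a back-of-the-envelope comparison for a 100-frame clip, using the demo's roughly 30 fps and a few minutes per frame as an illustrative Iray figure for a single consumer card (actual times vary hugely by scene and hardware):

```python
# Worked comparison for a ~100-frame clip: offline path tracing
# at a few minutes per frame vs. a realtime engine at 30 fps.
frames = 100
iray_minutes_per_frame = 3   # illustrative figure; varies by scene and hardware
realtime_fps = 30            # roughly what the UE5 demo ran at

iray_total_hours = frames * iray_minutes_per_frame / 60
realtime_total_seconds = frames / realtime_fps

print(f"Iray: about {iray_total_hours:.1f} hours")              # 5.0 hours
print(f"Realtime: about {realtime_total_seconds:.1f} seconds")  # 3.3 seconds
print(f"30 fps is {realtime_fps * 60} frames per minute")       # 1800, as above
```

Five hours versus about three seconds for the same clip - that is the scale of difference being talked about here.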
...I saw this yesterday, and while I was extremely impressed with the "cinematic" quality of both the mesh generation and lighting engines, it all came down to the appearance of the characters, as it usually does with a game engine - I feel they fall short of matching the detail of the setting. She looked like a "game character" inserted into a highly photo-real world. A year or so ago I watched a demo of the Unity engine and the scene (an interior setting in a room of a house) looked fantastic... until a character was inserted. For some reason, the skin and hair textures in particular just seemed sub-par compared to the appearance of the surroundings. Though I realise these engines can render quickly even with full GI while not placing such a drain on system resources as, say, Iray does, this one point has made me reluctant to work with them, as much of my illustration work is character-based.
So yeah, still putting those zlotys away for that Turing RTX Titan.
In this instance, that was probably more a "design choice", similar to games like The Last Guardian which use the same "cartoon character in real backdrop" kind of thing. It's also a tech demo of lighting and bajillion-triangle environments, so the character wasn't likely the main focus. Being a realtime game (and not an animation), the animations aren't sequenced (well, not entirely), and there's various game stuff going on (to help visibility, it does help to make the character stand out a bit from the environment) which would be non-issues in an animation project.
UE3 could do SSS and fairly "realistic" skin materials, and UE4's were even better. So it would be weird for UE5 not to be better still.
UE4 & UE5 will be able to do realistic characters.
Did you notice how fluidly the character moved? Super!
You can already compromise quality to gain some speed. Just render at a lower resolution - render your animations at 720p or lower. You can also use something like Scene Optimizer to reduce texture sizes, which will speed up rendering.
But iRay is a PBR engine. To go to the tricks game engines use would break that. As suggested above, you might want to look at Unreal.
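As a first-order estimate of what dropping resolution buys, here is a sketch assuming render time scales roughly with pixel count at a fixed sample count. This is a simplification - per-frame overhead and convergence behaviour vary - but it gives a ballpark:

```python
# Rough render-time scaling with output resolution, assuming
# time is proportional to pixel count at a fixed sample count.
def pixels(width, height):
    return width * height

base = pixels(1920, 1080)  # 1080p baseline
for name, (w, h) in {"720p": (1280, 720), "540p": (960, 540)}.items():
    factor = pixels(w, h) / base
    print(f"{name}: roughly {factor:.0%} of the 1080p render time")
```

So 720p is under half the pixels of 1080p, and 540p a quarter - which is why dropping resolution is the quickest lever to pull.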
I do all that already. The best I can manage is 3 minutes per frame but that often depends upon how close to the camera my characters are (often quite close).
3 minutes a frame is too slow for you?
Yes.
3 minutes a frame is too slow for you? What would you consider acceptable? You're running on older hardware that was mid-tier when it came out. I'm thrilled when I get a render inside of an hour, and yes, I render overnight and then sometimes have to go back and do the render over. So what if I have to add the scene back into RenderQueue 2 or 3 times? As long as it gets done eventually.
Nvidia Ampere Cards Supposed Specs and More.So I went through the 8 GTC videos and... there wasn't any guidance about desktop GPUs to be found. Did I miss something?
I tried to stress that this was a professional event, and that no gaming cards would be announced. But that there could be hints.
We didn't get a lot of hints. But yes, the new DGX is $200K. That sounds like a lot, but it is HALF the price of the last DGX, and offers so much more performance. Its insane. Now consider this, Nvidia just cut the price in half, I believe that bodes well for us in the consumer market. I am not saying Nvidia is going to cut the prices in half, LOL, but as I said already, the prices will certainly not be going up this generation. We might get lucky and see some small drops.
Also, after the keynote, Jensen gave us the information that Ampere will be used as the basis for their entire GPU line, meaning that YES, gamers will be getting Ampere. We can confirm this now. Obviously there are changes in going from whats in the DGX to a 3080ti, but the core arch is going to be the same. Watch out for any Quadro reveals, I was hoping we would get new Quadros, but that did not happen. The Quadros are largely equivalent to what gaming cards will be.
The Nvlink used in DGX is not anything at all like the Nvlink used in Quadro or gaming. So its not going be nearly as fast. But I would expect it to be faster than the Turing Nvlinks were.
If anyone is holding off a purchase waiting for ampere you could be in for a long wait.
The crux of the rendering issue for me is twofold:
1. To be able to render a scene without it dropping to CPU due to exceeding the VRAM.
2.To be able to render a short (circa 100 frame) animation, with reasonable resolution, in a reasonable time. I don't call overnight a reasonable time - mainly because I often render a clip and find there are things that need tweaking.
I get the impression that Ampere will not provide me with more VRAM at a similar price point to my 1070 so I have to hope for other technology such as out-of-core. You and the other tech-heads can argue about whether that is possible or not but it is on my wishlist. Perhaps they can improve the denoiser but at the moment I don't use it because, even for animations, it spoils the look of the image (wet skin becomes polished plastic). Perhaps there is a better compression algorithm. I don't know enough (anything) about this technology so it is all a wishlist.
So, other avenues are possibly available to address my two points. I might move my scene over to Blender and use Eevee to render an animation. The Diffeomorphic plugin is being discussed elsewhere on the forum but I've just learned that, while it is doing a fine job converting materials to render in Cycles, it doesn't cater for Eevee. That's a disappointment and a possible show-stopper because, as far as I am aware, Cycles is no quicker than IRay. Another alternative being discussed is Google Filament and this is, perhaps, the best hope for animation within DAZ Studio even before Ampere. Again, I know nothing of the technology beyond what I read on these pages.
Frankly I would be switching render engines for animation. Unless Iray gets a drastic change, you would be so much better off using something else for animation. Have you seen the Unreal 5 demo? This demo is exciting not just because it looks nice, but how they did it. Listen closely to this video.
Some of you may think I am crazy for even bringing up a video game engine and comparing it to Iray. But I've been telling you guys for years that game engines are rapidly progressing, and Unreal 5 represents a whole new level of design.
The biggest breakthrough: They can drop RAW super high poly geometry and 8K+ size textures into Unreal 5 and the engine is capable of smartly using them without blowing up the VRAM budget and killing performance. They do not have to optimize them for performance and VRAM! This one single change is such a huge breakthrough. It is going to change everything. And all of this was running on a PS5. The PS5 is going to be fast, but its not going to beat a 2080ti. Its more like a 2070 or 2070 Super in GPU power. Its going to have about 12GB of VRAM, so roughly the 2080ti in capacity. And here it is with Unreal 5 rendering billions of triangles and massive 8K textures that a super computer might choke on, all at 30 frame per second at about 1440p (there was some dynamic resolution scaling). That is 1800 frames per minute. Maybe a 2080ti could bump it up to 4K resolution. This is a modern game engine.
Oh, and because the geometry is so high, they do not even use normal maps. They are not needed anymore. That is how good the geometry is in this demo.
Does anybody here believe that Iray can do this? I don't think Iray could perform like this if you used one of the new $200K Amphere DGX boxes, LOL. Not even in interactive mode. Iray is rapidly becoming a dinosaur.
So instead of begging for more VRAM, why can't we have the render software handle this data more intelligently? This is what we should be asking for.
...I saw this yesterday and while I was extremely impressed with the "cinematic" quality of both the mesh generation and lighting engines, it all came down to the appearance of characters as it usually does with a game engine as I feel they fall short of matching the detail of the setting. She looked like a "game character" inserted into a highly photo real world. A year or so ago ago I watched a demo of the Uniity engine and the scene (an interior setting in a room of a house) looked fantastic...until a character was inserted. For some reason, the skin and hair textures in particular just seemed to be sub-par compared to the appearance of the surroundings. Though I realise these engines can render quickly even with full GI while not placing such a drain on system resources as say Iray does, this one point has made me reluctant to work with them as much of my illustration work is character based.
So yeah, still putting those zlotys away for that Turing RTX Titan.
Nvidia Ampere Cards Supposed Specs and More.If anyone is holding off a purchase waiting for ampere you could be in for a long wait.
The crux of the rendering issue for me is twofold:
1. To be able to render a scene without it dropping to CPU due to exceeding the VRAM.
2.To be able to render a short (circa 100 frame) animation, with reasonable resolution, in a reasonable time. I don't call overnight a reasonable time - mainly because I often render a clip and find there are things that need tweaking.
I get the impression that Ampere will not provide me with more VRAM at a similar price point to my 1070 so I have to hope for other technology such as out-of-core. You and the other tech-heads can argue about whether that is possible or not but it is on my wishlist. Perhaps they can improve the denoiser but at the moment I don't use it because, even for animations, it spoils the look of the image (wet skin becomes polished plastic). Perhaps there is a better compression algorithm. I don't know enough (anything) about this technology so it is all a wishlist.
So, other avenues are possibly available to addres my two points. I might move my scene over to Blender and use Eevee to render an animation. The Diffeomorphic plugin is being discussed elsewhere on the forum but I've just learned that, while it is doing a fine job converting materials to render in Cycles, it doesn't cater for Eevee. That's a disappointment and a possible show-stopper because, as far as I am aware, Cycles is no quicker than IRay. Another alternative being discussed if Google Filament and this is, perhaps, the best hope for animation within DAZ Studio even before Ampere. Again, I know nothing of the technology beyond what I read on these pages.
I think a 100 frame animation in any PBR is always going to take a lot of HW or a lot of time. By the time the HW gets to the point where one mid range consumer card can render a 1080p clip in a few minutes everyone will be on 4k and so on. If this is strictly a hobby then there are going to be limits on what can be accomplished. If you make money off this then there are serious options for getting render times down.
4x Quadro 8000s should make shortish work of any animation of the length and scale you are talking about, and that would give you 96GB to work with. If $25-30k is too rich for you (it is, I'm sure), you could look at a pair of 2070 Supers and NVLink. That's between 14 and 16GB, and substantially more than double the CUDA cores of your 1070, at about $1100.
I still believe there may be a compromise route. Compromise some quality for speed without suffering the IRay denoiser problems. After all, game engines have to render real time so some of that technology might be applicable. Who knows what Filament might bring?
Of course $25k is too rich for me - I had to abandon my annual holiday to buy the 1070 and I'll probably have to do the same to upgrade. I also have to move home soon, which can be expensive. I'd be interested to compare the twin 2070 Supers against a new Ampere though - especially when the prices start to fall on the 20xx range.
You can already compromise quality to gain some speed. Just render at a lower resolution. Render your animations at 720p or lower. You can also use something like scene optimizer to reduce texture sizes which will speed up rendering.
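To put rough numbers on the texture side of that advice, here is a hedged back-of-envelope sketch. It assumes uncompressed RGBA with no mipmaps, which real renderers don't quite match (they compress, and mip chains add roughly a third on top), but it shows why halving texture resolution helps so much:

```python
# Rough VRAM footprint of uncompressed textures at different resolutions.
# Illustrative only: real engines compress textures and add mipmaps.

def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed size of one texture in bytes (RGBA, 8 bits/channel)."""
    return width * height * channels * bytes_per_channel

def scene_texture_gb(n_textures, side):
    """Total gigabytes for n square textures of the given side length."""
    return n_textures * texture_bytes(side, side) / (1024 ** 3)

# A character set with 20 4K maps vs the same set reduced to 2K:
print(round(scene_texture_gb(20, 4096), 2))  # 1.25 GB
print(round(scene_texture_gb(20, 2048), 2))  # 0.31 GB -- a 4x saving
```

Each halving of resolution quarters the memory, which is exactly why tools like Scene Optimizer can rescue a scene that would otherwise drop to CPU.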
But Iray is a PBR engine. To go to the tricks game engines use would break that. As suggested above you might want to look at Unreal.
I do all that already. The best I can manage is 3 minutes per frame but that often depends upon how close to the camera my characters are (often quite close).
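Those per-frame times compound quickly over a clip. A tiny sketch using the numbers from this thread (100 frames, roughly 3 minutes per frame - the poster's figures, not benchmarks):

```python
# Back-of-envelope total render time for a short animation,
# given a per-frame render time in minutes.

def total_hours(frames, minutes_per_frame):
    return frames * minutes_per_frame / 60

print(total_hours(100, 3))    # 5.0 hours -- most of a working day
print(total_hours(100, 0.5))  # ~0.83 hours if per-frame time drops to 30s
```

At 3 minutes per frame, a single tweak-and-re-render cycle of the whole 100-frame clip costs about five hours, which is why "overnight" keeps coming up.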
Nvidia Ampere Cards Supposed Specs and More.
So I went through the 8 GTC videos and... there wasn't any guidance about desktop GPUs to be found. Did I miss something?
I tried to stress that this was a professional event, and that no gaming cards would be announced. But that there could be hints.
We didn't get a lot of hints. But yes, the new DGX is $200K. That sounds like a lot, but it is HALF the price of the last DGX, and offers so much more performance. It's insane. Now consider this: Nvidia just cut the price in half, and I believe that bodes well for us in the consumer market. I am not saying Nvidia is going to cut the prices in half, LOL, but as I said already, the prices will certainly not be going up this generation. We might get lucky and see some small drops.
Also, after the keynote, Jensen gave us the information that Ampere will be used as the basis for their entire GPU line, meaning that YES, gamers will be getting Ampere. We can confirm this now. Obviously there are changes in going from what's in the DGX to a 3080 Ti, but the core arch is going to be the same. Watch out for any Quadro reveals - I was hoping we would get new Quadros, but that did not happen. The Quadros are largely equivalent to what the gaming cards will be.
The NVLink used in the DGX is not anything at all like the NVLink used in Quadro or gaming cards, so it's not going to be nearly as fast. But I would expect it to be faster than the Turing NVLinks were.
Frankly I would be switching render engines for animation. Unless Iray gets a drastic change, you would be so much better off using something else for animation. Have you seen the Unreal 5 demo? This demo is exciting not just because it looks nice, but how they did it. Listen closely to this video.
Some of you may think I am crazy for even bringing up a video game engine and comparing it to Iray. But I've been telling you guys for years that game engines are rapidly progressing, and Unreal 5 represents a whole new level of design.
The biggest breakthrough: they can drop RAW super-high-poly geometry and 8K+ size textures into Unreal 5 and the engine is capable of smartly using them without blowing up the VRAM budget and killing performance. They do not have to optimize them for performance and VRAM! This one single change is such a huge breakthrough. It is going to change everything. And all of this was running on a PS5. The PS5 is going to be fast, but it's not going to beat a 2080 Ti. It's more like a 2070 or 2070 Super in GPU power. It's going to have about 12GB of VRAM, so roughly the 2080 Ti in capacity. And here it is with Unreal 5 rendering billions of triangles and massive 8K textures that a supercomputer might choke on, all at 30 frames per second at about 1440p (there was some dynamic resolution scaling). That is 1800 frames per minute. Maybe a 2080 Ti could bump it up to 4K resolution. This is a modern game engine.
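The throughput gap being described can be sanity-checked with trivial arithmetic. The 30 fps and the roughly 3 minutes per frame are the figures quoted in this thread, not measurements of mine:

```python
# Realtime engine at 30 fps vs an offline renderer at ~3 minutes per frame.

REALTIME_FPS = 30
OFFLINE_MIN_PER_FRAME = 3

realtime_frames_per_min = REALTIME_FPS * 60          # 1800, as stated above
offline_frames_per_min = 1 / OFFLINE_MIN_PER_FRAME   # ~0.33
speedup = realtime_frames_per_min / offline_frames_per_min

print(realtime_frames_per_min)  # 1800
print(round(speedup))           # 5400x -- quality trade-offs aside
```

The quality being produced at each end of that ratio is of course not the same, which is the whole PBR-vs-game-engine argument in this thread.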
Oh, and because the geometry is so dense, they do not even use normal maps. They are not needed anymore. That is how good the geometry is in this demo.
Does anybody here believe that Iray can do this? I don't think Iray could perform like this if you used one of the new $200K Ampere DGX boxes, LOL. Not even in interactive mode. Iray is rapidly becoming a dinosaur.
So instead of begging for more VRAM, why can't we have the render software handle this data more intelligently? This is what we should be asking for.
New game like render engine in the works?
Yeah, something to keep in mind here is that the officially stated poly counts for Daz figures don't mean a thing when it comes to what sort of computational load they put on external rendering engines like Iray, for the very simple reason that Iray/Octane/etc. only ever get to interact with the figures in their fully "compiled" state. Which in the case of G8F (a purportedly 17k figure) rendered under 100% default Daz Studio settings is as a 524k figure.
How do I know this? Because if you fire up the latest version of Daz Studio, load a blank scene, load a single "Genesis 8 Female Dev Load" from Smart Content, hit Render for Iray, and then jump to the log file and search for the latest occurrence of "Geometry import" you get this:
2020-05-09 20:29:19.927 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Geometry import (1 triangle object with 524k triangles, 0 fiber objects with 0 fibers (0 segments), 1 triangle instance yielding 524k triangles, 0 fiber instances yielding 0 segments) took 0.059s
The story that Genesis 8 figures are low poly count is an innocent fiction. It may load initially into Daz Studio as a 17k payload. But by the time it actually makes its way to Iray for rendering it is a 524k beast. And once again - this is without changing any DS options (eg. subD) from their defaults.
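The log-grepping step described above can be scripted. This is a sketch fitted to the sample log line quoted in this post; other Iray versions may format the line differently, so treat the regex as an assumption:

```python
import re

# Pull the most recent "Geometry import" line out of a Daz Studio log and
# report the triangle count (in thousands), as described above.
PATTERN = re.compile(
    r"Geometry import \((\d+) triangle objects? with (\d+)k triangles")

def last_triangle_count(log_text):
    """Triangles (in thousands) from the last geometry import, or None."""
    matches = PATTERN.findall(log_text)
    if not matches:
        return None
    _objects, k_triangles = matches[-1]
    return int(k_triangles)

sample = ("2020-05-09 20:29:19.927 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend "
          "info : Geometry import (1 triangle object with 524k triangles, ...")
print(last_triangle_count(sample))  # 524
```

Pointing it at the real log file (Help > Troubleshooting > View Log File) instead of the sample string is left as the obvious next step.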
I don't really understand the development issues but I was under the impression that DAZ Studio will have Google Filament at some stage. This is Open Source so how much work does it take to integrate it?
My reading in-between the lines is that Daz developers pulled support for it in the latest release cycle because of difficulties getting native 3DL/Iray shaders to visually translate properly, given that Filament does not yet seem to have fully documented/working implementations of key visual effects like subsurface scattering/volumetrics.
New game like render engine in the works?
The Scene optimizer and the Decimator are both options to reduce the "heaviness" or hardware footprint of a Genesis figure and/or props by creating a version that is less render intensive for internal use or sending to other programs/engines.
But that would still be a G8, dependent on the facilities provided by the framework. And the framework is decidedly NOT realtime friendly. The polycount is not what makes G8s heavy; it's the DQ, JCMs, JCJs, and SubD. Calculating the blended position of a vertex is a sum-of-products operation that CPUs can do 8 at a time, and GPUs by the thousands. The poly count is not the problem.
But if you remove what is the problem, what is left is a 17K character model that, shorn of the innovative framework that makes a 17K model be able to work, is inappropriate for just about anything. Even despite the quality of its topology. There's going to have to be another model because Genesis 8 characters don't even have a base resolution model.
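The sum-of-products blend mentioned above can be sketched in a few lines. This is a pure-Python illustration of the idea, not Daz's implementation; real figure frameworks run this vectorized on CPU SIMD or the GPU, per frame, for every active morph:

```python
# Each morph contributes weight * per-vertex delta on top of the base mesh.

def blend_vertex(base_xyz, deltas, weights):
    """deltas: one (dx, dy, dz) per morph; weights: one float per morph."""
    x, y, z = base_xyz
    for (dx, dy, dz), w in zip(deltas, weights):
        x += w * dx
        y += w * dy
        z += w * dz
    return (x, y, z)

def blend_mesh(base, morph_deltas, weights):
    # morph_deltas: [morph][vertex] -> (dx, dy, dz)
    per_vertex = list(zip(*morph_deltas))  # regroup: [vertex][morph]
    return [blend_vertex(v, ds, weights) for v, ds in zip(base, per_vertex)]

base = [(0.0, 0.0, 0.0)] * 4                       # tiny 4-vertex "mesh"
morphs = [[(1.0, 1.0, 1.0)] * 4, [(1.0, 1.0, 1.0)] * 4]
print(blend_mesh(base, morphs, [0.25, 0.5])[0])    # (0.75, 0.75, 0.75)
```

The cost scales with vertices times active morphs, which is why the framework's per-vertex machinery, not the raw polycount, is what makes the figures heavy.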
I would need to see some actual production data to believe that the very existence of these end user OPTIONS has stopped or slowed the production of "heavy" Genesis 8 content with HD morphs and 4-8K textures.
There is a cost to having that OPTION. The Genesis framework does not even have a "base resolution". It has a SubD cage, which does not represent the artist's vision for the character, that MUST be subdivided, even if only at level 0. A usable base resolution model expected to be aesthetically pleasing without any of the G8's innovations is a much different model that will not just spring into existence. I don't need production data to understand how the passage of time works; an artist can only do one thing at a time. It's going to take longer and he/she may opt not even to do one or the other. This is where the mere existence of the OPTION could affect the availability of the kinds of models I want.
The market trends and potential for growth will determine "the focus"... not you, nor I.
Never have I even implied that my personal desire mattered to anyone nor anything beyond myself. But I'm sure you appreciate that market trends and potential for growth seem like empirically observable things, but there are many places for subjectivity to creep in between the business intel analyst on the ground and the CEO. Companies do things seemingly contrary to "what they should obviously do" all the time, and miss these signals altogether.
If indeed Daz has publicly stated an interest in supporting the open source realtime "Filament" engine, it is reasonable to assume that they will at some point offer content optimized for that engine, as they are in the 3D content business... yes?? ... whether you like it... or not.
And holding man hours to produce it constant, the production of that content could reasonably be expected to affect the production of other types of content. Yes? And I'm afraid that you will have to point out where I claimed that my personal desire would have any effect at all on anything. I merely expressed my personal desire in the hopes of spawning conversation, and I suppose I succeeded better than I would have thought.
New game like render engine in the works?
The Scene optimizer and the Decimator are both options to reduce the "heaviness" or hardware footprint of a Genesis figure and/or props by creating a version that is less render intensive for internal use or sending to other programs/engines.
I used the decimator extensively for low res backgrounders in my feature length Animated Film "Galactus Rising"
I would need to see some actual production data to believe that the very existence of these end user OPTIONS has stopped or slowed the production of "heavy" Genesis 8 content with HD morphs and 4-8K textures.
Daz once had a game content branch called "Morph3D", selling game-optimized Genesis 2 content called "MCS" for Unity.
Then rebranded it "morph ID"
Then rebranded it "the Oasis" IIRC
Now it is called "Maketafi" (I'm not kidding).
All were/are dealing with game-ready figure assets and web avatars to sell to those markets.
The market trends and potential for growth will determine "the focus"... not you, nor I.
New game like render engine in the works?
Of course, I can't say. Maybe we could ignore all the other variables like support levels and consider how long it took to get IRay support. I don't really know how long that was as it was already here when I showed up, but I don't imagine it was overnight.
I've seen Vulkan mentioned too (I believe it was you who mentioned it) - is that an alternative or is it part of the Filament architecture?
It's a lower level framework, like OpenGL, that render engines like Filament could be based on.
As far back as I can remember with DAZ Studio, people have been demanding faster renders. People spend huge sums on new and faster GPUs so there is clearly a groundswell of support for something that would meet the need, if only, at the moment, to enable animation which is a pipe dream for those attempting to render hundreds of frames in IRay. Real time rendering could take DAZ Studio to a much bigger market and maybe draw some investment from the big boys like NVidia.
Fair enough. But the realtime renderer is certainly going to, in some way, lack the realism that ray tracers produce. There's no free lunch. You're going to be giving up something. For a lot of people, the trade-off will be worth it, and in that sense, the choice will be good.
My only concern is that we can't have it all and, personally, I don't want to see a move towards lighter models intended for realtime rendering because I've seen what those models look like. I came to the conclusion that I didn't want a lighter model or a less faithful render engine, I wanted a faster render box.
But this is just my personal opinion and I'm not trying to yuck anyone else's yum.
Who's Next to Join the G8 Family?
Ain't nothing new technologically worthwhile coming out with G9 if it comes this June, so it'll be a continuation of Genesis 3 really. I think maybe the Filament renderer is going to open up DAZ Studio to some OSS physics library, and then maybe we'll get Genesis 9?
Daz Studio 4.12 Pro, General Release! (*UPDATED*)
The version of Iray in the latest build requires fairly recent drivers
Does anyone else see a problem ahead if this trend continues? We've got a constantly moving target of "very recent minimum driver version" every time D|S gets a new Iray update, and not everyone that was working happily with Iray is going to have an NVidia card capable of using that minimum version.
Yeah, they definitely need to get that more forgiving generic Filament renderer out so they can always move Iray forward. If not, then really, why bother ever updating DAZ Studio again?
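The "minimum driver version" check behind that moving target is simple to express. A hedged sketch - the version strings below are made up for illustration, not the real minimums Iray requires:

```python
# Is the installed NVIDIA driver at least the minimum the renderer needs?
# Compares dotted version strings numerically, so "445.9" > "445.87" is
# correctly False only if compared as (445, 9) vs (445, 87) -- i.e. it IS
# numerically smaller, which string comparison would get wrong.

def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def driver_ok(installed, required):
    return parse_version(installed) >= parse_version(required)

print(driver_ok("441.22", "442.50"))  # False -> GPU rendering unavailable
print(driver_ok("445.87", "442.50"))  # True
```

In practice the installed version would come from querying the driver (e.g. parsing `nvidia-smi` output) rather than a hard-coded string.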
New game like render engine in the works?
Checked out https://github.com/google/filament
It looks very promising. What would be so awesome is if it would allow for offline or network rendering. Perhaps even offload rendering of animations onto a different machine. I was hoping RIB files would contain animation (it is supported in the spec) but DAZ's implementation is broken.
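The frame-splitting that such network rendering needs is straightforward. A sketch under stated assumptions: the node names are made up, and nothing here talks to a real renderer - it only decides which machine renders which frames:

```python
# Divide an animation's frame range as evenly as possible across machines.

def split_frames(first, last, nodes):
    """Assign contiguous, inclusive frame ranges to each node name."""
    total = last - first + 1
    per, extra = divmod(total, len(nodes))
    jobs, start = {}, first
    for i, node in enumerate(nodes):
        count = per + (1 if i < extra else 0)  # spread the remainder
        jobs[node] = (start, start + count - 1)
        start += count
    return jobs

print(split_frames(1, 100, ["render01", "render02", "render03"]))
# {'render01': (1, 34), 'render02': (35, 67), 'render03': (68, 100)}
```

Each machine then renders its range to numbered image files, and the clips are assembled afterwards - which is essentially how commercial render farms parallelize animation.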