How many here are using the newer RTX GPUs?


Comments

  • outrider42 Posts: 3,679
    ebergerly said:

    I suppose another option, rather than rely on expensive hardware, is to investigate the world of digital compositing and really improve your craft and cut down render times (and do fancy effects and make changes in real time rather than have to spend long hours rendering):

    https://www.youtube.com/watch?v=gYu4esqvnQ0

    Compositing video still requires pretty good hardware. You need a lot of memory and a seriously good CPU to make good use of programs from Adobe and other vendors. It takes time to transcode video, just like it does for us to render 3D. There are numerous benchmarks built around video encoding; it matters. Nothing is real time if your computer is junk.

    The statement also omits a very important issue...where do you obtain the material for this composition in the first place? If you use real people, well now you need to pay actors and have contracts in place for using their likenesses. Who is filming with the camera? If you need 3D stuff...now you are looking right back at programs like Daz. You could get everything through stock footage, but you still have to buy it, and you cannot control what stock footage is available. This stuff isn't free and it isn't easy...just like 3D.

    Even if you do obtain everything, as I said above, you still need the hardware capable of putting it all together. I don't see any real difference between these fields in terms of time and money invested. They can even overlap since you can always use Daz renders in composition. People do that a lot, we have users who create book covers and other commercial art combining Daz renders with other images for backgrounds. So these fields certainly are not exclusive from each other.
  • bluejaunte Posts: 1,990

    You can composite a still render too, doesn't have to be video. I'm not sure he actually meant compositing though, which usually involves rendering various passes that I don't think are going to require any less hardware. If anything that is probably going to be even more time spent rendering. Maybe a bit less time spent fine-tuning the scene in Daz Studio, if that was the goal. Perhaps he meant compositing more like post work in Photoshop.

  • ebergerly Posts: 3,255

    For those who haven't already made up their minds not to use compositing and related stuff, I'd encourage you to at least learn a bit about the field and try it.

    I just started learning about it last year, prompted by a friend who even writes simple scripts to automate some complex compositing tasks. It's pretty amazing what you can do. And since you're working on 2D images, not time-consuming 3D rendering, it can be amazingly fast and/or realtime. And apparently you can extract 3D scene information and use that to do effects on your rendered 2D images. Not quite sure how that all works yet.

    The simplest example I use a lot is to render a complex, time-consuming background once, then all you need to render after that is the moving character(s), which is usually MUCH faster. Then just plop the character image on top of the background image in your compositing app and away you go. No GPU needed. And you can do stuff like realtime depth of field on the background image, and tweak it in realtime by moving a slider. Or add light rays in real time. Or motion blur. Or actually change lighting in the rendered image in real time, without re-rendering. There's a gazillion different things you can do, only limited by your imagination. Very cool. 
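    The background-plus-character trick described above boils down to the standard alpha "over" operation, which is essentially what a compositing app does per pixel. A minimal NumPy sketch; the arrays here are toy stand-ins for the two rendered images, and the function name is illustrative:

```python
import numpy as np

def composite_over(fg_rgba, bg_rgb):
    """Alpha-composite an RGBA foreground (e.g. a character render) over an RGB background.

    fg_rgba: (H, W, 4) floats in [0, 1]; bg_rgb: (H, W, 3).
    Assumes straight (non-premultiplied) alpha.
    """
    alpha = fg_rgba[..., 3:4]            # per-pixel coverage of the foreground layer
    return fg_rgba[..., :3] * alpha + bg_rgb * (1.0 - alpha)

# Toy stand-ins: a gray "background render" and a red "character" on transparency.
bg = np.full((4, 4, 3), 0.5)
fg = np.zeros((4, 4, 4))
fg[1:3, 1:3] = [1.0, 0.0, 0.0, 1.0]      # opaque red square, transparent elsewhere

out = composite_over(fg, bg)
```

    The "render the background once" savings come from the fact that this cheap per-pixel multiply-add is all that has to happen after the one expensive background render.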

    And to me, the biggest thing is you can make tweaks to your image in real time without having to go back and re-render each time you want to make a change. "Hey, my image is a bit too bright"...instead of tweaking the scene lights and re-rendering, just modify the image brightness in real time in your compositing app. Or change colors. 

    I'm guessing that pretty much the entire professional world of digital effects-type people and so on uses these techniques, so I doubt they can all be wrong.

    Personally, I honestly don't care what methods others use, I'm just suggesting that folks don't give up on something before they understand it. 

    Below is one of the first effects I did last year sometime. Built a model in Blender, rendered it, then in the compositing app I added a photo of the sky, the rendered image of the fighter, and a render layer that includes the motion vector info needed for real-time motion blur. And you just slide a slider to vary the blur. Very cool.

    Motion Blur.PNG
    961 x 543 - 986K
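    The motion-vector blur described above can be approximated by averaging copies of the image shifted along the vector. A toy NumPy sketch; a single constant vector stands in for the per-pixel motion-vector pass a real compositor would read, and `samples` plays the role of the blur slider:

```python
import numpy as np

def motion_blur(img, vec, samples=8):
    """Blur along a motion vector by averaging shifted copies of the image.

    img: (H, W, 3) floats. vec: a single (dx, dy) motion vector (a real
    compositor uses a per-pixel vector pass instead). Edges wrap around.
    """
    dx, dy = vec
    acc = np.zeros_like(img)
    for i in range(samples):
        t = i / max(samples - 1, 1)              # 0..1 along the vector
        sx, sy = int(round(dx * t)), int(round(dy * t))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / samples

img = np.zeros((6, 6, 3))
img[3, 1] = 1.0                                  # single bright pixel
out = motion_blur(img, vec=(3, 0), samples=4)    # smear 3 pixels to the right
```

    The bright pixel ends up smeared evenly across the four sampled positions, which is why nudging the slider (the sample count and vector scale) re-blurs instantly without re-rendering.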
  • bluejaunte Posts: 1,990

    The main reason compositing is used by VFX studios is to put together live action video with CG shots. Say actors in front of a green screen need to be combined with rendered parts.

  • kyoto kid Posts: 41,861

    ...one of the issues is that it can also involve the need for digital painting, particularly if shadows fall on subsequent planes. For those like myself who lack a steady hand, digital painting is not an option. If I could still paint with brushes, I would not have bothered with 3D CG.

  • ebergerly Posts: 3,255

    Like I said, I'm fairly new at this digital effects/compositing thing but so far I've never needed to hire any actors. I think most of the stuff I render is CG characters like you get in the DAZ store (G3, G8, etc.). Although you can certainly place those characters over different backgrounds if you want. I suppose kinda like they do in the big CG animation movies like Toy Story and Incredibles. And I'm sure they also do a ton of other stuff like effects (blurs, depth of field, color changes, lighting changes, etc.) like I mentioned. And I've never had to do any digital painting. Mostly it's just drag and drop and moving sliders. One thing I just started is doing "vignettes", which is taking your image and putting a barely noticeable, dark circular edge around it to kinda focus attention to the center of the image. It really makes a big difference. And if I make and render something in Blender I can just plop that rendered image over my Studio render.
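    The vignette described above is just a radial darkening mask multiplied into the image. A rough NumPy sketch; the `strength` and `softness` parameters are made-up knobs for illustration, not settings from any particular app:

```python
import numpy as np

def vignette(img, strength=0.4, softness=2.0):
    """Darken toward the corners with a smooth radial mask.

    img: (H, W, 3) floats in [0, 1]. strength=0 leaves the image untouched;
    higher softness keeps more of the center clean.
    """
    h, w = img.shape[:2]
    y = np.linspace(-1.0, 1.0, h)[:, None]
    x = np.linspace(-1.0, 1.0, w)[None, :]
    r = np.sqrt(x**2 + y**2) / np.sqrt(2.0)     # 0 at center, 1 at the corners
    mask = 1.0 - strength * r**softness
    return img * mask[..., None]

img = np.ones((5, 5, 3))                         # flat white test image
out = vignette(img, strength=0.4)
```

    The center pixel is untouched while the corners drop to 60% brightness, which is the "barely noticeable dark circular edge" effect.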

    But I'd caution folks not to do what I've done and get addicted to what are called "breakdown" videos. It's where people show how their compositing/effects were done in a particular movie. Kinda mind blowing.  

  • bluejaunte Posts: 1,990
    ebergerly said:

    Like I said, I'm fairly new at this digital effects/compositing thing but so far I've never needed to hire any actors. I think most of the stuff I render is CG characters like you get in the DAZ store (G3, G8, etc.). Although you can certainly place those characters over different backgrounds if you want. I suppose kinda like they do in the big CG animation movies like Toy Story and Incredibles. And I'm sure they also do a ton of other stuff like effects (blurs, depth of field, color changes, lighting changes, etc.) like I mentioned. And I've never had to do any digital painting. Mostly it's just drag and drop and moving sliders. One thing I just started is doing "vignettes", which is taking your image and putting a barely noticeable, dark circular edge around it to kinda focus attention to the center of the image. It really makes a big difference. And if I make and render something in Blender I can just plop that rendered image over my Studio render.

    But I'd caution folks not to do what I've done and get addicted to what are called "breakdown" videos. It's where people show how their compositing/effects were done in a particular movie. Kinda mind blowing.  

    A lot of that can just be Photoshop work too. You can render canvases and then "composite" them in Photoshop, just throw them into layers and do whatever you want with them. Vignetting, as you said, is just a darkening around the image that can be done in PS too, or even in Iray actually. There is a vignette setting in the render settings under tone mapping. Of course you can do all that in compositing software but I'd still be hesitant to recommend it over just an image editor for still renders.

    This made the rounds some years ago and shows compositing in action in The Wolf of Wall Street movie: https://www.youtube.com/watch?v=pocfRVAH9yU

  • ebergerly Posts: 3,255

    Of course you can do all that in compositing software but I'd still be hesitant to recommend that over just an image editor for still renders.

    Since you can get free, professional-grade compositing/effects software, and it has so much more capability than simple image editors (features which you can only discover once you actually try it and learn about it...), I strongly recommend that anyone who is interested in the field at least consider giving it a try. It's another one of those cases where folks don't know what they don't know unless they try it.

    As yet another example, you can set up a "depth" layer to be rendered alongside your normal render, and bring that grayscale depth image into your compositing software, which uses the grayscale values as an indicator of the depth of each point in the scene. It then applies a depth of field blur proportional to those actual scene values. And if you don't like the results you can slide a slider in real time to adjust it.
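    A crude way to see how a depth pass drives the blur: blend between the sharp image and a blurred copy, weighted by each pixel's distance from the chosen focal depth. This NumPy sketch is a toy approximation only; real compositors use proper circle-of-confusion math rather than a single box blur, and the parameter names are invented:

```python
import numpy as np

def box_blur(img, r):
    """Crude box blur of radius r via shifted copies (edges wrap around)."""
    out = np.zeros_like(img)
    n = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

def depth_dof(img, depth, focus, strength=1.0):
    """Fake depth of field: mix sharp and blurred images by |depth - focus|.

    img: (H, W, 3) floats; depth: (H, W) grayscale depth pass in [0, 1];
    focus: the depth value kept sharp (the 'slider' in a compositor).
    """
    blurred = box_blur(img, 2)
    w = np.clip(np.abs(depth - focus) * strength, 0.0, 1.0)[..., None]
    return img * (1.0 - w) + blurred * w

img = np.random.rand(8, 8, 3)
depth = np.linspace(0.0, 1.0, 8)[None, :].repeat(8, axis=0)  # ramps left to right
out = depth_dof(img, depth, focus=0.0)
```

    Pixels at the focal depth pass through untouched, while the farthest pixels are fully blurred; re-running with a new `focus` value is just this cheap blend, not a re-render, which is why the slider feels real-time.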

     

  • bluejaunte Posts: 1,990

    Never hurts to learn new stuff. I just wanted to clarify a bit what compositing software actually is. You said it yourself, you've barely scratched the surface. I just felt it's a bit misleading to advertise this as a panacea for slow hardware, which it isn't. You still need a high quality render. You can't just render a character with no background and then hope to somehow composite in a background that matches the angle and lighting without considerable effort. Where do you take this matching background from, another render? A real photo? Meanwhile, if you just use it to change colors, add a vignette or even DOF, all that can be done in Photoshop, which is capable of way more than you seem to give it credit for. And it will be massively more intuitive to the vast majority of people compared to complex node-based compositing software like Nuke that was made for professional VFX studio workflows. And it costs thousands of dollars, at least if you want to use it commercially. I can't even legally install it alongside Mari because TheFoundry does not allow mixing commercial and non-commercial licenses. Not that Nuke is the only software out there, but as far as I remember that's what you are referring to.

    Anyway, I won't say any more about it. This is really immensely off topic.

  • ebergerly Posts: 3,255
    edited April 2019

    Well I think it's appropriate in a GPU thread to show that there are alternatives to very expensive devices. 

    Here's a GIF of real time depth of field. I rendered a Blender scene, included a depth pass, loaded both into compositing software, and by manually moving the focal point it automatically computes, with virtually zero GPU or CPU usage, the depth of field effect you specify. No need for time-consuming re-rendering. And if someone like me can do this, you can be sure it's fairly straightforward and quick to set up.

    Post edited by ebergerly on
  • bluejaunte Posts: 1,990

    Yup, that's nice. Worth noting also that Blender has its own compositor. Don't know if it can do this though.

    Of course I'd have to ask, is this a situation that ever comes up in rendering? Like, you suddenly after the fact decide that the focus should be somewhere else in the scene, and then you save on hardware because you didn't have to rerender? Seems rather like an edge case. And you still did have to render the classroom anyway. So does this really effectively avoid expensive hardware?

  • ebergerly Posts: 3,255
    edited April 2019

    Clearly it's up to the individual to determine whether a tool is beneficial. Again, I just encourage folks to know and understand the tools first before deciding to dismiss, rather than dismissing first based on lack of knowledge. 

    Post edited by ebergerly on
  • bluejaunte Posts: 1,990

    Sure yeah, people have to find what works best for them. Doesn't hurt to try.

    Just remembered, it could be nice for Iray because the damn thing doesn't have autofocus! How often it happens that I want to focus on the eyes and then realize I didn't quite hit the sweet spot with the stupid manual focusing... of course I see this in the preview or early in the render if I pay attention, but still annoying. In terms of time saved though... well, having to render a depth pass, throw it all into a compositor and set it up, that takes time too. I really wouldn't wanna have to do all that for every render. But that's just me.

  • kyoto kid Posts: 41,861
    edited April 2019
    ebergerly said:

    Like I said, I'm fairly new at this digital effects/compositing thing but so far I've never needed to hire any actors. I think most of the stuff I render is CG characters like you get in the DAZ store (G3, G8, etc.). Although you can certainly place those characters over different backgrounds if you want. I suppose kinda like they do in the big CG animation movies like Toy Story and Incredibles. And I'm sure they also do a ton of other stuff like effects (blurs, depth of field, color changes, lighting changes, etc.) like I mentioned. And I've never had to do any digital painting. Mostly it's just drag and drop and moving sliders. One thing I just started is doing "vignettes", which is taking your image and putting a barely noticeable, dark circular edge around it to kinda focus attention to the center of the image. It really makes a big difference. And if I make and render something in Blender I can just plop that rendered image over my Studio render.

    But I'd caution folks not to do what I've done and get addicted to what are called "breakdown" videos. It's where people show how their compositing/effects were done in a particular movie. Kinda mind blowing.  

    ...if a shadow from an object in one render plane falls on an object in a render plane behind it, then it would have to be painted in, as the shadow-casting object would not be there when rendering the layer behind it.

    Post edited by kyoto kid on
  • ebergerly Posts: 3,255
    edited April 2019

    In fact, here is a quick DAZ render of a simple torus against a transparent background, but including the shadows it casts on a floor. Just composite that over a Blender background and you're done. 

    Shadow.JPG
    982 x 552 - 124K
    Shadow1.JPG
    949 x 597 - 105K
    Post edited by ebergerly on
  • Richard Haseltine Posts: 108,079

    Please keep replies to this thread civil and free of scorn or condescension.

  • bluejaunte Posts: 1,990
    ebergerly said:

    In fact, here is a quick DAZ render of a simple torus against a transparent background, but including the shadows it casts on a floor. Just composite that over a Blender background and you're done. 

    Now do the same thing with the composited torus so it actually makes sense inside of that scene. With light coming from the window, shadow probably on the left of the torus, material of the torus correctly interacting with the scene and bounce light from the torus affecting the surroundings. If the torus is reflective, reflect the environment correctly. Or imagine the light was coming from the front and the torus would have to cast a complex shadow on every chair and table and the whole wall with the chalkboard behind it.

    Can it be done? Probably. Is it worth it for a Daz Studio still render because "hardware is too slow"? Unlikely.

  • ebergerly Posts: 3,255
    ebergerly said:

    In fact, here is a quick DAZ render of a simple torus against a transparent background, but including the shadows it casts on a floor. Just composite that over a Blender background and you're done. 

    Now do the same thing with the composited torus so it actually makes sense inside of that scene. With light coming from the window, shadow probably on the left of the torus, material of the torus correctly interacting with the scene and bounce light from the torus affecting the surroundings. If the torus is reflective, reflect the environment correctly. Or imagine the light was coming from the front and the torus would have to cast a complex shadow on every chair and table and the whole wall with the chalkboard behind it.

    Can it be done? Probably. Is it worth it for a Daz Studio still render because "hardware is too slow"? Unlikely.

    You do raise some good points. As with any tool, a user may decide that it's maybe a good choice for some situations, but not for others. Of course, hopefully we don't decide to never ever use a hammer because it can't turn a wood screw real well. Because it's probably the best tool for the job when we have to put a nail into a 2x4. Let's just hope we KNOW to reach for the hammer in that case instead of pounding it with a screwdriver.

  • bluejaunte Posts: 1,990

    What are you talking about, I use hammers for everything. Well except to kill flies. I use chainsaws for that like any fine gentleman.

  • outrider42 Posts: 3,679
    edited April 2019
    ebergerly said:

    Well I think it's appropriate in a GPU thread to show that there are alternatives to very expensive devices. 

    Here's a GIF of real time depth of field. I rendered a Blender scene, included a depth pass, loaded both into a compositing software, and by manually moving the focal point it figures automatically, and with virtually zero GPU or CPU usage, the depth of field effect that you specify. No need for time consuming re-rendering. And if someone like me can do this, you can be sure it's fairly straightforward and quick to set up. 

    That's all well and good except you gloss over how you obtained your image in the first place...through a render. A render that required your machine to create it. And that image of a TIE fighter? Did you obtain the rights to use that image? If that came from one of the movies, there is a problem. You can't just composite any image off the internet. You need to own that image first.

    This is what your argument for composition lacks. You still need to obtain these assets, and that is going to cost money. You also cannot easily control what assets you can obtain. If you want a very specific image, you may not be able to get it from the stock image sites. Plus these stock images and effects are used in other places as well, meaning your lightning bolts will look just like the lightning bolts another person uses. The effects are canned. The result is not so different from the canned laughter in TV sitcoms.

    Let's not forget Daz Studio is also free software. And you do NOT need a Titan to use it. It helps, but we have plenty of people with much less who get by OK. BOTH of these fields are asset driven.

    If compositing is your goal, well, rendering characters with no background is extremely easy. You don't need a ton of VRAM to do that, and many GPUs can handle that task competently. I used to have a GTX 670, and that old card could render a Genesis 2 in just 5 minutes, or even less. You can get better performance with a 1050 today, and that's like $120 or less. You talk about hardware, but you have a 1080 Ti in your rig, of course the usage is low. Your 1080 Ti is driving your monitor, which allows your CPU more freedom to work in Blender. Pull that GPU out, cut your CPU clocks in half, cut your RAM to 8 GB, and see how well that software runs now with a really large image and various post effects.

    I have an app on my phone that adds cheap generic "movie" effects to photos. I dropped a nuke on my cat. Kids in the family had a blast doing silly things with it. Obviously there is potential there, and Hollywood loves compositing. But like anything, you need to invest in it to make it work well. For someone always asking for data to back up a claim, I haven't seen any actual data for purchasing the stock images you need to do this. There are free stock image sites, but you will find a pretty limited selection there. There is a Reddit thread dedicated to creating off the wall...art...with stock images and videos. There are real gems there. There are some very surreal stock images and video clips out there.

    Actual stock from shutterstock. If you like it, you can buy it!

    So, if you like this image, there are 4 image sizes available. The smallest is $2.50, the largest is $12. However, there are 4 different licenses you can buy. The "standard" license is included, but this is the most basic one. If you want to resell this image in any form it will cost an extra $50. So you are talking about a potential $62 for this one picture if you really need an image of a man in a bottle held by a sneaky cat in a hat. That is just this one image. At least with Daz Studio you can render as many different images as you want from every angle. What if you wanted to see the cat from another angle? Too bad. Dude, the cat is not even looking at the camera here. What's up with that? That really bothers me.

    Post edited by outrider42 on

  • nicstt Posts: 11,715

    Yup, that's nice. Worth noting also that Blender has its own compositor. Don't know if it can do this though.

    Of course I'd have to ask, is this a situation that ever comes up in rendering? Like, you suddenly after the fact decide that the focus should be somewhere else in the scene, and then you save on hardware because you didn't have to rerender? Seems rather like an edge case. And you still did have to render the classroom anyway. So does this really effectively avoid expensive hardware?

    You can just render the DoF pass, so if you change your mind, it is much quicker to change than having to redo it all.

  • MIH_BAD Posts: 65

    I got 4 x 1080 Ti and still need around 1 hour for a full scene (100% render). When I look at the cost of an RTX 2080 Ti or even an RTX Titan, it's 4 times more expensive than some second-hand 1080 Tis. Also, please note: the RTX 2080 Ti has more CUDA cores than the 1080 Ti.

    At this point the prices of RTX cards are too high; it's just impossible to justify replacing my cards with them.

  • Spent ages getting fed up with the GPU dropping to CPU on my older MSI laptop.

    Finally have the new laptop set up, with 4.11 and 4.12 for comparisons [using Sickleyield's benchmark test scene from DeviantArt].

    Managed to speak to a reasonable nVidia chat assistant, who pointed me to the best driver - so it's finally not dropping out randomly.

    Only had time to run a GPU-only comparison on the benchmark scene so far.

    4.11 - 1 min 28.67 s

    4.12 - 0 min 55.88 s
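    For what it's worth, the two timings above work out to roughly a 1.6x speedup for 4.12 over 4.11:

```python
# Benchmark times from the post, converted to seconds.
t_411 = 60 + 28.67   # 4.11: 1 min 28.67 s
t_412 = 55.88        # 4.12: 55.88 s

speedup = t_411 / t_412
print(f"{speedup:.2f}x")   # prints "1.59x"
```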

    See snips attached.

    Will be making CPU / combo comparisons when I have some time. Fair to say this is an improvement on the old system [now relegated to media use]  :)

    Dell Precision 7740
    64GB ram
    i9-9980HK, 8 Core
    Nvidia Quadro RTX 5000
    17.3" UltraSharp 4K IGZO

    4.11 Quadro Test.JPG
    2898 x 541 - 329K
    4.12 Quadro Test.JPG
    2769 x 395 - 187K
  • ebergerly said:

    I'm curious...for those who have already bought an RTX 2080 Ti, could you explain how you decided to spend $1,200 for one? I've got 3 GPUs across two desktops, so I don't really need another GPU, but honestly I've been a good boy and I do deserve a present. It's just that this one has been extremely difficult to come close to justifying for me since there's so little real data showing it's worth the $$.

    Am I missing something? Was it a gaming decision? Since I never play games that's not a factor. Have you seen that much of an Iray render time improvement? 

    Thanks. 

     

    As I posted earlier, I got lucky and mine was a present from the GF. Personally I couldn't justify the cost myself. I had been looking for a 1080 Ti at a decent price for a while, and I guess she heard me complain one too many times that they were getting harder to come by and the price was even crazier. She had asked why I wanted one so bad and I explained I wanted the 11 GB of memory for DS, which might explain why she got the 2080 Ti.

    Your GF got you a 2080ti?  Wow!  Nice gift!  I hope you treated her to a nice hedge trimming for that one!  
