Digital Art Zone

 
   
4 of 9
4
Everything to do with Lighting in Bryce 7.1
Posted: 01 September 2012 10:10 PM   [ Ignore ]   [ # 46 ]
Active Member
Total Posts:  713
Joined  2006-05-26

I think the problem is a lack of initial feeler rays, resulting in an incomplete gathering of the environment. Fewer rays means less processing time, which would have been especially important 12 years ago. TA is likely cheating and not firing nearly the number of feeler rays it should. My thought is that, to account for this lost information, default TA fills it in somehow, which I think of as a “cap.” Pixels that struck dark areas are blended with gray to produce some of the lightness that would have been gathered from other areas of the environment if more rays had been fired. Same with bright areas: bright pixels are blended with gray to darken them, as an estimate of the darkness the pixel would otherwise have gathered if more rays had been fired at different areas of the environment. Boost Light, by contrast, doesn’t “fix” the lost information at all, leaving visible holes in the illumination as noise. My thinking is that there is only so much we can do at the end of the calculation to account for a lack of initial accuracy in the gathering process.

Your point about the stage at which the gathered information should be dithered (distributed) is important. But at best it will only help us better hide the initial problem, which is a lack of rays. What needs to happen is that enough rays are fired that the viewing perspectives of two adjacent pixels overlap a great deal. If both pixels are gathering the full environment with only a slight difference in perspective, then they will tend to arrive at very similar final colors automatically, limiting the need for last-minute fixes at all.

I don’t think rays per pixel alone is going to accomplish it. What we need is a way to control the number of rays fired by TA itself, similar to the way you can adjust photon count in Carrara and other render engines. More photons result in smoother illumination and transitions between light and shadow, as well as reduced noise… and longer render times!!
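The ray-count/noise tradeoff described above can be sketched as a toy Monte Carlo experiment (plain Python; the `environment` function is invented purely for illustration and has nothing to do with Bryce's internals):

```python
import math
import random

def environment(phi):
    """Invented sky: bright over one half of the azimuth, dark over the other."""
    return 1.0 if math.cos(phi) > 0.0 else 0.05

def gather(n_rays, seed=0):
    """Average the light seen by n random feeler rays fired from one pixel."""
    rng = random.Random(seed)
    return sum(environment(rng.uniform(0.0, 2.0 * math.pi))
               for _ in range(n_rays)) / n_rays

def spread(n_rays, trials=8):
    """How much the estimate jitters between pixels (here: between seeds)."""
    estimates = [gather(n_rays, seed) for seed in range(trials)]
    return max(estimates) - min(estimates)

# Few rays: adjacent pixels land on wildly different answers (visible noise).
# Many rays: the estimates converge on their own, no last-minute fix needed.
```

With a handful of rays each "pixel" reports a very different brightness; with hundreds they all agree to within a few percent, which is exactly the overlap-of-perspectives argument made above.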

 Signature 

Please view my Daz3d User Gallery
http://www.daz3d.com/gallery/#users/465/

Posted: 02 September 2012 01:31 AM   [ Ignore ]   [ # 47 ]
Addict
Total Posts:  2554
Joined  2004-07-06

Either that, or… use statistical methods to address the noise issue directly. If the results can be made to look right (or at least how the artist wants them to look), controlling the noise level may prove more efficient than increasing the accuracy. The lack of interest in (and use of) the premium effects in Bryce is probably to some degree down to the long render times - particularly so for people with slower CPUs.

 Signature 

Bryce Tutorials Info · Products made by Horo and myself · My gallery at DAZ 3D

Posted: 02 September 2012 02:10 AM   [ Ignore ]   [ # 48 ]
Active Member
Total Posts:  713
Joined  2006-05-26
David Brinnen - 02 September 2012 01:31 AM

Either that, or… use statistical methods to address the noise issue directly. If the results can be made to look right (or at least how the artist wants them to look), controlling the noise level may prove more efficient than increasing the accuracy. The lack of interest in (and use of) the premium effects in Bryce is probably to some degree down to the long render times - particularly so for people with slower CPUs.

Yes, I think we are after the same thing but perhaps hoping to accomplish these goals in different ways. From a statistical standpoint, it seems there is an opportunity for a shortcut in the noise handling. If, as you propose, the final gathered values of adjacent pixels were compared and processed in a way that did not limit them to a narrow band of colors, such as that of default TA, then I am all for it. My only caveat is that I have been thinking statistical fudging is exactly what has already been done with default TA, and that the comparison of pixels leads to certain other artifacts and compromises. But I am in no way someone who understands higher mathematics, so my fears could be completely unfounded.

That said, I think your idea is quite valid if it actually helps gather more information rather than attempting to hide a lack of information. In fact, I can almost imagine a way for it to be done.

Yes, as you propose, ideally the information gathered by adjacent pixels could be exchanged between them. This information would only be shared at the end of the gathering process. Even though each individual pixel sends out only a few feeler rays, it can borrow and lend a bit of information to and from the pixels adjacent to it to gain a better understanding of the environment. Ideally, this would allow a greater extraction of information from the surrounding environment. However, this would already introduce some statistical error, since a range would have to be set on the number of pixels to be compared. Plus, it would be necessary to sample a second time with the corrected values per pixel, based on the considerations of the first round of gathering. It would be very similar to the way the new AA settings work. Too wide a range and you will get a very blurry and distorted view of the environment, even though it will appear smooth and noiseless. Too narrow a range and you will gain very little in terms of noise handling, but most of the dynamics of the environment will remain intact. The comparison of pixels will mean some degree of blurring no matter how you slice it, so finding the sweet spot is the goal. I think you are right.
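The "borrow and lend between adjacent pixels" idea, and the range tradeoff that comes with it, can be sketched as a simple 1-D neighbourhood average (a toy illustration, not anything Bryce actually implements):

```python
def share_with_neighbours(values, radius):
    """Average each pixel's gathered value with the neighbours within `radius`.

    radius is the "range" in question: 0 keeps every bit of the noise, while
    a large radius smooths it away but also smears real detail (the blurring
    that cannot be avoided, no matter how you slice it).
    """
    out = []
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

On a noisy row like `[1.0, 0.0, 1.0, 0.0]`, radius 0 returns it unchanged, while radius 1 pulls every value toward the middle, trading dynamics for smoothness.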

I would hope that we could get GPU rendering for Bryce some day. I know that seems like a leap in the conversation we have going, but I really do wish it were true. I would love to have unbiased renders to compare, to see what types of shortcuts we can and cannot make with regard to gathering of the environment. With GPU rendering we could crank up the discovery rays per pixel and really get some realistic lighting in somewhat acceptable times. But with CPU rendering, I guess we will have to resort to statistical fudging to get the job done in less than a million years.

 


Posted: 02 September 2012 02:27 AM   [ Ignore ]   [ # 49 ]
Addict
Total Posts:  2554
Joined  2004-07-06

OK, let me run this past you. The problem as I see it is that the light-gathering probes are infinitely thin, and it is this very precision that is causing the issue with the noise. So what I was thinking was that Bryce needs to suspend two views of the inner world while rendering. One view, which deals with direct lighting, has to be pixel sharp; the light-gathering view needs to be “defocused” (user controlled). The sharp view would resolve direct light, bump, anisotropy, pattern (in other words, material properties) and would dictate the definition of where things begin and end. Then the “defocused” gathered light would be brought into play, but instead of being defocused across the 2D surface of the monitor - like AA softening - it would defocus across the geometry within the scene. So it would be bound by the “sharp” information and so disguise the fact that it could be gathered at lower resolution.
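A rough sketch of that two-view idea (hypothetical Python, not Bryce's code): blur the gathered light, but let a depth buffer from the "sharp" pass decide which neighbours belong to the same geometry, so the defocus follows the scene rather than the monitor surface:

```python
def defocus_gathered(indirect, depth, radius, depth_tol):
    """Blur the gathered (indirect) light, but only across pixels whose depth
    is close -- so the blur follows scene geometry instead of smearing over
    the edges that the sharp direct-light pass has already defined."""
    n = len(indirect)
    out = []
    for i in range(n):
        total, count = 0.0, 0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if abs(depth[j] - depth[i]) <= depth_tol:
                total += indirect[j]
                count += 1
        out.append(total / count)
    return out
```

With noisy light `[0.2, 0.4, 0.3, 0.9, 1.1, 1.0]` over depths `[1, 1, 1, 5, 5, 5]`, the noise on each surface averages out while the boundary between the two surfaces stays pixel sharp.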


Posted: 02 September 2012 02:35 AM   [ Ignore ]   [ # 50 ]
Addict
Total Posts:  3454
Joined  2004-10-01

Generally, I would say that more feeler rays would result in a more precise render and an accordingly longer render time - there’s no free lunch. Another path would be statistical filtering, using whatever algorithm, or cross-convolving filters with error correction and relatively large masks whose values get smaller the farther a pixel is from the centre pixel being convolved (a closing low-pass filter). Convolving filters are quite fast, and statistical filters can also be realised. But in the end, all filters are cheating and decrease the accuracy of the render. It is built-in post production, like AA-ing is (which most probably uses FFT).
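The large-mask filter with weights shrinking away from the centre can be sketched as a 1-D convolution (a toy version; a real implementation would convolve in 2-D, and often via FFT as noted above):

```python
import math

def falloff_kernel(radius, sigma):
    """Mask whose weights get smaller the farther from the centre pixel."""
    weights = [math.exp(-(d * d) / (2.0 * sigma * sigma))
               for d in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalised so brightness is preserved

def convolve(signal, kernel):
    """Slide the mask over the signal, clamping at the edges."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[k + r] * signal[min(n - 1, max(0, i + k))]
                for k in range(-r, r + 1))
            for i in range(n)]
```

Because the mask sums to 1, a flat region passes through unchanged; only the pixel-to-pixel variation (the noise, but also real detail) is attenuated - which is exactly the "cheating" being described.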

I’m not sure GPU processing would be the solution, though I’m not the expert here. GPUs are used to render scenes for games very fast. But games are like movies, and there is not much time to behold a frame. A still image is different: it needs to be of much higher quality.

Adaptive rendering would probably be a good idea, too. We’re doing something similar already: a terrain far away and partly obscured by haze doesn’t need to be at the same resolution as a terrain in the foreground. The farther a feeler ray has to travel to hit an object, the less precise it has to be. The same is true when tracing a light source. A direct, or almost direct, light source needs to be considered with more detail and precision than one that can be found only after several reflections.
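That adaptive idea could be as simple as scaling the ray budget by hit distance - a hypothetical rule of thumb, not anything Bryce exposes:

```python
import math

def rays_for_hit(distance, base_rays=64, falloff=0.1, min_rays=4):
    """Spend fewer feeler rays the farther away the first hit is: haze and
    distance hide the extra error, so the budget decays exponentially."""
    return max(min_rays, int(base_rays * math.exp(-falloff * distance)))
```

Nearby surfaces get the full budget; a terrain far off in the haze is probed with only a handful of rays, which is where the render-time savings would come from.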

 Signature 

Stuff by David Brinnen and myself · My DAZ 3D Gallery · My Website · OPC 4565

Posted: 02 September 2012 02:55 AM   [ Ignore ]   [ # 51 ]
Addict
Total Posts:  2554
Joined  2004-07-06

In other news… prompted by a question from Rashad on Bryce5.com with respect to caustics.

Preliminary tests suggest:

Caustic reflection = possible.
Caustic transmission = not good.

The optics for the TA feelers are amiss. My working theory was that it was like “looking” out from the surface that is gathering the light in random directions (prodding here and there), but otherwise still like looking out from the camera (only from the object surface). But… the feelers respond to optics in a way that is similar to the rays traced from direct lights, as opposed to the rays traced from the camera. And in this case, refraction only serves to dim the rays depending on the angle of interception with the transparent surface (and the refractive value). The rays are not diverted from their path. TA probes see the rendered environment without optical refraction. Sorry to be the bearer of bad tidings.
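For what it's worth, the diversion the TA feelers are apparently not applying is just Snell's law. A minimal 2-D sketch (hypothetical helper, vector form of the refraction formula):

```python
import math

def refract2d(dx, dy, nx, ny, eta):
    """Bend a unit direction (dx, dy) at a surface with unit normal (nx, ny).
    eta = n1 / n2 from Snell's law; returns None on total internal reflection."""
    cos_i = -(dx * nx + dy * ny)               # cosine between ray and normal
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                            # past the critical angle
    cos_t = math.sqrt(1.0 - sin2_t)
    return (eta * dx + (eta * cos_i - cos_t) * nx,
            eta * dy + (eta * cos_i - cos_t) * ny)
```

A ray hitting head-on passes straight through; at grazing angles from the dense side nothing is transmitted at all. Merely dimming the ray by incidence angle, as described above, skips this direction change entirely - which is why transmission caustics come out wrong.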


Posted: 02 September 2012 03:06 AM   [ Ignore ]   [ # 52 ]
Active Member
Total Posts:  713
Joined  2006-05-26

David,

Fine. By “sharp” I assume you mean it as the tool to define the exact scaling of the “virtual tiles?” The measurements would have to be real-world, such as centimeters, millimeters, and perhaps smaller. Some software provides this type of control already; in fact, I think it works this way in Carrara, allowing user-defined scaling for the photon mapping. Surely rays are infinitely thin by themselves, in a sense not so dissimilar to real photons. It stands to reason that if you want to catch something you would open your hand wide and not ball it up into a fist. But alas, fists are all Bryce knows how to throw, so it would take a lot of them to gather much of anything.

The defocused ray firing sounds good. Perhaps a pre-processing step to accomplish that? At some point the user would need to decide the parameters of the blurring, as you noted in your initial response.

Horo,

I hadn’t considered the vast number of ready-made filters already available for solving such problems. Though the results might run counter to realism, I can imagine that high-pass convolving filters could produce some super cool results. Maybe even negative effects.

More adaptive rendering is probably a very wise way to go about speeding things up. They must have some good schools over there where you’re from!!


Posted: 02 September 2012 03:13 AM   [ Ignore ]   [ # 53 ]
Active Member
Total Posts:  713
Joined  2006-05-26
David Brinnen - 02 September 2012 02:55 AM

In other news… prompted by a question from Rashad on Bryce5.com with respect to caustics.

Preliminary tests suggest.

Caustic reflection = possible.
Caustic transmission = not good.

The optics for the TA feelers are amiss. My working theory was that it was like “looking” out from the surface that is gathering the light in random directions (prodding here and there), but otherwise still like looking out from the camera (only from the object surface). But… the feelers respond to optics in a way that is similar to the rays traced from direct lights, as opposed to the rays traced from the camera. And in this case, refraction only serves to dim the rays depending on the angle of interception with the transparent surface (and the refractive value). The rays are not diverted from their path. TA probes see the rendered environment without optical refraction. Sorry to be the bearer of bad tidings.

That is unfortunate. This might be due to some problem or shortcut in the way refraction itself is generally handled, or it might be due to a switch being turned off. Does blurry transmission help any? Or does it hurt? Actually, it probably won’t have much effect, if any. I wonder if there is any sort of ticker within the inner workings that would allow TA rays to have their paths diverted. To my knowledge, no light rays in Bryce so far respond as they should to caustic transmission, so it is no surprise that TA rays don’t conform either.

Well, I’ll take caustic reflections for now. It’s a start.


Posted: 02 September 2012 06:32 AM   [ Ignore ]   [ # 54 ]
Addict
Total Posts:  3454
Joined  2004-10-01
Rashad Carter - 02 September 2012 03:06 AM

Horo,

I hadn’t considered the vast number of ready-made filters already available for solving such problems. Though the results might run counter to realism, I can imagine that high-pass convolving filters could produce some super cool results. Maybe even negative effects.

More adaptive rendering is probably a very wise way to go about speeding things up. They must have some good schools over there where you’re from!!

Right, that’s why I said any filtering would be built-in post production - like the hated gamma correction.

You’re again right that some super cool effects can be produced with convolving filters. The old render below shows an HDRI backdrop I acquired with a mirror ball at the time, blurred. I used a diagonal Sobel convolving filter in HDRShop (Banterle’s plug-ins, My Filter) and promptly got negative values. Bryce cannot show them, but I made all values absolute in HDRShop.
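For anyone curious why negatives appear: a diagonal Sobel-style mask is a derivative, so any gradient running against it goes below zero. A minimal sketch (toy 3×3 kernel on nested lists, not HDRShop's implementation):

```python
def diagonal_sobel(img):
    """Apply a diagonal Sobel-style derivative kernel to a 2-D grid.
    Gradients along one diagonal come out positive, the other negative."""
    kernel = [(-2, -1, 0),
              (-1,  0, 1),
              ( 0,  1, 2)]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j + 1][i + 1] * img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
    return out

def absolute(img):
    """The HDRShop trick described above: take |v| so negatives become showable."""
    return [[abs(v) for v in row] for row in img]
```

A bright band below the centre gives a positive response, the same band above gives the mirror-image negative one, and taking the absolute value folds both into something Bryce can display.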

Well, I don’t know about the schools. I’ve read some books. Then I wrote a textbook about CCD in amateur astronomy and programmed a set of graphic tools long before there was a Photoshop, including cross-convolving filters (in assembler for speed - source code on my website). The pictures I got were already 12 bits per pixel, and the display on my computer had only 4 (EGA). It took nearly 20 years until I understood that I had been experimenting with something like high dynamic range imaging and effect filtering. Hence my interest in the topic.

Adaptive rendering isn’t my idea. I read about the possibility quite a while ago. I don’t know whether it is feasible, but it seemed a good idea at the time, and I think it still is, if it can be implemented.

Image Attachments
sv.jpg

Posted: 02 September 2012 07:04 AM   [ Ignore ]   [ # 55 ]
Addict
Total Posts:  2554
Joined  2004-07-06

Three steps forwards - two steps back.  Some questions answered, new questions raised.

Bryce 20 minute scene lighting project - TA caustics effect - a tutorial by David Brinnen

Image Attachments
Ctest6_promo1.jpg

Posted: 02 September 2012 07:37 AM   [ Ignore ]   [ # 56 ]
Addict
Total Posts:  3454
Joined  2004-10-01

I did some experimenting with TA and IBL. I took Hitomi and made her some clothes, boots, and hair. I used a ready-made pose but moved the head so that she would look at the viewer.

The first render uses 3 TA optimised radials with a gel and the sun. It took 65 minutes to render.

The second has the same setup, with the addition of an HDRI rotated so that it matches the panorama on the gel of the radials. The HDRI gives additional light; the backdrop is kept rather dark to make Hitomi stand out. Then there is a negative parallel light with infinite width for shadow capture. The ground plane has a bit of diffuse to adjust the depth of the shadows. This seems to work fine through the TA optimised radials. Rendered with HDRI quality 128 and soft sun shadows; the rest of the settings are as for the TA-only render. This one took 99 minutes to render.

The shadows give away that she lifted her behind a bit, something that is not obvious in the TA-only render.

Image Attachments
Hitomi03.jpg, Hitomi04.jpg

Posted: 02 September 2012 08:36 AM   [ Ignore ]   [ # 57 ]
Addict
Total Posts:  2554
Joined  2004-07-06

The lighting on the model looks excellent, Horo, not at all like we have come to expect from Bryce. Like me, you seem to have fallen foul of some blurred-reflection oddness, do you think? But yes, it does look very promising.


Posted: 02 September 2012 09:44 AM   [ Ignore ]   [ # 58 ]
Addict
Total Posts:  3454
Joined  2004-10-01

Thank you, David. Yes, the blurry reflections - I did that on purpose, but it is not convincing. The way I set the HDRI, which has no visible sun (the camera was in the shadow), I could not have achieved the light on Hitomi as I have here.

The caustics are also a very interesting effect. A reflecting surface acts as a mirror, but it does not also cast the reflected light onto other surfaces, something we have been missing in Bryce. This caustic effect emulates that to some extent. However, in nature caustics appear from a curved transparent or reflecting surface, so it’s not quite the same thing, but it’s a promising start.


Posted: 02 September 2012 06:13 PM   [ Ignore ]   [ # 59 ]
Member
Total Posts:  138
Joined  2003-10-09
Rashad Carter - 30 August 2012 08:47 AM

I literally have to thank PJF for the idea which I then took to another level and implemented it as a full feature.

Have checked PM folder, Hansard, Honours list, and PayPal balance - but the parched and cracked tumbleweed of forgotten desolation is the same discovery always. The collar of my threadbare coat is turned up against the steady drizzle of decline as I trudge wearily past abandoned tanks of yore on the trail of tears heading back to Germany.
Or, something…


Just catching up with this enjoyable and fascinating thread. Really wish I had more time to play. Feels especially nostalgic watching David’s video when he turns the sky off and starts messing about in a small area under blackness.

 

Caustics would simply not be possible without Boost Light.

Not as good, probably not even usable - but certainly possible. This Renderosity thread from nine years ago (!) features me pratting about with caustics using Bryce 5 TA.
http://www.renderosity.com/mod/forumpro/showthread.php?message_id=1445509&ebot_calc_page#message_1445509

Of course, David B. working with today’s improved TA options is way, way ahead of my earlier fiddlings. But they do show the potential was there. “If only there could be more light,” I said at one point. Well now there is, and David (B for brilliant) has found a way of reducing its horrible side effect.


You’ve all done very well…


Posted: 02 September 2012 07:49 PM   [ Ignore ]   [ # 60 ]
Active Member
Total Posts:  927
Joined  2003-10-09
Horo - 01 September 2012 01:46 AM

Mark - we know that not everyone can purchase every product that comes in the store. That’s why a few HDRIs are supplied with Bryce. I also have a few others on my website for free. There are so many HDRIs because each one has another quality to it, but this doesn’t mean that only the one shown can be used. There will be differences and those make your render stand apart.

As for the Stanford models: I downloaded them all a few weeks ago but couldn’t find a reliable program to convert them. Most programs that claimed to be able to tried to install crap on my machine, and I had a busy time cleaning it up. I lost a full afternoon on this and finally gave up, frustrated.

I understand that one doesn’t have to use the same HDRI, and each HDRI will have a different impact, which can make something unique - desirable when creating one’s own work of art. When following a tutorial, however, using a different HDRI means your progress can begin to deviate from the instruction, making it difficult to fully follow along because your results are different. So I would prefer to use the exact same elements when possible. Fortunately, as I said, one of the HDRIs included with Bryce seems to be based on the same HDRI David used, so other than subtle differences my results were able to follow the tutorial.

On the Stanford model issue, I have good news. It seems my pronouncement that they cannot be imported into Bryce was premature. They still cannot be imported directly into Bryce, even after being converted to an OBJ using MeshLab. They are importable into Studio, however, and can be sent over the bridge to Bryce successfully. As you can see below, I have successfully imported the head of a statue named igea, a bust of someone called maxplanck, an object called a heptoroid, a human brain, the Stanford Bunny, a statue called Happy Buddha, a figurine of an armadillo-like creature, a golf ball, and an angel sculpture that looks like it might have been part of a larger work of art. Two other models still failed to import via the bridge, presumably because they were just too much for my system to handle: another angel statue called Lucy, and a totem-like statue that went with the xyzrgb dragon.

Apparently my initial assumption was wrong: the models were so complex that it seemed like they were taking an impossibly long time to import. I assumed my system had locked up, as it often does when something fails to complete transferring via the bridge, when in fact it was still processing the import. I didn’t think they would take that long, as they appeared to be less complex meshes than some of the Poser figures and objects that import into Bryce in seconds via the bridge. Apparently, though, the scanning process Stanford used is more interested in preserving the exact shape of the model accurately than in making the models practical on mainstream computers. This notion was supported by the fact that what I was able to import had much denser meshes than I was anticipating. The models that didn’t import were the most detailed of the selection, so it stands to reason they might have been too much for Bryce to digest. I’ll have to poke around MeshLab some more and see if there is a way to reduce the mesh at some point during the conversion process, and give it another try. I may also have found the angel named Lucy in a more workable form on a site boasting free 3D models, which has mostly a lot of nice cars and planes but assorted other models too.

Anyway, if you or David are interested, I’d be delighted and consider it a privilege to make these already-converted models available to you as a token of appreciation for all you do for the community. I can strip the materials, as they were added in Bryce just to make the image below more visually appealing. In fact I already did, and then saved each one in the Bryce default scene. All together in one scene with materials already applied, as seen below, it is around 167 MB zipped (170 MB unzipped). I imagine there is some file-hosting service by which I could make this available? That would simplify things, being all in one. Individually zipped, the biggest is the Buddha at almost 49 MB and the smallest is the Stanford Bunny at a little over 3 MB, with the rest somewhere in between.

Also, I had to stop midway through composing this message (phone call), and during that time I played around with MeshLab a bit more (it’s fairly intuitive with the help of mouse-overs) and found a way to reduce the mesh of the ones that didn’t import. I did four attempts, each working off the previous one and reducing further, and each time it almost cut the file size in half. None are importing into Bryce directly; each causes the same unexpected “failed creation” error message as my previous attempts. So far I’ve only tried to bring the first reduction over to Bryce via the bridge, and it seems to be resulting in a hang like before. Hopefully the next reduction will work, because if you reduce too much, the mesh begins to lose its integrity.
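The reduction step itself can be illustrated with crude vertex clustering (MeshLab's own decimation filter, a quadric edge collapse, is far smarter; this hypothetical sketch just shows the size-versus-integrity tradeoff): snap vertices to a grid and merge those that land in the same cell, so a bigger cell means a smaller file and, eventually, a mesh that falls apart.

```python
def cluster_vertices(vertices, cell):
    """Crude vertex-clustering decimation: merge all vertices that fall in the
    same grid cell of size `cell`, replacing each group with its average."""
    buckets = {}
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        bx, by, bz, n = buckets.get(key, (0.0, 0.0, 0.0, 0))
        buckets[key] = (bx + x, by + y, bz + z, n + 1)
    return [(bx / n, by / n, bz / n) for bx, by, bz, n in buckets.values()]
```

With a small cell, nearby duplicates merge and distinct features survive; crank the cell size up and whole regions collapse into single points, which is the lost integrity described above.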

Image Attachments
Stanford_Models.jpg