OT: I Wrote a Ray Tracer !!!

Go on, right? Yes I did. And I'm really jazzed. For the longest time I've wanted to see what's inside the guts of a ray tracer, but avoided it because I thought it would be a huge undertaking. 

After doing a little research, I was absolutely blown away by how easy it is! You can do it in just a couple hundred lines of code. Below is my first grainy render of a simple sphere (using one sample per pixel) and another with 10 samples per pixel. And it was rendered on my GTX 1080 Ti. laugh

And I found out why you always see spheres in 3D test renders. They're super easy to calculate in a ray tracer. And doing materials is FAR easier than I expected. 

And yeah, I did take advantage of some existing libraries for vectors and some graphics stuff, but I wrote the core of the ray tracer myself in C#. 



Comments

  • nicstt Posts: 11,715

    Ha! cool.

  • Sven Dullah Posts: 7,621

    Well done! Do I even see some color bleeding? laugh Please make it Mac compatible! cool

  • ebergerly Posts: 3,255
    edited August 2018

    Yeah, this is super bare bones. There's no light source, I just defined the colors of the sphere and ground and background. So right now it's just some very simple vector calculations. You define a ray (vector) coming out of the camera for each pixel in your image, and depending on the direction of that vector it does a simple calculation (a quadratic equation) to see if any point on the ray has the same (X,Y,Z) coordinates as any point on the sphere (or the background, or the floor).

    Spheres are super easy because there's a very simple equation for all the points on the sphere: X^2 + Y^2 + Z^2 = R^2. And the vector from the camera (the ray) is just an X,Y,Z coordinate vector with increasing magnitude. You just put those together, solve the equation, and you can determine where (or if) any ray hits the sphere. And if it does, it just returns the sphere color for that particular pixel. Piece of cake.
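    The quadratic step above is only a few lines of code. Here's a minimal sketch (in Python for readability; the real thing is C#, and the function name is just for illustration):

```python
import math

def hit_sphere(center, radius, ray_origin, ray_dir):
    """Return the smallest positive t where origin + t*dir hits the sphere, or None.

    Substituting P = O + t*D into |P - C|^2 = R^2 gives a quadratic in t:
    (D.D) t^2 + 2 D.(O - C) t + |O - C|^2 - R^2 = 0
    """
    oc = [o - c for o, c in zip(ray_origin, center)]
    a = sum(d * d for d in ray_dir)
    b = 2.0 * sum(d * o for d, o in zip(ray_dir, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c                # discriminant decides hit or miss
    if disc < 0:
        return None                            # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)     # nearer of the two roots
    return t if t > 0 else None
```

    If it returns a t, the hit point is origin + t*dir, and you color that pixel with the sphere's color.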

    In this case when the rays hit the sphere they just do random bounces (assuming a diffuse/matte surface). And that's why there's a uniform shadow. 

    Anyway, my next task is to model light sources. Looks like it's fairly simple. 

    Post edited by ebergerly on
  • CGHipster Posts: 241

    Cool

  • Paintbox Posts: 1,633

    Impressive! So when are you releasing this AND when will you be overtaking IRAY as the de-facto renderer of DAZ Studio? laugh

  • ebergerly Posts: 3,255

    Okay, I added another sphere and made a metal texture. Well, I kinda copied the metal code from someone else and converted it to C#, but it's pretty straightforward. Metal and diffuse/matte are pretty easy, but the tough one is glass (dielectric). I think I'll pass on that one for now laugh 

    RayTracer2.JPG
    1071 x 750 - 45K
  • ebergerly Posts: 3,255
    edited August 2018

    Cool. I think I figured out one way to make an emissive. Just crank up the RGB values of the Lambertian (matte) material way past 1.0 and it glows. Not sure that's right though. Awful noisy....

    RayTracer3.JPG
    1069 x 744 - 124K
    Post edited by ebergerly on
  • Sven Dullah Posts: 7,621

    yes smiley So when will you post your first beachbabe render? surprise

  • ebergerly Posts: 3,255

    Sven Dullah said:

    yes smiley So when will you post your first beachbabe render? surprise

    laugh

    When you get past simple spheres and try to do complex shapes I think it gets WAY more difficult. What I'm gonna shoot for next is to do realtime updates of the scene so I can crank the light up and down and move the camera and stuff. That's about as much as my puny brain can handle... 

  • Sven Dullah Posts: 7,621

    This is very interesting, thread bookmarked wink

  • Kitsumo Posts: 1,222

    Awesome work! This is really cool.

  • Looking forward to seeing a teapot. wink

  • uezi Posts: 46

    Good job! Any chance we can look at the source and learn something?

  • ebergerly Posts: 3,255
    edited August 2018
    uezi said:

    Good job! Any chance we can look at the source and learn something?

    I've been working on a conceptual explanation that hopefully simplifies the ideas and makes them more understandable. There are only a few equations solved for each ray/pixel, so once you get the basics you can easily apply them in any language. BTW, what language do you use? It's easier if you use something with good vector and graphics libraries available. With C I think you need to build your own vector library, which is a pain. And if you're not familiar with solving simple quadratic equations using the quadratic formula, that's probably the best place to start. Much of this boils down to that.
    Post edited by ebergerly on
  • ebergerly Posts: 3,255

    Okay, I did a little more work on my ray tracer, this time on the camera model, tweaking the camera parameters so I can get some depth of field blur. It's really surprising how simple this is. It's all about vectors and geometry. You start with the camera origin at (0,0,0), define a vector that shoots out from there thru one of the pixels in your image (each pixel is a simple XYZ location in a grid in front of the camera), and into your scene. And you check to see if any point on that vector hits any point on the sphere. And since you have the XYZ location of every point on the sphere in your scene data, it's just a matter of solving a simple equation. And if the ray does hit the surface, you just return the color value at the point it hits.
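    A sketch of the pixel-to-ray part, including a simple thin-lens jitter for the depth of field blur (Python for readability; the names and parameters are illustrative, not my actual C# code):

```python
import random

def pixel_to_ray(px, py, width, height, aperture=0.0, focus_dist=1.0):
    """Map pixel (px, py) to a ray from a camera at the origin.

    The image plane sits at z = -focus_dist. For depth of field, the ray
    origin is jittered inside a lens disk of radius aperture/2, so points
    away from the focus plane get blurred.
    """
    # Pixel -> [-1, 1] viewport coordinates (aspect-corrected in x).
    u = (2.0 * (px + 0.5) / width - 1.0) * (width / height)
    v = 1.0 - 2.0 * (py + 0.5) / height
    target = (u * focus_dist, v * focus_dist, -focus_dist)  # point on the focus plane

    # Random point inside the unit disk (rejection sampling), scaled to the lens.
    while True:
        dx = random.uniform(-1.0, 1.0)
        dy = random.uniform(-1.0, 1.0)
        if dx * dx + dy * dy <= 1.0:
            break
    r = aperture / 2.0
    origin = (dx * r, dy * r, 0.0)
    direction = tuple(t - o for t, o in zip(target, origin))
    return origin, direction
```

    With aperture = 0 this collapses to the plain pinhole camera; everything is in perfect focus.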

    Anyway, this is just my next step in trying to figure all this out. BTW, notice the little shout-out to the forum in the title bar of the render window. laugh

    BTW, is a "shout-out" still a thing? 

    RayTracer5.JPG
    1069 x 747 - 64K
  • ebergerly Posts: 3,255
    edited August 2018

    So if anyone's interested in this, here's my 2 cents on how this raytracing works. 

    First, imagine you have a camera, and it's looking out onto your scene. And in between your camera and your scene is a mesh, or a grid. And this grid represents your final image. So if your final rendered image is 1920x1080 pixels, that's the size of your grid. 

    So as you look out onto your scene, you're looking thru this grid, which is kinda like you're looking outside your house thru a window screen.

    And the basic goal is to fill in each of the squares of the grid with the color you see in the scene "behind" that square. So if you're looking outside at a blue car on the street, and you focus on one square of the 1920x1080 squares (total of 2,073,600 squares in the window screen), what you see thru that particular square might be a tiny piece of the blue car's front door. So you say, "Oh, okay, that square should be blue", and you fill it in with blue. 

    And you do that for every one of the 2 million squares in the grid. But instead of doing it using real-world light rays and your eyes, you do it with geometry, because your computer doesn't have eyes and there's no real light in your computer. The way you do it is you have all the information you need about the XYZ coordinates and surface colors of everything in your scene. And you have the XYZ location of the camera. And you have the XYZ location of every square/pixel of the grid. So all you need to do is say, "Okay, I'm working on pixel 473x120 of the 1920x1080 pixels, and from my camera at (0,0,0) that's a direction of (whatever XYZ coordinates...), and that ray/vector from the camera to that pixel shoots out into the scene and hits this one sphere at location (whatever XYZ coordinates...), and I know that the sphere it hits is blue". 

    And that's basically it. That's ray tracing. 

    Well, of course it's a whole lot more complicated, because once you hit the sphere you have to bounce a ray off so it hits other stuff. So in our case, since it's a matte-finish sphere, you just take the XYZ point where the ray hit the sphere, and then calculate another vector in a random direction to make the next bounce into the scene. And so on. 

    And then you'll probably want to take the ray going thru each pixel and make some more random rays inside that pixel and average them together to clean up the image. 

    Anyway, attached is a zoomed-in render of my simple sphere with ONE ray for each pixel of a 1920x1080 image. And you can imagine how each pixel is just a color that was filled in by a ray shooting thru it and into the scene, and it decides what in the scene (either an object or background) it hit and reports that back so the pixel can be filled in.

    RayTrace1Sb.png
    1401 x 923 - 545K
    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited August 2018

    Also, as you can see from the zoomed-in image above, there's a big challenge here. You only have 2 million pixels. And by definition, a pixel is just ONE color: you have to come up with a single set of R, G & B values for each pixel. And even with 2 million pixels the image looks very grainy and noisy.

    So you have to come up with a way to make it look nicer. 

    Well, one thing you can do is, for each ray going thru the center of each pixel, you can say "Okay, well I'm gonna add some more rays thru this pixel and then average them together". 

    So in the attached image I increased this to 50 rays (aka "samples") inside each pixel (each ray going thru a different random point inside the pixel square), and averaged all the colors I saw in those rays together to come up with a final color for each pixel.
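    That per-pixel averaging loop is about this simple (a Python sketch; `trace` stands in for whatever shoots a ray thru an image-plane point and returns a color):

```python
import random

def render_pixel(px, py, samples, trace):
    """Average `samples` jittered rays inside one pixel square.

    `trace(x, y)` is a stand-in for shooting a ray through image-plane
    point (x, y) and returning an (r, g, b) color.
    """
    r = g = b = 0.0
    for _ in range(samples):
        # Random point inside the pixel square instead of its center.
        x = px + random.random()
        y = py + random.random()
        cr, cg, cb = trace(x, y)
        r += cr
        g += cg
        b += cb
    return (r / samples, g / samples, b / samples)
```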

    And next to it is the final render, and you can see it at least gives the impression that it's a lot cleaner, even though if you look at the individual pixels it's still kinda noisy.

    RayTrace1Sd.png
    1566 x 925 - 277K
    RayTrace1Sc.png
    1065 x 746 - 492K
    Post edited by ebergerly on
  • ebergerly Posts: 3,255

    And here's the result with 100 samples/rays shooting thru random points inside each pixel. So you can see that the major challenge in the quality of the image is the resolution (1920x1080 in this case). And to get around that you have to start using fancy methods to smooth it all out. That's where stuff like de-noising comes in. Your real limit is the image resolution, so everything else is just kind of averaging and making better guesses and stuff.  

    RayTrace100S.png
    1575 x 914 - 413K
  • algovincian Posts: 2,670

    Yep - what you're describing is called Monte Carlo simulation. It's a well-known approach used frequently to solve certain computational problems in science and engineering.
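    The textbook toy example of the idea is estimating pi by counting random points that land inside a quarter circle (a Python sketch, nothing to do with any particular renderer):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi.

    Throw n random points into the unit square; the fraction landing
    inside the quarter circle x^2 + y^2 <= 1 approaches pi/4.
    """
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

    More samples mean less noise in the estimate, exactly like more rays per pixel in a render.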

    I have no idea about your age, background, experience, or education, so please forgive me if this is off-base. But, if this sort of scientific study interests you, I might suggest you have a look at Matlab. It's been around for ages (I used it when I was in college a long time ago in a galaxy far far away), has tons of modules for specific areas of interest, and is a really powerful tool for scientific analysis. I find it useful for running simulations and, in particular, quantifying the results.

    You seem to have the interest and curiosity . . .

    - Greg

  • ebergerly Posts: 3,255

    algovincian said:

    Yep - what you're describing is called Monte Carlo simulation. It's a well-known approach used frequently to solve certain computational problems in science and engineering.

    Oh, so you've already written ray tracer software? 

  • I don't understand this? What exactly does it do and what is it used for?
  • algovincian Posts: 2,670
    edited August 2018

    I've never written a complete ray tracing render engine, but have been working on a different type of NPR (Non-Photorealistic Rendering) engine for decades. Here are a couple of links to recent examples of rendered output:

    - Greg

    ETA: My interest is in structures much larger than individual pixels.

    Post edited by algovincian on
  • ebergerly Posts: 3,255

    So does that just take a 3D scene and do primary rays, but no bounce rays, and come up with a kind of toon shader from the results?

  • algovincian Posts: 2,670
    edited August 2018

    I use a traditional render engine to produce a series of analysis passes, which are then interpreted and passed on to additional algorithms for additional rendering.

    Many people have asked about the process in the past few years. I've attempted to answer questions as they've been asked, so I'll be lazy and cut/paste some links to a few specific posts with info from other threads that should explain a lot:

    https://www.daz3d.com/forums/discussion/comment/980556/#Comment_980556

    https://www.daz3d.com/forums/discussion/comment/981107/#Comment_981107

    https://www.daz3d.com/forums/discussion/comment/3148991/#Comment_3148991

    There has been further development since I made those posts, but the ideas remain the same.

    - Greg

    Post edited by algovincian on
  • ebergerly Posts: 3,255

    Ahh, okay. So it sounds like you take the Iray or other render passes and do various post processing steps using the information in the passes to produce a final image? So it's not really a render engine but a post-processing exercise?  

  • algovincian Posts: 2,670

    If one is specifically talking about 3D render engines and they are defined as taking 3D info and collapsing it down to produce a 2D image, then no - my algorithms are not doing any rendering (specifically no 3D rendering). At the same time, there's much more going on than typical post processing (adjusting the hue/saturation or brightness/contrast and blending layers). Everybody is welcome to call it what they want - doesn't really matter. 

    What I'm doing is a layer of abstraction or 2 above what a typical 3D render engine does. I'm interested in structures larger than a single pixel, which is why the 3D info must be collapsed down to 2D prior to further processing, or rendering - whatever you want to call it. 

    - Greg

     

  • rayglendenning Posts: 137
    edited August 2018

    I used to play around with writing ray tracing algorithms back in the 90s, but it was amazingly slow on my Amiga, even with a Mega Midget Racer :-(

    Great fun though as a lot of new ideas were still being developed.

    Some good references here: https://en.wikipedia.org/wiki/Ray_tracing_(graphics) (it also explains the difference between ray tracing and ray casting)

    Post edited by rayglendenning on
  • algovincian Posts: 2,670

    I would add that much of the current work being done on render engines is similar in that it is also interested in structures larger than a single pixel, like AI noise reduction, etc.

    - Greg

  • ebergerly Posts: 3,255
    edited August 2018

    Anyway, back to ray tracing...

    One of the interesting and surprisingly simple aspects of this is how rays are reflected depending on whether the surface is matte or mirror-like metal. With a matte surface, rays bounce off in random directions. So programming-wise, you just set up a random number generator, and once you have the XYZ coordinates of the "hit point" on the sphere's surface, you generate a "bounce" vector with its origin at that point and a random direction. 

    But with a mirror-like surface, you determine the surface normal at the hit point, reflect the incoming ray about that normal, and send the bounce vector off in the reflected direction.

    Now with more complex surfaces it gets more complicated very quickly. 
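    Both bounce types come down to a few vector operations. A Python sketch (names are illustrative, not my actual code):

```python
import random

def reflect(d, n):
    """Mirror reflection: r = d - 2*(d.n)*n, where n is the unit surface normal."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

def random_unit_vector():
    """Random direction for a matte bounce (rejection-sample the unit ball)."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        s = sum(a * a for a in v)
        if 0.0 < s <= 1.0:
            norm = s ** 0.5
            return tuple(a / norm for a in v)

def bounce(direction, normal, matte):
    """Matte surfaces scatter randomly; mirrors reflect about the normal."""
    if matte:
        v = random_unit_vector()
        # Keep the bounce on the outside of the surface.
        if sum(a * b for a, b in zip(v, normal)) < 0:
            v = tuple(-a for a in v)
        return v
    return reflect(direction, normal)
```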

    Oh, and if you've ever wondered why ray tracing is so compute-intensive...

    Like I said, with a 1920x1080 image there are over 2 million pixels. So you start with 2 million calculations just for the primary rays. And if you have, say, 100 samples within each of those pixels, that's over 200 million calculations just to cover the primary rays. And that's before any of the bounce rays.
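    Spelled out, the arithmetic looks like this:

```python
# Back-of-the-envelope ray count for a 1920x1080 render.
width, height = 1920, 1080
pixels = width * height            # 2,073,600 pixels
samples = 100
primary_rays = pixels * samples    # 207,360,000 primary rays
# Every bounce multiplies this again, which is why GPUs with
# thousands of cores are so well suited to ray tracing.
```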

    So how many cores does your GPU have? laugh 

    Post edited by ebergerly on
  • bluejaunte Posts: 1,998

    You could check out POV-Ray, an ancient raytracer with available source code.

    https://github.com/POV-Ray/povray
