Overview
In this part of the project, I took the path tracer even further and implemented mirror, glass and microfacet materials, environment lighting, and depth of field. 
Mirror and Glass Materials
In this part, I implemented reflective and refractive materials. The mirror material approximates perfect specular reflection, where light that hits the surface reflects about the surface normal and leaves at the same angle at which it arrived.
To get the reflection off a surface, I used a function that takes an outgoing direction as input (remember that in path tracing, we work backwards from the camera toward the light source) and reflects it about the object's surface normal to get the incoming direction.
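As a minimal sketch of that function (assuming the local shading frame common in path tracers, where the surface normal is the z-axis; the Vector3D type and the name reflect are illustrative, not the actual assignment API):

    #include <cmath>

    struct Vector3D { double x, y, z; };

    // In a local shading frame whose normal is (0, 0, 1), reflecting a
    // direction about the normal simply negates its tangent components.
    Vector3D reflect(const Vector3D& wo) {
        return Vector3D{-wo.x, -wo.y, wo.z};
    }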
To get the refraction through a surface, I implemented a refraction function that takes an outgoing direction as input and returns an incoming direction based on the object's IOR (index of refraction, a number associated with the optical density of the material). I used Snell's law both to compute the refracted direction and to detect total internal reflection, which terminates the ray. I then used Schlick's approximation to decide probabilistically whether a ray reflects off the surface of the glass or refracts into (or out of) the glass material.
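A rough sketch of that logic, reusing the Vector3D type from the sketch above (the helper names and sign conventions are illustrative):

    #include <cmath>

    // Returns false on total internal reflection. In the local frame the
    // normal is (0, 0, 1), so wo.z > 0 means the ray is entering the surface.
    bool refract(const Vector3D& wo, Vector3D* wi, double ior) {
        double eta = (wo.z > 0) ? 1.0 / ior : ior;             // entering vs. exiting
        double cos2t = 1.0 - eta * eta * (1.0 - wo.z * wo.z);  // from Snell's law
        if (cos2t < 0) return false;                           // total internal reflection
        double sign = (wo.z > 0) ? -1.0 : 1.0;
        *wi = Vector3D{-eta * wo.x, -eta * wo.y, sign * sqrt(cos2t)};
        return true;
    }

    // Schlick's approximation to the Fresnel reflectance: the probability
    // that a ray reflects off the glass rather than refracting through it.
    double schlick(double cos_theta, double ior) {
        double r0 = (1.0 - ior) / (1.0 + ior);
        r0 = r0 * r0;
        return r0 + (1.0 - r0) * pow(1.0 - fabs(cos_theta), 5.0);
    }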
Take a look at the images below to see what happens as we increase the number of light bounces.
At zero bounces, we just have the light from our light source. 
At one bounce, we have direct lighting everywhere in the scene. Note the black spheres: perfectly specular BSDFs return zero when sampled by direct lighting, so the mirror and glass materials contribute nothing at this depth.
At two bounces, we finally see light everywhere in the scene. Light has reflected off the mirror sphere, while the glass sphere remains dark with a slight reflection: refracted light has entered (but not yet exited) the refractive sphere, and only the reflected portion has bounced off.
By the third bounce, we see refracted light exiting the glass sphere, and by the fourth bounce we can notice a bright spot on the ground. This is a caustic: light transmitted through a refractive object and concentrated by the object's curvature, akin to the concentration of light when you hold a magnifying glass under a bright light.
By the fourth, fifth, and hundredth bounces, notice that the concentrated light is reflected and diffused back onto the sphere and concentrated into a lighter spot on the blue wall. Also notice that as the bounce count increases, the reflected image of the glass sphere in the mirror sphere gets lighter and lighter.

Max ray depth 0
Max ray depth 1
Max ray depth 2
Max ray depth 3
Max ray depth 4
Max ray depth 5
Max ray depth 100
Microfacet Materials
For this part, I implemented microfacet materials, which simulate the behavior of real-life conductors. In essence, the surface of these materials is modeled as a collection of small microfacets, whose distribution of orientations determines how rough or smooth the material appears.
My implementation used importance sampling of the BRDF: using the inversion method, I derived sampling formulas for a probability density function that approximates the Beckmann normal distribution function. Given an outgoing direction wo, we sample a half vector h from this pdf and reflect wo about h to get the incoming direction wi.
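A minimal sketch of that sampling step, in the same local frame as the earlier sketches (the inversion formulas are the standard ones for the Beckmann distribution; drand48 stands in for whatever uniform sampler the renderer uses):

    #include <cmath>
    #include <cstdlib>

    // Sample a half vector h from (approximately) the Beckmann NDF with
    // roughness alpha, by inverting the CDFs of theta_h and phi_h.
    Vector3D sample_beckmann_h(double alpha) {
        double r1 = drand48(), r2 = drand48();
        double theta = atan(sqrt(-alpha * alpha * log(1.0 - r1)));
        double phi   = 2.0 * M_PI * r2;
        return Vector3D{sin(theta) * cos(phi),
                        sin(theta) * sin(phi),
                        cos(theta)};
    }

    // wi is wo reflected about h: wi = 2 (wo . h) h - wo.
    Vector3D reflect_about(const Vector3D& wo, const Vector3D& h) {
        double d = wo.x * h.x + wo.y * h.y + wo.z * h.z;
        return Vector3D{2.0 * d * h.x - wo.x,
                        2.0 * d * h.y - wo.y,
                        2.0 * d * h.z - wo.z};
    }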
Below are renderings of the golden dragon with alpha set to 0.5, 0.25, 0.05, and 0.005. Note that as alpha increases, the material looks rougher and more diffuse, like brushed metal; as alpha decreases, it approaches a smooth, mirror-like finish. The alpha value corresponds to the roughness of the material's surface.
alpha = 0.5
alpha = 0.25
alpha = 0.05
alpha = 0.005

Below are two images of the bunny rendered using cosine hemisphere sampling and my importance sampling, each with 64 samples per pixel and max ray depth 5. Notice how much more accurate the second image looks: although both use the same number of samples, the first is noisier and much darker than it should be. With enough samples the hemisphere-sampled image would converge to the correct result, but importance sampling gets there much more quickly.
hemisphere sampling
importance sampling
Below is an image of a lithium rabbit. I replaced the eta and k values (the index of refraction and extinction coefficient at the red, green, and blue wavelengths) in my .dae file with those of lithium.
Environment Light
In this part I implemented the use of 360-degree environment maps to simulate real-world lighting on a scene. This method maps a 2D image onto the inside of a sphere, and uses the color values at each point on the image to cast light inwards onto the scene. This is a very easy way to achieve highly realistic results, and is commonly used in the world of CG rendering.
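Concretely, looking up the light arriving from a direction means converting that direction to spherical coordinates and fetching the corresponding pixel of the map. A sketch, reusing the earlier Vector3D type (axis conventions vary between renderers, so treat these as illustrative):

    #include <cmath>

    // Map a unit direction to (u, v) in [0, 1]^2 on a lat-long environment
    // map: phi is the azimuthal angle, theta the polar angle.
    void dir_to_uv(const Vector3D& d, double* u, double* v) {
        double phi   = atan2(d.x, -d.z);   // this axis convention is a guess
        double theta = acos(d.y);          // assumes a y-up coordinate space
        *u = (phi + M_PI) / (2.0 * M_PI);
        *v = theta / M_PI;
    }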
I first implemented uniform sampling of the environment light: for each sample, I picked a uniformly random direction on the sphere and returned the resulting radiance. This was simple, but noisy. (See the images below.) I then implemented importance sampling of the environment map, since most of the light an environment casts onto a scene comes from its areas of high radiance.
To implement importance sampling, I created a probability map proportional to the local radiance values of the environment map; this determines how likely a light sample is to land at each point of the map. From it I computed the marginal distribution, a cumulative distribution function over the rows of the map, and then, using the definition of conditional probability (Bayes' rule), the conditional distributions, each a cumulative distribution function for sampling a point within a given row. Inverting these CDFs with two uniform random numbers yields (x, y) coordinates, which I converted into a direction vector in order to return the correct radiance value at that point of the map.
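A sketch of that construction and the two-step inversion (the layout and names are illustrative, not the project's actual data structures; the per-pixel probabilities are typically also weighted by sin(theta) of the row to account for solid angle):

    #include <vector>
    #include <algorithm>
    #include <cstdlib>

    // Marginal CDF over rows plus a conditional CDF within each row,
    // built from a w x h grid of unnormalized probabilities.
    struct EnvDist {
        int w, h;
        std::vector<double> marginal_cdf;     // size h
        std::vector<double> conditional_cdf;  // size w * h

        void build(const std::vector<double>& pdf) {
            marginal_cdf.assign(h, 0.0);
            conditional_cdf.assign(w * h, 0.0);
            double total = 0.0;
            for (int y = 0; y < h; ++y) {
                double row = 0.0;
                for (int x = 0; x < w; ++x) {
                    row += pdf[y * w + x];
                    conditional_cdf[y * w + x] = row;   // running row sum
                }
                for (int x = 0; x < w; ++x)
                    if (row > 0) conditional_cdf[y * w + x] /= row;
                total += row;
                marginal_cdf[y] = total;                // running column sum
            }
            for (int y = 0; y < h; ++y) marginal_cdf[y] /= total;
        }

        // Invert both CDFs with two uniform randoms to pick a pixel (x, y):
        // first a row from the marginal, then a column from that row.
        void sample(int* x, int* y) const {
            double r1 = drand48(), r2 = drand48();
            *y = std::upper_bound(marginal_cdf.begin(), marginal_cdf.end(), r1)
                 - marginal_cdf.begin();
            auto row_begin = conditional_cdf.begin() + *y * w;
            *x = std::upper_bound(row_begin, row_begin + w, r2) - row_begin;
        }
    };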

The image I used for this part of the project. 

Above is my probability debug image, displaying the different probability values at each point in the image for use with importance sampling.

Below are images of the copper and diffuse bunnies rendered using uniform and importance sampling. Notice the sheer amount of noise in the uniform-sampling images compared to the importance-sampling images, even though all four were rendered at 4 samples per pixel and 64 samples per light. The importance-sampled images achieve a much higher level of realism at the same sample count.
Uniform sampling on a copper bunny
Importance sampling on a copper bunny
Uniform sampling on a diffuse bunny
Importance sampling on a diffuse bunny
Depth of Field
In this part, I implemented a representation of a real thin lens. This contrasts with the previous parts of the project, where we used a pinhole camera and everything in the scene was in focus at once. To create out-of-focus effects, I needed to model real-life lens behavior in my renderer.
My implementation computes the focal point, which is simply where a perfectly focused (pinhole) ray would intersect the plane of focus, then samples a point on the lens and casts a ray from that point through the focal point.
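A sketch of that ray generation under stated assumptions (camera space looking down -z; pinhole_dir is the normalized direction the pinhole camera would have used for this pixel; lens_radius and focal_distance correspond to the writeup's aperture and focal distance, and the names are illustrative):

    #include <cmath>
    #include <cstdlib>

    struct Ray { Vector3D o, d; };

    Ray thin_lens_ray(const Vector3D& pinhole_dir,
                      double lens_radius, double focal_distance) {
        // Uniformly sample a point on the circular lens (sqrt for area
        // uniformity), in the z = 0 lens plane.
        double r     = lens_radius * sqrt(drand48());
        double theta = 2.0 * M_PI * drand48();
        Vector3D pLens{r * cos(theta), r * sin(theta), 0.0};

        // Where the unperturbed ray hits the plane of focus at
        // z = -focal_distance: this is the focal point.
        double t = -focal_distance / pinhole_dir.z;
        Vector3D pFocus{t * pinhole_dir.x, t * pinhole_dir.y, -focal_distance};

        // New ray: from the lens sample through the focal point.
        Vector3D d{pFocus.x - pLens.x, pFocus.y - pLens.y, pFocus.z - pLens.z};
        double len = sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return Ray{pLens, Vector3D{d.x / len, d.y / len, d.z / len}};
    }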
Below are some renders of the golden dragon with aperture 0.1 and varying focal distances (2.4, 2.5, 2.6, and 2.7). The images below all have 128 samples per pixel.
Focal Distance 2.4
Focal Distance 2.5
Focal Distance 2.6
Focal Distance 2.7
Since it may be difficult to discern the differences between the above images, I've made a gif of them below:

Notice how the plane of focus is slowly pushed back, so that the rear of the dragon comes into focus.

Below are some renders of the golden dragon with focal distance 2.5 and varying apertures (0.0, 0.05, 0.07, 0.1, and 0.15). The images below all have 128 samples per pixel.
Aperture = 0.0
Aperture = 0.05
Aperture = 0.07
Aperture = 0.1
Aperture = 0.15
Below is an animated gif of the above images, so the difference can be seen more clearly.

Notice how the size of the area in focus decreases and the amount of background blurring increases as the aperture increases. 
