Overview
In this project, I built a simple image rasterizer in C++ that can render .svg files with support for anti-aliasing, transforms, color interpolation, and texture mapping. 
As a 3D artist, this project was especially interesting to me because I've always wondered how 3-dimensional objects are rendered on a screen. As I learned, it's as simple as projecting them mathematically onto a 2D plane and rendering these shapes onto the framebuffer, which is what I accomplished. I also really enjoy texture mapping, and often use 3D programs to UV map my objects, so it was especially interesting to see how UV coordinates really drive the placement of textures onto shapes. 
Part 1: Rasterizing Single-Color Triangles
The most basic polygon is the triangle, since it has the fewest possible vertices; any larger polygon can be decomposed into triangles. To build a good foundation for the rest of the project, the goal of part one was to implement a simple rasterizer that takes triangle coordinates in 2D space and a fill color, and renders the colored triangles to the screen. 
My algorithm does the following: given three 2D coordinates and a color, it first computes the triangle's bounding box from the smallest and largest x and y coordinates of its vertices. It then iterates through every pixel in the bounding box, checking whether each one is inside the triangle. This is done with a helper function that takes a sample point, shifts it by 0.5 in each direction to sample the pixel's center, and checks that the point lies on the inner side of all three edge lines, using normal vectors facing inward from the triangle's edges. Finally, if a sample point is determined to be inside the triangle, another function fills that pixel with the input color.
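As a rough sketch, the inside test above can be written with 2D edge functions: each one is equivalent to dotting the vector from an edge's start point to the sample point with that edge's inward-facing normal. The function names here are mine, not the project's actual code.

```cpp
#include <cassert>

// Signed test for the directed edge (x0,y0)->(x1,y1): positive when
// the point (px, py) lies to the left of the edge. This 2D cross
// product equals the dot product of (px-x0, py-y0) with the edge's
// normal (-(y1-y0), x1-x0).
static float edge(float x0, float y0, float x1, float y1,
                  float px, float py) {
    return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0);
}

// A point is inside when all three edge tests agree in sign; accepting
// both signs keeps the test correct for either vertex winding order.
bool inside_triangle(float x0, float y0, float x1, float y1,
                     float x2, float y2, float px, float py) {
    float e0 = edge(x0, y0, x1, y1, px, py);
    float e1 = edge(x1, y1, x2, y2, px, py);
    float e2 = edge(x2, y2, x0, y0, px, py);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```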
Below is a simple image of colored triangles output by the program, with an enlarged pixel inspector on the top corner of the red triangle.
Even though the image below looks complicated, it was also rendered using only triangles! 
Part 2: Anti-Aliasing / Supersampling 
The first image from part 1 is interesting to inspect up close: at a certain zoom level it no longer looks very triangular, and the shapes take on jagged edges. This effect is called aliasing, and it happens because the example above samples only once per pixel. To implement anti-aliasing, I divided each pixel into smaller sub-pixels (the number of which was determined by a sample rate parameter) and iterated through these, testing whether each one fell inside the triangle. If a sub-pixel was inside the triangle, it took the triangle's color; otherwise it took the background color. Finally, the sub-pixel colors were averaged to obtain the overall pixel color.
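The averaging step can be sketched as follows. This is a simplified illustration, not the project's actual code: the scene is reduced to a single coverage predicate, whereas the real rasterizer averages full RGB colors the same way.

```cpp
#include <cassert>
#include <cmath>

// Divide a pixel into a sqrt(rate) x sqrt(rate) grid of sub-pixels,
// evaluate a coverage test at each sub-pixel center, and average the
// results to get the pixel's final coverage in [0, 1].
float pixel_coverage(int px, int py, int sample_rate,
                     bool (*covered)(float, float)) {
    int n = static_cast<int>(std::sqrt(static_cast<float>(sample_rate)));
    int hits = 0;
    for (int sy = 0; sy < n; ++sy) {
        for (int sx = 0; sx < n; ++sx) {
            // Sample the center of each sub-pixel.
            float x = px + (sx + 0.5f) / n;
            float y = py + (sy + 0.5f) / n;
            if (covered(x, y)) ++hits;
        }
    }
    return static_cast<float>(hits) / (n * n);
}
```

With sample rate 1 this degenerates to the single center sample from part 1, which is why the two parts share one code path.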
Below are images of the above triangle image at sample rates 1, 4, 9 and 16. 
Sample rate 1
Sample rate 4
Sample rate 9
Sample rate 16
As you can see, the higher the sample rate, the smoother and less jagged the edges of the triangles appear. This is because with more samples, the image on the screen more accurately resembles the actual shape we're attempting to render.
Part 3: Transforms
In this part of the project, I implemented functions that allow translation, rotation, and scaling of vector shapes. Below is an unmodified image showing a robot made of triangles, and another whose colors and transformation matrices I modified so it looks like a waving pastel robot.
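These transforms can be represented as 3x3 homogeneous matrices, so they all apply to a 2D point the same way. Below is a minimal sketch under that assumption; Matrix3 is a stand-in for the project's actual matrix class.

```cpp
#include <cassert>
#include <cmath>

// Minimal row-major 3x3 matrix for 2D homogeneous transforms.
struct Matrix3 { float m[3][3]; };

Matrix3 translate(float dx, float dy) {
    return {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
}

Matrix3 scale(float sx, float sy) {
    return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
}

// Standard 2D rotation matrix; deg follows the SVG convention of
// specifying angles in degrees.
Matrix3 rotate(float deg) {
    float r = deg * 3.14159265358979f / 180.0f;
    return {{{std::cos(r), -std::sin(r), 0},
             {std::sin(r),  std::cos(r), 0},
             {0, 0, 1}}};
}

// Apply a transform to the homogeneous point (x, y, 1).
void apply(const Matrix3& t, float& x, float& y) {
    float nx = t.m[0][0] * x + t.m[0][1] * y + t.m[0][2];
    float ny = t.m[1][0] * x + t.m[1][1] * y + t.m[1][2];
    x = nx;
    y = ny;
}
```

Because every transform is a matrix, a chain of them (like the robot's arm wave) collapses into one matrix product applied per vertex.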
Part 4: Barycentric Coordinates
In this part, I added the ability to interpolate between colors assigned to a triangle's vertices. To do this, I calculated the barycentric coordinates of each sample point within the triangle. Rather than describing a point's position relative to horizontal and vertical axes, as (x, y) coordinates do, barycentric coordinates describe its position relative to the triangle's three vertices. The image below illustrates this well (courtesy of scratchapixel.com):
You can see how each color is interpolated between the red, green and blue vertices. This results in a smooth gradient across the whole shape. Below is an image output by my program, which uses barycentric coordinates to render a smooth color wheel:
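One common way to compute barycentric coordinates, sketched below under my own naming, is as ratios of signed sub-triangle areas: each coordinate measures how much of the triangle's area lies opposite its vertex, and the three always sum to 1.

```cpp
#include <cassert>
#include <cmath>

struct Bary { float alpha, beta, gamma; };

// Signed area of triangle (A, B, C) via the 2D cross product.
static float signed_area(float ax, float ay, float bx, float by,
                         float cx, float cy) {
    return 0.5f * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

// Barycentric coordinates of P in triangle (A, B, C): each is the
// ratio of a sub-triangle's area (with P replacing one vertex) to the
// whole triangle's area.
Bary barycentric(float ax, float ay, float bx, float by,
                 float cx, float cy, float px, float py) {
    float total = signed_area(ax, ay, bx, by, cx, cy);
    float alpha = signed_area(px, py, bx, by, cx, cy) / total;
    float beta  = signed_area(ax, ay, px, py, cx, cy) / total;
    return {alpha, beta, 1.0f - alpha - beta};
}

// Interpolating a per-vertex attribute (e.g. one color channel) is a
// weighted sum of the three vertex values.
float interpolate(const Bary& b, float vA, float vB, float vC) {
    return b.alpha * vA + b.beta * vB + b.gamma * vC;
}
```

At the triangle's centroid all three weights are 1/3, so the interpolated color is an even mix of the vertex colors.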
Part 5: "Pixel Sampling" for Texture Mapping
For part 5, I implemented texture mapping. My function takes the (x, y) coordinates of a sample point and converts them to (u, v) texture coordinates using barycentric interpolation. How the image wraps onto a shape depends on this mapping from (x, y) to (u, v) coordinate space. Below we can see a warping effect: the original image is a flat rectangular map. 
When enlarging or warping an image, a sample point may not line up exactly with a pixel in the texture, so a sampling scheme is necessary. I implemented two methods of pixel sampling: nearest-neighbor and bilinear. Nearest-neighbor sampling takes the texture pixel closest to the sample point and applies its color to the sample. Bilinear sampling is more faithful to the image: it samples the four surrounding points in the texture and performs a weighted linear interpolation between their colors, with weights based on each point's distance to the sample point. Below are four examples of the same image, using different sampling schemes and anti-aliasing sample rates. (Hover to reveal)
Nearest sampling @ 1 sample per pixel
Nearest sampling @ 16 samples per pixel
Bilinear sampling @ 1 sample per pixel
Bilinear sampling @ 16 samples per pixel
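The two pixel-sampling schemes can be sketched as follows. This is an illustrative single-channel version with names of my own choosing; the real texture stores RGB per texel.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// A single-channel W x H texture with clamped texel lookup.
struct Texture {
    int w, h;
    std::vector<float> texels;  // row-major: texels[y * w + x]
    float at(int x, int y) const {
        x = std::min(std::max(x, 0), w - 1);  // clamp at the borders
        y = std::min(std::max(y, 0), h - 1);
        return texels[y * w + x];
    }
};

// Nearest-neighbor: map (u, v) into texel space and take the texel
// the sample point falls in.
float sample_nearest(const Texture& t, float u, float v) {
    return t.at(static_cast<int>(std::floor(u * t.w)),
                static_cast<int>(std::floor(v * t.h)));
}

// Bilinear: blend the four texels surrounding the sample point,
// weighted by the fractional offsets s (horizontal) and k (vertical).
float sample_bilinear(const Texture& t, float u, float v) {
    float x = u * t.w - 0.5f, y = v * t.h - 0.5f;
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    float s = x - x0, k = y - y0;
    float top = (1 - s) * t.at(x0, y0)     + s * t.at(x0 + 1, y0);
    float bot = (1 - s) * t.at(x0, y0 + 1) + s * t.at(x0 + 1, y0 + 1);
    return (1 - k) * top + k * bot;
}
```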
It's clear that bilinear sampling smooths out the final image more. The effect would likely be even more visible on a lower-resolution, higher-contrast image with clearly defined outlines.
Part 6: "Level Sampling" With Mipmaps For Texture Mapping
For part 6, I created functionality that, given a sample point, chooses an appropriate mipmap (a copy of the texture down-sampled to a specific level of reduction) and samples it rather than the full-resolution texture. This technique is used frequently in game development: objects far from the camera don't need full-resolution textures, since the viewer won't notice the difference. In my images, certain areas look strange when warped (in the case above, the upper edge of the map), and sampling a down-sampled mipmap there gives better results. 
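One standard way to choose the level, sketched below with my own names, is from how far the (u, v) coordinates move between adjacent screen pixels: large steps mean the texture is being shrunk and a smaller mipmap should be used.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Choose a (fractional) mipmap level from the per-screen-pixel texture
// coordinate differences, already scaled into texel units: (du_dx,
// dv_dx) is the (u, v) step for one pixel in x, (du_dy, dv_dy) for one
// pixel in y. Level 0 is full resolution; each level up halves it.
float mipmap_level(float du_dx, float dv_dx, float du_dy, float dv_dy,
                   int num_levels) {
    float lx = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
    float ly = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
    float level = std::log2(std::max(lx, ly));
    // Clamp: magnified textures (steps under one texel) use level 0,
    // and we never index past the smallest mipmap.
    return std::min(std::max(level, 0.0f),
                    static_cast<float>(num_levels - 1));
}
```

The fractional part of this level is what the linear level-sampling mode below interpolates with.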
I implemented three level-sampling methods for the mipmaps: zero, which behaves as before and always samples the 0th (largest) mipmap; nearest, which samples the mipmap closest to the required resolution; and linear, which samples the two mipmap levels bracketing the required resolution and linearly interpolates between them. Below are four examples of my fruit bowl render, using different mipmap level-sampling and pixel-sampling schemes. (Hover to reveal)
Level 0 and nearest-neighbor pixel sampling
Nearest level sampling and nearest-neighbor pixel sampling
Linear level sampling and nearest-neighbor pixel sampling
Linear level sampling and bilinear pixel sampling (AKA trilinear filtering)