Rasterization Help

Hi all, I just wanted to address some questions I’ve gotten a couple of times that might be tripping people up.

1) How do I compute Barycentric Coordinates?
Both Wikipedia and Wolfram MathWorld have great articles on this. The Wikipedia article steps through the math in a particularly useful way, giving you easy-to-implement equations. Make sure you get the signs correct!

1a) Great, I have the barycentric coordinates; how do I interpolate with them?
The same Wikipedia article has the correct math again. To compute a color at any given point with barycentric coordinates a, b, and c:
Color = a*Vertex1_color + b*Vertex2_color + c*Vertex3_color
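As a concrete sketch (in Python for readability, not the required implementation language), barycentric coordinates can be computed from signed areas via 2D cross products, then plugged straight into the interpolation formula:

```python
def barycentric(p, v1, v2, v3):
    """Barycentric coordinates (a, b, c) of point p with respect to
    triangle (v1, v2, v3).  Each argument is an (x, y) pair."""
    def cross(o, u, w):
        # z-component of the 2D cross product (u - o) x (w - o)
        return (u[0] - o[0]) * (w[1] - o[1]) - (u[1] - o[1]) * (w[0] - o[0])
    area = cross(v1, v2, v3)          # twice the signed triangle area
    a = cross(p, v2, v3) / area       # weight of v1
    b = cross(p, v3, v1) / area       # weight of v2
    c = cross(p, v1, v2) / area       # weight of v3 (a + b + c == 1)
    return a, b, c

def interpolate_color(p, v1, v2, v3, c1, c2, c3):
    """Interpolate per-vertex colors c1, c2, c3 at point p."""
    a, b, c = barycentric(p, v1, v2, v3)
    return tuple(a * x + b * y + c * z for x, y, z in zip(c1, c2, c3))
```

The same interpolation works for any per-vertex attribute (depth, normals), not just color.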

2) What is a reliable way to check if a point is inside a triangle?
Compute the 3 barycentric coordinates for the point. Let's call them a, b, and c (note that c = 1 - a - b, so only two are independent). A point is strictly inside the triangle if: a > 0 and b > 0 and a + b < 1. For more details see this MathWorld article.
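As a minimal sketch of that test (taking the two independent barycentric coordinates; the third is 1 - a - b):

```python
def point_in_triangle(a, b):
    """Inside test from two barycentric coordinates a and b;
    the third weight is c = 1 - a - b.  Strictly inside means
    all three weights are positive."""
    return a > 0 and b > 0 and (a + b) < 1
```
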

3) Why are there small cracks in my model?
Many methods of detecting whether a point is in a triangle are unreliable, especially along the edges of triangles. If you see cracks, use a different method for determining if points are in triangles.

4) Why is my rasterizer so slow??
You are probably looping through every pixel and checking whether it's inside every triangle! A much better way is to loop through each triangle or line and compute exactly which pixels that primitive covers. This is the point of the DDA rasterization and triangle-sweep algorithms. When you implement these you should see a huge speedup!
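DDA and triangle sweep are the recommended fixes. As a simpler illustration of the same principle (visit only candidate pixels, not the whole frame buffer), here is a hedged Python sketch that restricts the inside test to each triangle's bounding box:

```python
import math

def rasterize_bbox(tri, set_pixel):
    """Visit only pixels inside a triangle's screen-space bounding box.
    `tri` is three (x, y) vertices; `set_pixel(x, y)` is called for each
    covered pixel.  Uses an edge-function inside test; a real
    sweep/DDA rasterizer would be faster still."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    xmin, xmax = math.floor(min(x1, x2, x3)), math.ceil(max(x1, x2, x3))
    ymin, ymax = math.floor(min(y1, y2, y3)), math.ceil(max(y1, y2, y3))

    def edge(ax, ay, bx, by, px, py):
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    area = edge(x1, y1, x2, y2, x3, y3)
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            px, py = x + 0.5, y + 0.5     # sample at the pixel center
            w1 = edge(x2, y2, x3, y3, px, py)
            w2 = edge(x3, y3, x1, y1, px, py)
            w3 = edge(x1, y1, x2, y2, px, py)
            # all three edge functions share the sign of the area
            # when the sample point is inside the triangle
            if area != 0 and all((w * area) >= 0 for w in (w1, w2, w3)):
                set_pixel(x, y)
```
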

5) What’s up with my OpenGL previewer? Why are some triangles the wrong colors or completely black?
OpenGL is a big beast, and there are lots of things that could be going wrong. Two things to check are:
glEnable(GL_COLOR_MATERIAL) <- Call this in your setup function to ask OpenGL to use the colors you specify for triangles as coefficients for the lighting equations.
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) <- Call this to display filled triangles regardless of the order of the inputs.

Posted in FAQ | Leave a comment

Rasterizing Lines & Triangles

Due: Nov 18
Part I – Scan Conversion (2D)
In this assignment you will build an application which is able to rasterize both triangles and line segments. You may use any algorithm to rasterize the primitives but you must do the drawing yourself.

Your program should receive as input a file containing a list of lines and triangles to be drawn in the given format (described at the end of this document). Each vertex of a triangle/line is allowed to have a different color specified, so you must also implement color interpolation across a line/triangle. The rendered output should be displayed either in a window or as a bitmap, as in the previous assignment.

You may assume that vertices lie within the viewport (no clipping is required). However, you will have to be careful to set the boundaries of your viewport so that this invariant is satisfied. Vertices are specified using 2D coordinates, meaning that you don’t have to implement any projection transformations.

Using OpenGL, write a program which draws the same output (lines and triangles with per-vertex color). Compare the OpenGL output vs. the output from your rasterizer.

Part II – Shading (3D)
Once you have 2D scan conversion working, extend your implementation to handle 3D triangles and perform Phong shading.

You will need to implement view volume transformations, i.e. transform every vertex from the eye coordinate frame to the canonical view volume for rasterization. You can implement either orthographic (parallel) or perspective viewing.
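For the perspective option, a hedged sketch of the per-vertex transformation (the fov/near/far/aspect values are illustrative defaults, not values mandated by the assignment):

```python
import math

def project_vertex(x, y, z, fov_deg=60.0, near=0.1, far=100.0, aspect=1.0):
    """Map a vertex from eye space (camera at origin, looking down -z,
    so z < 0 in front of the camera) into the canonical view volume
    [-1, 1]^3 with a standard perspective transform."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    # map eye-space z in [-near, -far] to NDC depth [-1, 1]
    ndc_z = ((far + near) / (near - far) * z
             + 2 * far * near / (near - far)) / -z
    return ndc_x, ndc_y, ndc_z
```

After this mapping, x and y in [-1, 1] can be scaled to pixel coordinates, and the depth value feeds a z-buffer if you implement one.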

You will be given several large 3D models (on the website) to read in (in the specified format) and display the rendered output. Each model will be sized so that it fits within the bounds [-1,1] along all 3 axes. Thus, you will also have to perform a minimal scene transformation in order to place the eye at a location other than the origin so that the whole model can be seen.

There are several ways to extend your rasterizer for better results. Document what you have implemented, and be sure to include a sample showing your features off. If you implement any other extensions be sure to document them.
Mandatory Extensions (Optional for 575)
Grad students must implement these; they are optional for 575.
  Phong Shading – Interpolate the normal vector of each vertex across a triangle, using the current normal vector at each pixel to calculate diffuse, specular, and ambient components of the light contribution. You can assume a light source which is overhead (the vector to the light source is always (0, 1, 0)).
  Z-Buffering – Maintain a depth buffer (analogous to a frame buffer except that it stores depth values instead of colors) to keep track of the depth at each pixel. Interpolate the Z coordinate (in eye space) of each vertex across each line/triangle and use the current state of the depth buffer to determine the visibility of a triangle at each pixel.
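A per-pixel sketch combining both extensions (the ka/kd/ks coefficients and the smaller-depth-is-closer convention are illustrative assumptions; the overhead light (0, 1, 0) is from the assignment):

```python
import math

def shade_pixel(depth_buffer, frame_buffer, x, y, z, normal, base_color,
                ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """One pixel of Phong shading behind a z-buffer test.  `normal` is
    the interpolated (possibly unnormalized) per-pixel normal; the
    viewer is assumed to look down -z.  The depth buffer should be
    initialized to +inf; smaller z means closer here."""
    if z >= depth_buffer[y][x]:
        return  # something nearer already covers this pixel
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length

    light = (0.0, 1.0, 0.0)               # direction to the light
    n_dot_l = nx * light[0] + ny * light[1] + nz * light[2]
    intensity = ka
    if n_dot_l > 0:
        intensity += kd * n_dot_l
        # reflect the light vector about the normal: r = 2(n.l)n - l
        rx = 2 * n_dot_l * nx - light[0]
        ry = 2 * n_dot_l * ny - light[1]
        rz = 2 * n_dot_l * nz - light[2]
        view = (0.0, 0.0, 1.0)            # direction toward the eye
        r_dot_v = max(0.0, rx * view[0] + ry * view[1] + rz * view[2])
        intensity += ks * (r_dot_v ** shininess)

    depth_buffer[y][x] = z
    frame_buffer[y][x] = tuple(min(1.0, intensity * c) for c in base_color)
```
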
Extra Credit
Optional for both classes.
  Clipping – Clip primitives against the view volume. This is needed for a robust implementation to make sure you don’t draw any pixels outside of the frame buffer.
  Efficient Rasterization – Use improvements such as a span-based algorithm to efficiently rasterize the triangles. Be sure to report the speed-up you see.

What to Turn In
Create a webpage with a short write-up and renderings of the sample scenes. Include:
-2D Rasterization Code & Executable
-3D Rasterization Code & Executable
-OpenGL previewer
-Renderings of some of the 2D and 3D sample scenes
Be sure to briefly explain your implementation, document any extensions you’ve made, and provide examples to showcase them. You may be asked to demo your program for the course TA by rendering one or more of the sample scenes.

File Format
There are 2 file formats provided. The first format is for the simple 2D inputs for Part I, to test your basic scan conversion. The second format is extended to 3D triangles/scenes. (Note: some files use the Linux line-ending convention and may not display correctly in Notepad.)

2D Scenes
The input files will be structured in the following manner, where // indicates a comment. Each triangle is defined by a list of 3 vertices, where each vertex has its own line and consists of 6 floating point values specifying its position and color. Each line segment is defined by 2 endpoint vertices.

//start of file
numTriangles //integral number of triangles in the file (always on first line of file)
//begin a triangle
x1 y1 r1 g1 b1 a1 //vertex at (x1,y1) with color (r1,g1,b1,a1)
x2 y2 r2 g2 b2 a2 //vertex at (x2,y2) with color (r2,g2,b2,a2)
x3 y3 r3 g3 b3 a3 //vertex at (x3,y3) with color (r3,g3,b3,a3)
//end a triangle
//total of numTriangles triangles
numLines //integral number of lines in the file
//begin a line
x1 y1 r1 g1 b1 a1
x2 y2 r2 g2 b2 a2
//end a line
//total of numLines lines
//end of file
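The format above can be read with a simple tokenizer. A minimal Python sketch (error handling omitted; the helper names are my own, not part of the assignment):

```python
def load_scene_2d(path):
    """Parse the 2D scene format: a triangle count, that many
    triangles (3 vertices each), a line count, then that many lines
    (2 vertices each).  Each vertex is 6 floats: x y r g b a.
    Comments (//) and blank lines are skipped."""
    tokens = []
    with open(path) as f:
        for raw in f:
            line = raw.split("//")[0].strip()   # strip comments
            if line:
                tokens.extend(line.split())

    pos = 0
    def read_vertex():
        nonlocal pos
        vals = [float(t) for t in tokens[pos:pos + 6]]
        pos += 6
        return (vals[0], vals[1]), tuple(vals[2:6])  # (x, y), (r, g, b, a)

    num_triangles = int(tokens[pos]); pos += 1
    triangles = [[read_vertex() for _ in range(3)] for _ in range(num_triangles)]
    num_lines = int(tokens[pos]); pos += 1
    lines = [[read_vertex() for _ in range(2)] for _ in range(num_lines)]
    return triangles, lines
```

The 3D format parses the same way with 10 floats per vertex instead of 6.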

3D Scenes
The input files will be structured in the following manner, where // indicates a comment. Each triangle is defined by a list of 3 vertices, where each vertex has its own line in the file and consists of 10 floating point values specifying its position, color, and normal vector. Each line segment is defined by 2 endpoint vertices.

//start of file
numTriangles //integral number of triangles in the file (always on first line of file)
//begin a triangle
//vertex at (x1,y1,z1), color (r1,g1,b1,a1) and normal vector (nx1, ny1, nz1)
x1 y1 z1 r1 g1 b1 a1 nx1 ny1 nz1
//vertex at (x2,y2,z2), color (r2,g2,b2,a2) and normal vector (nx2, ny2, nz2)
x2 y2 z2 r2 g2 b2 a2 nx2 ny2 nz2
//vertex at (x3,y3,z3), color (r3,g3,b3,a3) and normal vector (nx3, ny3, nz3)
x3 y3 z3 r3 g3 b3 a3 nx3 ny3 nz3
//end a triangle
//total of numTriangles triangles
numLines //integral number of lines in the file
//begin a line
//vertex at (x1,y1,z1), color (r1,g1,b1,a1) and normal vector (nx1, ny1, nz1)
x1 y1 z1 r1 g1 b1 a1 nx1 ny1 nz1
//vertex at (x2,y2,z2), color (r2,g2,b2,a2) and normal vector (nx2, ny2, nz2)
x2 y2 z2 r2 g2 b2 a2 nx2 ny2 nz2
//end a line
//total of numLines lines
//end of file

Example files
Both links provide example files along with the correct renderings.
2D Files
3D Files
Z-Fighting Example (Check back in a few hours)

Posted in Assignments | Leave a comment

Example Scenes

Here are some example scenes for use in your raytracer in the .scn format.

The first is a set of scenes which use only spheres: https://www.cs.unc.edu/Courses/comp575-f10/assignments/raytrace2/SphereExamples.zip
These should, for the most part, match the examples in the XML format.

Next, there is a set which also use triangles: https://www.cs.unc.edu/Courses/comp575-f10/assignments/raytrace2/Triangles.zip

Lastly, there is a set of more complex tests. This includes a dragon! https://www.cs.unc.edu/Courses/comp575-f10/assignments/raytrace2/Complex.zip
Warning: These will take a LONG time to render unless you get some sort of acceleration structure working!

Posted in Uncategorized | 2 Comments

Ray Tracing – Part 2

Due: Oct 18
File Format
You can use either the format provided last time or the alternate format (described here). You may have to modify the format, especially if you use the original XML one.

This assignment is a basic ray tracer which (ideally) builds off of your Ray Tracing – Part 1 assignment. Getting these basics working is worth 85/100 points:

  • Arbitrary camera placement, film resolution, and aspect ratio
  • Arbitrary scenes with spheres, triangles (possibly with vertex normals), and arbitrary background colors
  • Arbitrary materials, including diffuse and specular shading, reflections, and refractions
  • Point and directional lights
  • Ambient lighting
  • Shadows
  • Recursion to a bounded depth

To get the full 100 points you need to add additional features from the list below (ask me if you have something in mind that’s not on the list). It’s okay to extend the scene format any way you want; just make sure you include a sample scene that shows off your new, cool features!

The number in front is how many points a feature is worth. There will be partial credit for features that “sort of” work.

Scene specifications / Primitives

  • (5) Cones and Cylinders
  • (5) Boxes and Planes
  • (5) Constructive Solid Geometry (union, difference, and intersection of primitives)
  • (10) Transformations on primitives (support 4×4 transformations or procedural ones!)
  • (20) Procedurally generated terrain/heightfields

Complex Lighting

  • (5) Area lights that produce soft shadows
  • (10) Ambient Occlusion
  • (20) Image-based lighting

Texturing

  • (5) Texture mapping
  • (5) Bump mapping
  • (5) Procedural texturing or bump mapping (checkerboard, wood, marble, mandelbrot set, etc..)

Speed & Interface

  • (5) User interface that shows the raytraced image being updated
  • (10) An acceleration structure: BVH, OctTree, etc. (measure the performance impact on different scenes!)
  • (15) Parallelize the raytracer (and analyze the performance gains as you add more processors!)
  • (100) GPU Implementation using CUDA

You should create a webpage with:

  • A ZIP file of all your source code
  • At least two sample renderings from your raytracer
  • A writeup of what features you implemented, and any interesting details of your implementation
  • A submission for the art contest (optional)

-You should be able to leverage your previous raytracer. Any extra credit work you did there should roll easily into this assignment. Try to be strategic about which extra features you choose to implement.
-A BVH is an easy way to render large, impressive scenes in a reasonable amount of time
-If you’re running out of time, focus on the easier options such as boxes, planes, CSG, and supersampling. That way you get some easy things completed well rather than rushing through something you don’t have time to finish.

Posted in Uncategorized | Leave a comment

Assignment 2 Online

Programming assignment 2, Ray Tracing 1, is now online. You can find it here.

Posted in Uncategorized | Leave a comment

General Assignments FAQ

How do I make a webpage with my UNC space?
These instructions will work for those with an Onyen. The webpage will be at www.unc.edu/~onyen/, where onyen is your Onyen.
1. Download and install SSH/SFTP Secure Shell from shareware.unc.edu
2. Run the Secure File Transfer Client
3. Log into isis: File->Quick Connect, Host Name: isis.unc.edu
4. You will see a folder called “public_html”. Inside there is index.html. Edit this or create a new HTML file. If you know nothing about HTML, there is an excellent, very brief tutorial here.
5. Upload images or .zip files to this same directory. The links will all be relative to your homepage: e.g. www.unc.edu/~onyen/assignment1.zip

Where can I get Visual Studio free?
If you are a CS graduate student ask the department help (help@cs.unc.edu).
If you are a student at UNC (with a valid UNC email address), you should be able to download Visual Studio for free from Microsoft’s DreamSpark: https://www.dreamspark.com/Products/Product.aspx?ProductId=9
You’ll need an MSN/Hotmail account to access this.

Posted in Assignments, FAQ | Leave a comment

2D Graphics FAQ

How do I compute the luminance of an RGB pixel?
For projects in this class, you can safely use:
 Luminance = Y = .30*R + .59*G + .11*B

There are a couple different formulations from varying standards such as:
Y = 0.299*R + 0.587*G + 0.114*B
Y = 0.2126*R + 0.7152*G + 0.0722*B
Y = 0.212*R + 0.701*G + 0.087*B
The “right” choice depends on several factors, such as the gamma of the display source.
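The class-suggested weights above can be dropped straight into a helper:

```python
def luminance(r, g, b):
    """Luma approximation suggested for this class (Rec. 601-style
    weights).  Assumes r, g, b share a consistent scale, e.g. [0, 1]."""
    return 0.30 * r + 0.59 * g + 0.11 * b
```
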

Why does the blue channel contribute so little to luminance?
The human eye has 3 types of color-detecting cone cells, S, M, and L, which respond primarily to roughly blue, green, and red stimulus.
It turns out that blue-sensitive cones contribute almost nothing to the perception of luminance (see Eisner and MacLeod, ’79). Blue light does affect the perception of luminance somewhat because it stimulates the green cones to a small extent.

What should a 0 contrast image look like?
A single-tone, purely grey image at the average luminance of all the pixels.

What should a 0 saturation image look like?
A gray scale version of the image.

What’s the right way to handle edges with Convolution?
Experiment and choose what works best for your images and filters. Assuming black outside the edges of an image leads to the most obvious errors; other methods, such as reflecting the image across its edges, tend to cause far fewer visible artifacts.
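The reflection policy amounts to remapping out-of-range indices. A small sketch (handles offsets up to one image width, which is plenty for typical kernel radii):

```python
def reflect_index(i, n):
    """Map an out-of-range index into [0, n) by reflecting it across
    the image border: -1 -> 0, -2 -> 1, n -> n-1, n+1 -> n-2."""
    if i < 0:
        i = -i - 1
    if i >= n:
        i = 2 * n - 1 - i
    return i
```

A convolution loop would call this on both the row and column index before sampling the image.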

What’s the right way to handle edges with Floyd-Steinberg Dithering?
Renormalize the weights based on how many pixels the error is being distributed over. The key is to avoid adding or subtracting energy from the image.
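A sketch of that renormalization for one pixel's error (grayscale, row-major image assumed; the function name is my own):

```python
def diffuse_error(image, x, y, error):
    """Distribute quantization error from pixel (x, y) to its
    Floyd-Steinberg neighbors, renormalizing the weights when some
    neighbors fall outside the image so no energy is lost.
    `image` is a list of rows of grayscale floats."""
    h, w = len(image), len(image[0])
    # (dx, dy, weight) in the standard Floyd-Steinberg pattern
    neighbors = [(1, 0, 7 / 16), (-1, 1, 3 / 16), (0, 1, 5 / 16), (1, 1, 1 / 16)]
    in_bounds = [(dx, dy, wt) for dx, dy, wt in neighbors
                 if 0 <= x + dx < w and 0 <= y + dy < h]
    total = sum(wt for _, _, wt in in_bounds)
    if total == 0:
        return  # bottom-right corner: nowhere to push the error
    for dx, dy, wt in in_bounds:
        image[y + dy][x + dx] += error * (wt / total)
```
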

How wide should the radius be on the gaussian sampling for the reconstruction?
You’ll have to experiment to get non-aliased, non-blurry images. A good starting point is a standard deviation a little under 1 pixel and a radius of 2-3 pixels.
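In one dimension, the sample weights for a given reconstruction point might be computed like this (the defaults follow the starting point suggested above; tune both by eye):

```python
import math

def gaussian_weights(center, radius=2.5, sigma=0.9):
    """Normalized Gaussian weights for reconstructing a value at
    `center` from the integer pixel positions within `radius`.
    Returns a list of (pixel_index, weight) pairs summing to 1."""
    lo = math.ceil(center - radius)
    hi = math.floor(center + radius)
    raw = [(i, math.exp(-((i - center) ** 2) / (2 * sigma ** 2)))
           for i in range(lo, hi + 1)]
    total = sum(w for _, w in raw)
    return [(i, w / total) for i, w in raw]
```

For 2D reconstruction, apply the same weighting along each axis (the Gaussian is separable).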

Posted in FAQ, Uncategorized | Leave a comment

Assignment 1 Online

Assignment 1 is now online here

It is due Thur. Sep 16 at 11:59PM.

Posted in Assignments | Leave a comment

Lectures Posted

A list of lectures is now online here.

PDF versions of the lectures are linked from the page. I will shortly update the page with a tentative version of future classes, along with suggested reading from the book.

Posted in Assignments | Leave a comment

Assignment 0 Online

Assignment 0 is now online here

It is due Fri. Sep 3 at 11:59PM.

Posted in Assignments | Leave a comment