Q2. Do I have to output the same exact image as sample_viewer?
Q3. What exactly is passed in with the vertices that are to be rasterized?
    typedef struct Vertex_st {
        float x, y, z, w;   /* geometry coordinates */
        float u, v;         /* texture coordinates */
    } Vertex;
Any field in (x, y, z, w, u, v) can hold a number in any range.
(x, y, z, w) is the homogeneous coordinate of the vertex. You should clip to the 2D region defined by (-1 < X < 1, -1 < Y < 1). This means that the left edge of the image buffer corresponds to (X = -1), the right edge to (X = 1), the bottom edge to (Y = -1), and the top edge to (Y = 1). It is your responsibility to map the vertices into screen space, which includes considering that the input vertex is given in homogeneous coordinates. If a triangle has a pixel outside the boundaries of the screen space, it is your responsibility not to draw that pixel.
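The mapping from a homogeneous vertex to screen space can be sketched as follows. This is not the assignment interface; the helper name and the top-left pixel-origin convention are illustrative assumptions:

```c
/* Hypothetical helper: map a clipped homogeneous vertex to pixel
 * coordinates.  Assumes x/w and y/w land in (-1, 1) after the divide,
 * and that pixel (0, 0) is the top-left corner of a winWidth x
 * winHeight buffer (so Y = +1 maps to the top row). */
static void vertex_to_screen(float x, float y, float w,
                             int winWidth, int winHeight,
                             float *sx, float *sy)
{
    float ndc_x = x / w;                       /* perspective divide */
    float ndc_y = y / w;
    *sx = (ndc_x + 1.0f) * 0.5f * winWidth;    /* X = -1 -> 0, X = +1 -> winWidth */
    *sy = (1.0f - ndc_y) * 0.5f * winHeight;   /* Y = +1 (top) -> 0 */
}
```

For example, the vertex (0, 0, 0, 1) maps to the center of a 640x480 buffer.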
(u, v) are the texture coordinates of a vertex. (0, 0) corresponds to the bottom left corner of the texture, and (1, 1) corresponds to the top right corner of the texture. As you are interpolating texture coordinates, the coordinate should be clamped to the range [0, 1).
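The clamp to [0, 1) might look like the sketch below. The function name and the particular upper bound are illustrative assumptions; the point is that the result must stay strictly below 1 so that a texel index computed as (int)(t * texSize) never runs off the end of the texture:

```c
/* Clamp an interpolated texture coordinate into [0, 1) so that
 * (int)(t * texSize) is always a valid texel index.  The constant
 * 0.999999f is one illustrative choice of "just below 1". */
static float clamp_texcoord(float t)
{
    if (t < 0.0f)  return 0.0f;
    if (t >= 1.0f) return 0.999999f;
    return t;
}
```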
Q4. How does texture coordinate interpolation work?
    [ x ]       [ u ]
    [ y ] = T * [ v ]
    [ w ]       [ 1 ]

where T is a 3x3 matrix; in particular, T is the result of compositing all transformations that texture coordinates undergo until they are mapped to the screen.
Rewriting the formula as
    [ u ]                [ x ]        [ x ]
    [ v ] = Inverse(T) * [ y ] = T' * [ y ]
    [ 1 ]                [ w ]        [ w ]

we see that, during the rendering of a particular triangle, there is some constant matrix T' relating raster coordinates and texture coordinates. What you are given per triangle is the u, v, x, y, and w values at the vertices. Using this information, it is possible to compute the elements of T' by solving 3 sets of 3 linear equations, each in 3 unknowns (the elements of a row of T').
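One row of T' can be recovered by solving the 3x3 system a*x_i + b*y_i + c*w_i = rhs_i over the three vertices, where rhs is the u values, the v values, or (1, 1, 1) depending on the row. A sketch using Cramer's rule follows; the function name is illustrative, and a real implementation would need a more careful degeneracy test than comparing the determinant to zero:

```c
/* Solve the 3x3 system  a*x[i] + b*y[i] + c*w[i] = rhs[i]  (i = 0..2)
 * for one row (a, b, c) of T', using Cramer's rule.
 * Returns 0 when the triangle is degenerate (zero determinant). */
static int solve_row(const float x[3], const float y[3], const float w[3],
                     const float rhs[3], float row[3])
{
    float det = x[0] * (y[1] * w[2] - y[2] * w[1])
              - y[0] * (x[1] * w[2] - x[2] * w[1])
              + w[0] * (x[1] * y[2] - x[2] * y[1]);
    if (det == 0.0f)
        return 0;
    /* Determinants with one column replaced by the right-hand side. */
    float da = rhs[0] * (y[1] * w[2] - y[2] * w[1])
             - y[0] * (rhs[1] * w[2] - rhs[2] * w[1])
             + w[0] * (rhs[1] * y[2] - rhs[2] * y[1]);
    float db = x[0] * (rhs[1] * w[2] - rhs[2] * w[1])
             - rhs[0] * (x[1] * w[2] - x[2] * w[1])
             + w[0] * (x[1] * rhs[2] - x[2] * rhs[1]);
    float dc = x[0] * (y[1] * rhs[2] - y[2] * rhs[1])
             - y[0] * (x[1] * rhs[2] - x[2] * rhs[1])
             + rhs[0] * (x[1] * y[2] - x[2] * y[1]);
    row[0] = da / det;
    row[1] = db / det;
    row[2] = dc / det;
    return 1;
}
```

Calling this three times (with rhs = the u values, the v values, and (1, 1, 1)) yields all nine elements of T'.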
Alternatively, without ever calculating T' per se, we can find u and v for every pixel within the triangle by linear interpolation. In particular, the above equation can be written as
    [ U ]   [ u/w ]        [ x/w ]        [ X ]
    [ V ] = [ v/w ] = T' * [ y/w ] = T' * [ Y ]
    [ Q ]   [ 1/w ]        [ w/w ]        [ 1 ]

by dividing both sides by w. This equation implies that the three quantities U, V, and Q (the homogeneous texture coordinates) are linearly related to X and Y (the pixel coordinates generated during scan conversion). Moreover, U, V, and Q are known at the vertices, since u, v, and w are given at the vertices. It follows that U, V, and Q can be incrementally calculated using the standard interpolation mechanism described in the course notes (lecture 8, page 29). And from these, u and v can be obtained using
    u = U/Q
    v = V/Q
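Put together, the interpolation approach might look like the sketch below. It assumes the scan converter has already produced barycentric weights (b0, b1, b2) for the pixel, with b0 + b1 + b2 = 1; the function name and that assumption are illustrative, not part of the assignment interface:

```c
/* Vertex struct from Q3, repeated here so the sketch is self-contained. */
typedef struct Vertex_st {
    float x, y, z, w;   /* geometry coordinates */
    float u, v;         /* texture coordinates */
} Vertex;

/* Perspective-correct (u, v) at one pixel, given barycentric weights
 * b0 + b1 + b2 = 1 for the pixel within the triangle (v0, v1, v2). */
static void pixel_uv(const Vertex *v0, const Vertex *v1, const Vertex *v2,
                     float b0, float b1, float b2,
                     float *u, float *v)
{
    /* U = u/w, V = v/w, Q = 1/w are linear in screen space, so they
     * may be interpolated directly with the screen-space weights. */
    float U = b0 * v0->u / v0->w + b1 * v1->u / v1->w + b2 * v2->u / v2->w;
    float V = b0 * v0->v / v0->w + b1 * v1->v / v1->w + b2 * v2->v / v2->w;
    float Q = b0 / v0->w + b1 / v1->w + b2 / v2->w;
    *u = U / Q;   /* undo the homogeneous divide per pixel */
    *v = V / Q;
}
```

In a real rasterizer you would interpolate U, V, and Q incrementally along edges and scanlines rather than recompute them from the weights at every pixel; the divisions by Q per pixel remain either way.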
For last year's CS 248, we wrote a detailed recipe for texture mapping that you might want to consult. Bear in mind though that the class had a very different structure last year, and a different approach was taken; so don't be surprised if you can't follow all the details.
Q5. How can I get my RasterizeXor code to run faster?
Your inner loop probably contains something like:

    buffer[x + y * winWidth] ^= currColor;

Instead, you should have:
    local_buffer[x + y * local_winWidth] ^= local_currColor;

where local_* are just local copies of the global counterparts. The reason you do this is that the compiler is required to reload globals at every iteration of the loop (because of possible aliasing), but is not required to reload locals.
Also, you should incrementalize the buffer address computation, since integer multiplies are so expensive. Thus your code should look like:

    scanline[x] ^= local_currColor;

where scanline points at the start of the current row (computed once per scanline by adding local_winWidth, rather than multiplying per pixel).
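The whole incrementalized loop can be sketched as below. The function name and the per-scanline span arrays xLeft/xRight are illustrative assumptions; passing the buffer, width, and color as parameters gives the compiler the same freedom as copying globals into locals:

```c
/* XOR one color into a set of horizontal spans, one per scanline.
 * buffer is winWidth pixels wide; xLeft[y]..xRight[y]-1 is the span
 * on scanline y.  One multiply total; one add per scanline after that. */
static void xor_spans(unsigned int *buffer, int winWidth,
                      unsigned int currColor,
                      const int *xLeft, const int *xRight,
                      int yTop, int yBottom)
{
    unsigned int *scanline = buffer + yTop * winWidth;  /* the only multiply */
    int x, y;

    for (y = yTop; y < yBottom; y++) {
        for (x = xLeft[y]; x < xRight[y]; x++)
            scanline[x] ^= currColor;
        scanline += winWidth;   /* advance to the next row: add, not multiply */
    }
}
```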
Q6. Do I have to clamp (u, v) on every pixel?
Yes. Clamping only at the vertices is the incorrect thing to do, for two reasons.
First, in "render.h", we ask that these values be clamped. Note that clamping at the vertices is not the same as clamping at the pixels. As far as the interface is concerned, this is required: we could give you (u, v) outside of [0, 1].
Second, there is a bigger problem with the assumption that just because (u, v) is in the range [0, 1] at the vertices, (u, v) will be in the range [0, 1] at the pixels. In a perfect mathematical world, this assumption is correct. However, in the world of floating-point roundoff error, it is not. Even though you are interpolating over the range [0, 1], you will get pixels with (u, v) outside of this range on the boundaries of the cube faces. No matter how much you try to minimize these roundoff errors, they will always be there (e.g., going to double precision lessens the problem, but does not eradicate it). We will be looking for this.