CS 348C - Topics in Computer Graphics
Sensing for graphics
Project suggestions
Image capture
- Create and render unusual volume datasets from digitized photographs (or
video) and a buzz saw. How about slicing and photographing a tree trunk? For
inspiration, look at the Visible Human project and renderings made from this
data.
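A minimal sketch of the volume assembly step, assuming one registered photograph
per saw cut and equal slice spacing; the filenames and the crude
maximum-intensity render below are placeholders, not a prescription:

    import glob
    import numpy as np
    from PIL import Image

    # Hypothetical filenames: one registered photograph per saw cut, in order.
    paths = sorted(glob.glob("slices/slice_*.png"))
    volume = np.stack([np.asarray(Image.open(p).convert("L"), dtype=np.float32)
                       for p in paths])          # shape: (slices, rows, cols)

    # Crude render to sanity-check the dataset: maximum-intensity projection
    # along the slicing axis. A real renderer would resample the (likely
    # anisotropic) voxels.
    mip = volume.max(axis=0)
    Image.fromarray(mip.astype(np.uint8)).save("mip.png")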
- Morph between two light fields. First, acquire two light fields A and B, each
of a different object and each consisting of many views. Then specify tie
points, lines, and planes in selected views (the same views in A and B).
Finally, interpolate those tie elements for the other views in each of A and B,
as well as for frames in a morph sequence between A and B. This would allow you
to change your point of view during a morph, just as in 3D morphing (see our
paper in Siggraph '95), but without the necessity for a 3D model! (A sketch of
the interpolation step follows the extension below.)
Extension:
Morph between two time sequences of light fields, i.e. between two light field
movies.
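A minimal sketch of the interpolation step, assuming a two-plane light field
whose views are indexed by camera-plane coordinates (u, v) and assuming tie
points have been hand-marked in four selected views; all the arrays here are
made-up placeholders:

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    # Hypothetical input: 5 tie points marked in 4 selected views of light
    # field A, each view indexed by its camera-plane coordinates (u, v).
    known_uv = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    known_xy = np.random.rand(4, 5, 2)       # (selected views, tie points, xy)

    # Spread the tie points across the rest of the camera plane ...
    interp_a = LinearNDInterpolator(known_uv, known_xy.reshape(4, -1))
    ties_a = interp_a([[0.3, 0.7]]).reshape(5, 2)   # ties in view (0.3, 0.7)

    # ... and blend A's ties toward B's (handled the same way) for frame t.
    ties_b = np.random.rand(5, 2)            # placeholder for light field B
    t = 0.5
    ties_t = (1.0 - t) * ties_a + t * ties_b

The per-view image warp driven by these interpolated ties (e.g. Beier-Neely)
is the remaining, and larger, piece of the project.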
- Some researchers have reported that as few as 4 or 5 spectral basis functions
will capture the spectral content of most natural objects. Experimentally verify
(or disprove) this claim. Tell us soon if you are interested in this project,
because we'll have to buy a spectrophotometer. There are good inexpensive ones
on the market.
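One way to test the claim, sketched below: stack the measured spectra into a
matrix and see how much variance the leading singular vectors explain. The
data file and its layout are assumptions:

    import numpy as np

    # Hypothetical measurements: one reflectance spectrum per row, sampled
    # at regular wavelength intervals across the visible range.
    spectra = np.loadtxt("spectra.txt")      # shape: (samples, wavelengths)

    centered = spectra - spectra.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)

    # If the claim holds, 4 or 5 basis functions should suffice.
    for k in (3, 4, 5, 6):
        print(f"{k} basis functions: {100 * explained[k - 1]:.2f}% of variance")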
Image capture combined with texture analysis / synthesis techniques
- Synthesize tilable displacement map textures (reptile scales, rock, fabric)
from range data using a texture analysis / synthesis technique like Heeger
(Siggraph '95).
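The core primitive of Heeger's method is histogram matching, iterated over the
subbands of a steerable pyramid; a minimal sort-based sketch of just that
primitive (the pyramid is omitted):

    import numpy as np

    def match_histogram(source, target):
        """Remap source so its value histogram matches target's.
        Assumes the two arrays have the same number of elements."""
        src = source.ravel()
        matched = np.empty_like(src)
        # Give the rank-r source pixel the rank-r target value.
        matched[np.argsort(src)] = np.sort(target.ravel())
        return matched.reshape(source.shape)

    sample = np.random.rand(128, 128)   # stand-in for a scanned height field
    synth = match_histogram(np.random.rand(128, 128), sample)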
- Enhance the Haeberli "paint by numbers" method (Siggraph '90) to include 3D
brush strokes, as in Fractal Design's Detailer program. Derive the brush
strokes from range data, either directly or via texture analysis / synthesis.
- Develop a way to tile synthesized textures across a surface of arbitrary
topology such as a dense triangle mesh. We have some ideas on how to do this.
Extension:
Incorporate this synthesis technique into a mesh simplification / progressive
mesh refinement system, perhaps building on Hoppe (Siggraph '96).
- Synthesize tilable volumetric representations of complex 3D objects from range
data or volumetric data (CT, MR, photographic) using a 3D extension of a
texture analysis / synthesis technique. Try synthesizing foliage from range
data, or wood grain from volumetric data, as in the first project. How about
clouds?
- Synthesize tilable 4D light fields of 3D objects from range or volumetric data
using a 4D texture analysis / synthesis technique! We have no idea if this
will work.
Extension:
Build a 3D / 4D modeling system. Use it to put 4D ivy on a 3D wall. Or a 4D
crowd scene in a 3D plaza. The 4D objects thus become "hypersprites". This
assumes that your light fields carry alpha and/or Z-depth, to support
compositing.
Image capture combined with blue-screen extraction and compositing
- Use our camera gantry to record an image sequence or light field of an object
against two backdrops. The motion control would allow precise repetition of
camera moves. The resulting registered images would allow two-color
blue-screening, yielding an alpha channel for each image.
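A minimal sketch of the two-backdrop solve, following the compositing equation
I = F + (1 - alpha) B with F premultiplied; array shapes and the epsilon are
assumptions:

    import numpy as np

    def two_backdrop_matte(i1, i2, b1, b2):
        """Recover alpha and premultiplied foreground F from shots of the
        same static object over two known backdrops (float RGB, (h, w, 3))."""
        # Subtracting the two compositing equations cancels F:
        #   i1 - i2 = (1 - alpha) * (b1 - b2)
        # Solve per pixel, least-squares over the three channels.
        num = ((i1 - i2) * (b1 - b2)).sum(axis=2)
        den = ((b1 - b2) ** 2).sum(axis=2) + 1e-8
        alpha = np.clip(1.0 - num / den, 0.0, 1.0)
        fg = i1 - (1.0 - alpha)[..., None] * b1
        return alpha, fg

Note the solve degenerates wherever the two backdrops agree, which is why they
should differ everywhere behind the object.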
- Use the responsive workbench and a video camera to do two-color blue-screening
of a live subject by toggling the workbench at video rates between two backdrop
colors and digitizing the resulting video sequence. Would require some
frame-to-frame interpolation.
- Composite live video over a light field background, or a light field over a
live video background. The former requires extracting an alpha channel (matte)
from the video; the latter requires computing an alpha channel for the light
field. May require buying a real-time matte extraction box.
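Whichever direction you composite, the last step is the same "over" operator;
a minimal sketch, assuming premultiplied-alpha RGBA frames:

    import numpy as np

    def over(fg, bg):
        """Porter-Duff "over" for premultiplied-alpha images, shape (h, w, 4)."""
        return fg + (1.0 - fg[..., 3:4]) * bg

    video_frame = np.zeros((480, 640, 4))   # placeholder matted video frame
    lf_view = np.ones((480, 640, 4))        # placeholder light field rendering
    composite = over(video_frame, lf_view)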
Shape capture
- Invent a shape reconstruction algorithm that doesn't fail on corners or thin
surfaces, perhaps by retaining information about surface orientation. Start
with Curless's algorithm from Siggraph '96?
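For intuition, here is the weighted signed-distance accumulation at the heart
of Curless's method, collapsed to one dimension; the scans, truncation width,
and box weights are all simplifications:

    import numpy as np

    x = np.linspace(0.0, 1.0, 200)      # sample points along one line of sight

    def scan_sdf(surface, trunc=0.05):
        """Truncated signed distance for one range observation at `surface`."""
        d = np.clip(surface - x, -trunc, trunc)   # + in front, - behind
        w = (np.abs(d) < trunc).astype(float)     # trust only near the surface
        return d, w

    # Merge three noisy scans of a surface near x = 0.6.
    D = np.zeros_like(x); W = np.zeros_like(x)
    for s in (0.59, 0.60, 0.61):
        d, w = scan_sdf(s)
        D += w * d; W += w
    sdf = np.where(W > 0, D / np.maximum(W, 1e-8), np.nan)
    # The reconstructed surface is the zero crossing of sdf; the project is
    # to keep such a merge from smearing corners and thin sheets.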
- Implement a method for determining shape from image sequences taken under
structured light. Use a slide or overhead projector to generate structured
light patterns and Tomasi's factorization method to determine shape.
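A minimal sketch of the factorization step itself, assuming you have already
tracked P pattern features through F frames under an orthographic camera; the
track matrix below is a random placeholder:

    import numpy as np

    F, P = 10, 30
    W = np.random.rand(2 * F, P)     # stacked (u; v) tracks: 2F rows, P points

    # Register each row to its centroid so translation drops out.
    W_tilde = W - W.mean(axis=1, keepdims=True)

    # Under orthography W_tilde = M @ S with rank 3: M (2F x 3) stacks the
    # camera axes, S (3 x P) holds the 3D points.
    U, s, Vt = np.linalg.svd(W_tilde, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion, up to affine ambiguity
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape, up to the same ambiguity

    # The metric upgrade (a 3x3 Q making the rows of M Q orthonormal, applied
    # as M Q and Q^-1 S) resolves the ambiguity; omitted here.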
- Push some aspect of Debevec's human-in-the-loop, domain-specific approach to
image-based modeling of architectural environments (Siggraph '96).
- Extract texture and bump maps when doing mesh decimation.
- Animate an object by interpolating between scanned "3D key frames", essentially
claymation with automatic inbetweening. Requires some facility at sculpting.
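Once the scanned key frames are remeshed to share connectivity (a real
precondition, assumed away here), the inbetweening itself can start as simple
per-vertex blending:

    import numpy as np

    def inbetween(key_a, key_b, t):
        """Blend two (n, 3) vertex arrays that share connectivity."""
        return (1.0 - t) * key_a + t * key_b

    key_a = np.random.rand(1000, 3)     # placeholder scanned key frames
    key_b = np.random.rand(1000, 3)
    frames = [inbetween(key_a, key_b, t) for t in np.linspace(0.0, 1.0, 30)]

Splining through three or more key frames, and interpolating in a way that
preserves volume or limb lengths, are natural refinements.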
Measuring reflectance
- Determine the diffuse color, specular reflectance, and roughness of a known 3D
shape from digitized images under controlled illumination. Requires
compensating for surface orientation, shadows, and interreflections. Start with
the work of Ikeuchi or with the approaches we've tried here at Stanford.
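As a first step, the diffuse term alone has a closed-form least-squares fit; a
sketch, assuming per-pixel normals from the known shape, one distant point
light, and a boolean mask that already excludes shadowed and specular pixels:

    import numpy as np

    def fit_albedo(intensity, normals, light_dir, mask):
        """Least-squares Lambertian albedo rho in I = rho * max(n . l, 0).
        intensity: (h, w); normals: (h, w, 3) unit; light_dir: (3,) unit."""
        ndotl = np.clip(normals @ light_dir, 0.0, None)
        valid = mask & (ndotl > 0.1)    # also skip grazing orientations
        return (intensity[valid] * ndotl[valid]).sum() / (ndotl[valid] ** 2).sum()

    # Fitting specular reflectance and roughness to the residual
    # I - rho * ndotl (e.g. with a Torrance-Sparrow lobe) is the harder part.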
- Compute a BRDF, i.e. reflectance map, from a light field by performing an
inverse radiosity calculation (for diffuse surfaces) or an inverse radiance
calculation (for general surfaces).
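For the diffuse case the idea reduces to running the radiosity equation
backwards: with per-patch radiosities B measured from the light field,
emissions E, and form factors F, solve B = E + diag(rho) F B pointwise for rho.
Everything below is a synthetic placeholder:

    import numpy as np

    n = 50
    F = np.random.rand(n, n)
    F /= F.sum(axis=1, keepdims=True)       # placeholder form factors
    E = np.zeros(n); E[0] = 1.0             # one emitting patch
    rho_true = np.random.uniform(0.2, 0.8, n)

    # Forward-simulate a "measured" B, then invert it.
    B = np.linalg.solve(np.eye(n) - rho_true[:, None] * F, E)
    incident = F @ B                        # radiosity arriving at each patch
    rho = (B - E) / np.maximum(incident, 1e-8)

With real data the measured B is noisy and the geometry (hence F) only
approximate, so a regularized least-squares solve replaces the pointwise
division.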
- Do you have any expertise in optics? Help us design a hand-held BRDF meter by
mastering our ray-tracing-based lens simulation program and experimenting with
different designs.
Motion capture
- Implement motion capture using a video camera and fiducial marks placed on a
person or object. Use LEDs or colored fluorescent dots shot under black light.
Extension:
Use to track the movement of a hand-held physical tool for the responsive
workbench or a hand-under-mirror VR system.
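Per-frame marker detection can start as thresholding plus connected components;
a sketch, assuming bright markers against a dark scene (the threshold is a
guess you would tune):

    import numpy as np
    from scipy import ndimage

    def marker_centroids(frame, thresh=200):
        """Centroids of bright fiducials in a grayscale uint8 frame."""
        labels, n = ndimage.label(frame > thresh)
        return ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))

    # Track by matching each frame's centroids to the nearest centroids in
    # the previous frame; colored dots let you disambiguate markers by hue.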
[email protected]
© 1997 Marc Levoy and Brian Curless