Archive for September 2011


September 7th, 2011 — 5:26am

This summer I taught CS 148, the introduction to computer graphics course at Stanford. This was the first time I taught the class and, although it was incredibly time-consuming, I really enjoyed it. I created all my own slides and new projects for the course.

The final project was to combine a path-traced object with a real photograph—creating a “special effects” style image. I thought this was quite successful. The winning image was created by Zhan Fam Quek:

This was created using a path tracer written by the students, with an HDR environment map and an HDR background image captured by the students.

More images from the final project can be found in the course gallery. Slides and assignments can be found on the course webpage.


LaTeX Hyphenation Mist-akes*

September 2nd, 2011 — 10:14pm

One of the great things about TeX is that it will automatically hyphenate words when doing so leads to better overall line breaks in a paragraph. This is a somewhat difficult task because, when hyphenating words, it is not acceptable to insert a hyphen between just any pair of letters. Some hyphenations, such as “new-spaper,” can lead the reader “down a garden path.” That is, when reading the end of the line (“new-”), the reader guesses incorrectly that “new” is a complete stem within a compound word and is then completely confused when confronted with the unlikely terminating word “spaper”. A similar problem occurs when the hyphenation causes the reader to pronounce the head portion incorrectly, making them read nonsensical words (I’ll give an example in a second.)

To address this problem, some automatic text layout systems rely on dictionaries in which acceptable hyphenation points have been marked by a human. While generally correct, such dictionary-based approaches require a very large data file to store the dictionary and fail when given new words. An alternate approach, taken in TeX, is to summarize hyphenation points into a small set of patterns which can then be applied to any word. The TeX method was described in Franklin Liang’s thesis Word Hy-phen-a-tion by Com-put-er. Liang’s thesis claims that his pattern-based approach finds about 90% of the human-marked hyphenation points and essentially no incorrect ones.
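The core of Liang’s method is easy to sketch. Below is a toy Python version (my own simplification—it omits TeX’s “.” word-boundary markers and many other details), using a pattern set along the lines of the “hyphenation” example from Liang’s thesis and The TeXbook. Each pattern interleaves letters with digits; wherever the pattern’s letters match the word, each digit competes (by maximum) for the inter-letter position it sits at, and an odd winner permits a hyphen:

```python
def hyphenate(word, patterns, lmin=2, rmin=3):
    r"""Mark allowed hyphen points in `word` using Liang-style patterns.

    Odd winning digits allow a break; even digits inhibit one.
    lmin/rmin mimic TeX's \lefthyphenmin/\righthyphenmin defaults.
    """
    values = [0] * (len(word) + 1)  # values[i]: strength of a break before word[i]
    for pat in patterns:
        letters, digits, d = [], [], 0
        for c in pat:
            if c.isdigit():
                d = int(c)
            else:
                digits.append(d)   # digit sitting just before this letter
                letters.append(c)
                d = 0
        digits.append(d)           # digit after the last letter, if any
        letters = ''.join(letters)
        # slide the pattern's letters over the word; max digit wins per position
        for start in range(len(word) - len(letters) + 1):
            if word[start:start + len(letters)] == letters:
                for i, dg in enumerate(digits):
                    values[start + i] = max(values[start + i], dg)
    out = []
    for i, ch in enumerate(word):
        if lmin <= i <= len(word) - rmin and values[i] % 2 == 1:
            out.append('-')
        out.append(ch)
    return ''.join(out)

# An illustrative pattern set (after Liang's "hyphenation" example):
PATTERNS = ["hy3ph", "he2n", "hena4", "hen5at", "1na", "n2at",
            "1tio", "2io", "o2n"]

result = hyphenate("hyphenation", PATTERNS)  # "hy-phen-ation"
```

Note how “hen5at” (odd, allowing the break after “phen”) beats the even digits from “he2n” and “n2at” that would otherwise inhibit it—the max-then-parity rule is what lets a small pattern file encode both rules and exceptions.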

Unfortunately, essentially no incorrect ones is not the same as no incorrect ones. In the last two papers I’ve written, I’ve come across words that TeX’s method failed rather dramatically on. Fortunately, LaTeX provides an easy way to override the automatic hyphen selection through the \hyphenation{} command.

\hyphenation{white-space} fixes TeX’s “whites-pace”, a somewhat racist rendering. Note that this incorrect hyphenation is the opposite of the “new-spaper” example, which came from Liang’s thesis, highlighting the problems facing a purely pattern-based approach.

\hyphenation{analy-sis} fixes TeX’s “anal-ysis” which leads to a rather infelicitous mispronunciation of the first part of the word.
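For reference, both fixes can sit together in the preamble (the surrounding document here is just a placeholder); the hyphens in the argument mark the only break points TeX is then allowed to use for those words:

```latex
\documentclass{article}
% Override TeX's pattern-based hyphenation for specific words.
\hyphenation{white-space analy-sis}
\begin{document}
% ... document body ...
\end{document}
```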

Despite doing a good job most of the time, TeX’s automatic hyphenation can and does go awry. Keep your eyes open when proofreading!

*I’m pretty sure TeX gets the hyphenation of “mistakes” correct.


SIGGRAPH course on Importance Sampling

September 1st, 2011 — 9:14pm

I just noticed that my master’s work on Resampled Importance Sampling (RIS) was included in a SIGGRAPH course last year.

In standard importance sampling you must draw samples from a distribution that you know how to sample from and whose normalization factor is known (so that the area under the pdf is 1). This is very limiting in some MC situations, such as sampling the incoming light direction in a path tracer. The function we want to sample from is the product of the BRDF and the incident light field. However, we can typically only sample from one component or the other, not both. In RIS, we create a sampled, discrete approximation to the desired distribution and then draw our samples from that. Since the approximation is discrete, we can easily normalize it and draw samples from it. In my thesis, I showed that, when weighted correctly, this results in an unbiased estimate.
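A toy sketch of the idea (not the renderer from the thesis): estimate the integral of f(x)·V(x) over [0, 1], where f is cheap to evaluate (a stand-in for BRDF × environment light) and V is expensive (a stand-in for visibility). Candidates are drawn uniformly, weighted by the cheap part only, and a single candidate is resampled in proportion to its weight for the one expensive evaluation; multiplying by the average candidate weight is what keeps the estimate unbiased:

```python
import random

def ris_estimate(f_cheap, g_expensive, m=8):
    """One RIS sample: m cheap candidate evaluations, one expensive one.

    Proposal q is uniform on [0, 1]; the unnormalized target p-hat is
    f_cheap.  Returns an unbiased estimate of the integral of
    f_cheap(x) * g_expensive(x) over [0, 1].
    """
    xs = [random.random() for _ in range(m)]
    ws = [f_cheap(x) for x in xs]        # w_i = p_hat(x_i) / q(x_i), with q = 1
    total = sum(ws)
    if total == 0.0:
        return 0.0
    # resample one candidate with probability proportional to its weight
    y = random.choices(xs, weights=ws)[0]
    # integrand(y) / p_hat(y) = g_expensive(y), times the weight average
    return g_expensive(y) * (total / m)

# Demo: f ~ "BRDF * light" = x^2; V ~ "visibility" = 1 for x > 0.5, else 0.
# True value: integral of x^2 from 0.5 to 1 = 7/24 ≈ 0.2917.
random.seed(0)
f = lambda x: x * x
v = lambda x: 1.0 if x > 0.5 else 0.0
n = 20000
est = sum(ris_estimate(f, v) for _ in range(n)) / n
```

Note that each estimate evaluates v only once, however many candidates are drawn—which is exactly why RIS pays off when the expensive factor (tracing a shadow ray) dominates the cheap one.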

RIS will beat IS when it is substantially cheaper to create the approximate distribution than it is to evaluate the true function. In the path tracing context, I did this by evaluating the BRDF and environment map lighting for the approximate distribution, but not the visibility (the ray tracing step). With this approach, the RIS image (on the right) has substantially lower variance than the IS image (left).

IS vs. RIS

As long as the visibility test accounts for a substantial fraction of the execution time, RIS will beat IS. However, that condition didn’t really hold when I wrote the thesis and is probably even less true now. In my thesis, I concluded that, for current rendering tasks, RIS wasn’t a big win since the difference in evaluation cost between the approximate samples and the actual samples wasn’t very high. However, I think it could be quite useful in some MC applications where a relatively good and cheap discrete approximation can be found.

