I’ve posted a corrected version* of our InfoVis paper from last year: An Empirical Model of Slope Ratio Comparisons. In preparing the published version of the paper, we made a change in the parameterization of our space of slope comparisons to simplify the explanation of what we did. In doing this, I made a simple math error that resulted in us using the wrong mid-angles in our analysis. To see the difference, compare Figure 2 in the original and in the updated versions. The impact of the error is minor and doesn’t change our arguments or conclusions, but it required regenerating our plots and it slightly changed our model parameter estimates.
I’ve also posted R code which will reproduce our (corrected) analysis and figures. Along with the stimuli we released earlier, this should allow anyone to reproduce our analysis.
(*The irony of having to correct our paper which itself attempts to correct Cleveland’s earlier paper was not lost on me.)
Here’s a preprint of our paper on aspect ratio selection, which will appear in InfoVis 2011. In it we propose a new criterion for banking data plots, building on previous ideas from Bill Cleveland and from Jeff Heer and Maneesh Agrawala.
We frame the aspect ratio selection problem as one of minimizing the length of the data curve while keeping the area of the plot constant. This leads to a method that is substantially more robust than previous approaches. We’re also able to demonstrate empirically that the resulting aspect ratios are a compromise between those suggested by previous methods. As shown below, the arc length method can effectively bank both standard line charts (in this case a loess regression line) as well as contour charts.
Perhaps the most surprising result is that good aspect ratios can be selected without explicit reference to the slopes or orientations of the line segments within the plot.
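To give a flavor of the idea, here is a minimal sketch (an illustrative toy, not our actual implementation): normalize the data to the unit range, then search for the aspect ratio α (height/width) that minimizes the arc length of the data curve drawn in a plot of unit area, i.e. one of width 1/√α and height √α. Note that no segment slopes or orientations are computed explicitly.

```python
import numpy as np

def arclength_aspect_ratio(x, y, alphas=np.logspace(-2, 2, 2001)):
    """Pick the aspect ratio (height/width) that minimizes the arc
    length of the data curve, holding the plot area constant at 1.

    Toy sketch: the data are scaled to the unit range, and each
    candidate alpha is evaluated by brute-force grid search.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = np.diff(x) / (x.max() - x.min())  # normalized segment runs
    dy = np.diff(y) / (y.max() - y.min())  # normalized segment rises
    # In a unit-area plot of aspect ratio a, runs are scaled by
    # 1/sqrt(a) and rises by sqrt(a); sum the segment lengths.
    lengths = [np.sum(np.sqrt(dx**2 / a + dy**2 * a)) for a in alphas]
    return alphas[int(np.argmin(lengths))]
```

As a sanity check, a straight line (whatever its raw slope, since the data are normalized) is banked to a square plot, α ≈ 1, which puts the line at 45°.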
I’ve finally had time to pull the labeling algorithm out of my much larger visualization package. It’s now up on github: https://github.com/jtalbot/Labeling. This implements all parts of the labeling paper, including the formatting variations.
Let me know if you run into any problems with it or have any suggestions for improvement.
Version 0.1 of the labeling package has been released on CRAN.
The R version of our labeling code is now hosted at R-forge. You can get it here or install it from within R using `install.packages("labeling", repos = "http://R-Forge.R-project.org")`, the standard R-forge install command.
A few small bugs in the implementation of our algorithm have been fixed thanks to feedback from Ahmet Karahan who is working on a Java version. I have also added a number of other labeling algorithms that have been proposed or used in the past, including those by Sparks, Thayer, and Nelder (from about 40 years ago), and adaptations of the matplotlib, gnuplot, and R’s pretty labeling functions.
As a side project this summer, I implemented a simple visual interface for HOP, an extended version of Hadoop, which its creators used in their demo at this summer’s SIGMOD.
[Screenshot: HOP visual interface]
The graphical elements were produced using Protovis since I needed an excuse to play around with it. We ran into minor performance problems using Protovis for so many plots in a single page. In a production system it would be wiser to generate and cache the plots on the server side.
Update: The screenshot shows a task scheduling imbalance bug that we found in HOP using the visual interface.
Here’s a preprint of our paper on selecting tick labels for axes which will appear in this year’s InfoVis! Source code of the implementation will be made available before the conference. We’re hoping to get this implemented in a number of common plotting libraries. I already have a partial matplotlib version working. I would also like to have one for ggplot. Other suggestions are welcome.
The non-data components of a visualization, such as axes and legends, can often be just as important as the data itself. They provide contextual information essential to interpreting the data. In this paper, we describe an automated system for choosing positions and labels for axis tick marks. Our system extends Wilkinson’s optimization-based labeling approach to create a more robust, full-featured axis labeler. We define an expanded space of axis labelings by automatically generating additional nice numbers as needed and by permitting the extreme labels to occur inside the data range. These changes provide flexibility in problematic cases, without degrading quality elsewhere. We also propose an additional optimization criterion, legibility, which allows us to simultaneously optimize over label formatting, font size, and orientation. To solve this revised optimization problem, we describe the optimization function and an efficient search algorithm. Finally, we compare our method to previous work using both quantitative and qualitative metrics. This paper is a good example of how ideas from automated graphic design can be applied to information visualization.
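To give a flavor of the optimization-based approach, here is a greatly simplified sketch, not the algorithm from the paper: the step-size ordering, scoring functions, and weights below are illustrative assumptions. Candidate labelings built from “nice” step sizes are scored on simplicity (how nice the step is), coverage (how tightly the labels span the data), and density (how close the tick count is to a target), and the best-scoring labeling wins.

```python
import numpy as np

# Step sizes in decreasing order of "niceness" (illustrative ordering).
NICE_STEPS = [1, 5, 2, 2.5, 4, 3]

def simple_nice_labels(dmin, dmax, target=5):
    """Score candidate (nice step, start) labelings and return the best.

    Heavily simplified, Wilkinson-style sketch: a full labeler would
    also optimize legibility (format, font size, orientation) and
    permit the extreme labels to fall inside the data range.
    """
    best_ticks, best_score = None, -np.inf
    for i, q in enumerate(NICE_STEPS):
        simplicity = 1 - i / (len(NICE_STEPS) - 1)
        for power in range(-5, 6):
            step = q * 10.0 ** power
            start = np.floor(dmin / step) * step
            stop = np.ceil(dmax / step) * step
            ticks = np.arange(start, stop + step / 2, step)
            if not 2 <= len(ticks) <= 10:
                continue
            coverage = (dmax - dmin) / (ticks[-1] - ticks[0])
            density = 1 - abs(len(ticks) - target) / target
            score = 0.25 * simplicity + 0.2 * coverage + 0.55 * density
            if score > best_score:
                best_ticks, best_score = ticks, score
    return best_ticks
```

For the range 0–100 with a target of five ticks, this sketch trades the less-nice step 25 against a perfect tick count and picks 0, 25, 50, 75, 100.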
Update: We’ve released a preliminary R package implementing the three labeling algorithms we compared in the paper. Feedback is appreciated. The final version should be released by InfoVis (in October).
Will Wilkinson points to the Gallup-Healthways Well-Being Index which purports to measure overall health (“not only the absence of infirmity and disease, but also a state of physical, mental, and social well-being”) at the congressional district level for the United States. Will hypothesizes that Utah’s high score may be due to “a skoche of culture-driven upward inflation” (Mormons overstating their happiness).
Fortunately, the components of the Well-Being Index are reported as well. Two components, Life Evaluation and Emotional Health, measure self-reported happiness. If Will’s hypothesis were correct, we would expect these components to account for a disproportionate share of Utah’s overall index. In the scatterplots to the right, the three Utah congressional districts are highlighted in orange. Contrary to Will, Utah is above average only in the Work Quality component. On all the others, including Life Evaluation and Emotional Health, Utah is average or below average.
Wellness data in Excel, since I couldn’t figure out how to get it from the Gallup-Healthways site. The visualization was done in Tableau.
For Jeff’s class I created an interactive visualization of the American Time Use Survey. I got sick last week, so I didn’t have a lot of time to work on it. As a result, it turned out somewhat derivative of the Baby Name Voyager and other stacked area plots.
That said, I think it lets you find some rather interesting patterns in how people use their time. Most noticeable is the extra hour or so that people sleep in on the weekends.
Via Andrew Gelman I came across this long paper (updated version) on statistical visualization by Rafe Donahue. I haven’t read it through carefully yet, but I enjoyed the examples of visualizations from his children’s schoolwork.
He criticizes boxplots, which prompted a discussion in the comments on Andrew’s post. I read Tukey’s EDA recently and was surprised to see how much of Tukey’s work focused on visualization by hand. The boxplot was a sensible visualization when you had to compute and plot manually: using only five numbers, it portrayed much of what was important about the data. Now that plotting is cheap, however, it often makes more sense to just plot all the data.
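Those five numbers are Tukey’s five-number summary, the skeleton of the boxplot. A minimal sketch, using quartiles in place of Tukey’s hinges (which differ slightly for small samples):

```python
import numpy as np

def five_number_summary(data):
    """Minimum, lower quartile, median, upper quartile, maximum --
    the five numbers behind a standard boxplot."""
    return np.percentile(data, [0, 25, 50, 75, 100])
```

A full boxplot adds whiskers and flags points beyond 1.5 interquartile ranges from the quartiles as outliers, but the summary itself is just these five numbers.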
In general, summaries, visual or otherwise, which assume a single mode, or worse normality, should be treated with a great deal of caution.