Archive for January 2009


R packages

January 21st, 2009 — 6:53pm

In class today we covered R packages. A quick attempt to create a package on Windows revealed that the Windows version of R does not come with the necessary build tools. I tried again on a Mac and ran into problems: package.skeleton failed to create the package directories because .find.package couldn't find my newly created package. After a little experimentation I found that package names cannot contain an '_' (at least on a Mac).

The R CMD check command is very nice. It expands the idea of static code checking to also cover documentation, the install process, example code, etc.
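As a minimal sketch, the package-creation step looks something like this (the function and package names here are hypothetical, and note the no-underscore restriction mentioned above):

```r
# Create a skeleton package from an object in the current workspace.
# "addone" and "mypkg" are made-up names for illustration.
addone <- function(x) x + 1
package.skeleton(name = "mypkg", list = c("addone"))

# package.skeleton writes mypkg/DESCRIPTION, mypkg/R/, mypkg/man/, etc.
# Then, from the shell, build and check it:
#   R CMD build mypkg
#   R CMD check mypkg
```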

Comment » | Uncategorized

Exploratory Model Analysis

January 21st, 2009 — 4:17am

I’ve recently come across a few papers on Exploratory Model Analysis. I wasn’t familiar with this work when writing the EnsembleMatrix paper, but the two are very closely related. I was working with an ML researcher while designing the EnsembleMatrix visual interface, and so did quite a bit of looking around in the ML literature. EMA is emerging in statistics, so it didn’t appear in my search.

Here are a few pointers:
Parallel coordinates for exploratory modelling analysis (Antony Unwin)
Exploratory modelling analysis: visualizing the value of variables (Antony Unwin)
Meifly: Models explored interactively (Hadley Wickham)

I tried installing meifly, but it appears to depend on ggplot which is no longer available since ggplot2 has been released.

[2004] Exploratory data analysis for complex models (with discussion) (Andrew Gelman)
Discussion of this paper by Andreas Buja (Andreas Buja)
Rejoinder to discussion (Andrew Gelman)

This discussion is quite inspiring. The idea that visualizations can be thought of as statistical tests was quite eye-opening. I think this suggests quite a few directions for research in InfoVis. However, there hasn’t been much work in this area in the 4 years since the paper came out. Why? Perhaps the artificial division between InfoVis and statistical visualization has kept it from being noticed. Perhaps it’s just very hard.

1 comment » | visualization

Visualizing Obama’s voter contact operation

January 17th, 2009 — 4:59pm

Mark Blumenthal writes about new voter turnout information from the 2008 election. The following graph shows the level of voter contact from the Kerry and Obama campaigns (red = low to green = high). Obama had a broader voter contact operation, spreading resources more effectively across those with a high probability of voting and of voting Democratic.

[Image: heat map of voter contact levels]

Suggestions:

  • swap the direction of the vertical axis to put high turnout on top
  • add scale numbers: how many contacts? how high is “high turnout”?
  • since the number of contacts is nonnegative, I would use a sequential (one-sided) color scale (running from white, 0, to green) rather than a diverging scale.
  • how many people fall into each bucket? An additional grayscale plot showing the distribution of people would be helpful. Or preferably, if possible, the axes could be transformed to make the distribution of individuals roughly uniform across the plot.

Comment » | visualization

First time designing a visualization

January 15th, 2009 — 10:34pm

The CHANCE contest submission below was my first time creating a complete static visualization that tries to tell a story. It’s sort of sad that I’m in my third year as a Ph.D. student studying visualization and I hadn’t done that yet.

I found it quite satisfying. Back in the olden days when I worked in rendering there was an immense amount of satisfaction that came from getting a rendering right, both visually and algorithmically. In visualization I hadn’t felt that yet, since all of my projects so far have been rather flaky research prototypes.

Over at FlowingData, Nathan is running a biweekly visualization competition/discussion. The first installment uses US poverty statistics. This’ll be a good chance for me to get more design experience.

Comment » | visualization

My submission to the CHANCE contest

January 15th, 2009 — 10:13pm

[Image: my contest submission]

Contest description is here:
http://www.public.iastate.edu/~larsen/graphics%20contest.pdf

Comment » | visualization

R complaints

January 15th, 2009 — 3:47am

I’ve recently read a number of complaints about the R programming language and thought I’d pull together the complaints into one place.

  • Inconsistent return types, list/vector confusion (Andrew Gelman)
    I always get mixed up about when to use [] and [[]].
  • Lack of useful types (Andrew Gelman)
    Having nonnegative or [0,1)-constrained floating-point types would be quite useful in many circumstances. I haven’t used factors enough to know if they would work in most scenarios where an enumerated type is used in other languages. Having built-in random variable types would be useful too.
  • Scalability and S4 complexity issues, mixed with R coding style issues (Andrew Gelman)
    I haven’t used S4, so I can’t comment on that. However, I have found it very useful to be able to type the name of a function on the command line and see its code directly. Unfortunately, built-in functions (e.g. lapply) don’t print out (probably because they really only exist in C code). It would be nice for such functions to print out an equivalent R implementation with a note saying that it really executes in C.
  • Vector indexing issues #1, #2, #2a, #3 (Radford Neal)
    As a CS guy I find 1-based vectors hard to justify, but Radford notes a number of other issues. I’ve been bitten by the automatic dimension dropping “feature” rather frequently.
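Two of these complaints bite me regularly, so here is a minimal sketch of both the [ ] vs. [[ ]] distinction and the automatic dimension dropping:

```r
# Single vs. double brackets on a list:
x <- list(a = 1:3, b = "hi")
class(x["a"])    # "list": [ ] returns a sub-list containing the element
class(x[["a"]])  # "integer": [[ ]] extracts the element itself

# Automatic dimension dropping: selecting a single row of a matrix
# silently returns a plain vector unless drop = FALSE is given.
m <- matrix(1:6, nrow = 2)
is.matrix(m[1, ])               # FALSE: the dimensions were dropped
is.matrix(m[1, , drop = FALSE]) # TRUE: still a 1x3 matrix
```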

Comment » | Uncategorized

Non-functional elements in R

January 14th, 2009 — 11:34pm

This list is from John’s lecture:

  1. Operators and functions with side effects: (<<-, assign(), options(foo=))
  2. Nonstandard R objects: environments, connections.
  3. Random number generation
  4. Special mechanisms: “closures”

According to the R language definition, environments are mutable objects, so changes made to them inside a function are visible outside of the function. R closures can have side effects because when an R function returns a list of closures, those closures share the same environment. Since the environment is mutable, using <<- in any of the closures affects the other closures as well.
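The environment behavior can be seen directly, without closures; a minimal sketch:

```r
# Environments are effectively passed by reference: a function that
# modifies an environment argument changes it for the caller too.
e <- new.env()
e$x <- 1
bump <- function(env) env$x <- env$x + 1  # mutates the caller's environment
bump(e)
e$x  # now 2: the change made inside bump is visible outside
```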

Example (inspired by a more involved example in John’s Software for Data Analysis):

> Counter <- function(start) {
+ t <- start
+ list(
+ inc = function() {t <<- t+1; t},
+ dec = function() {t <<- t-1; t} )
+ }
>
> counter <- Counter(5)
> counter[["dec"]]()
[1] 4
> counter[["dec"]]()
[1] 3
> counter[["inc"]]()
[1] 4

The Counter function returns a list of two functions, “inc” and “dec”. Both functions are associated with the same environment, so successive calls to “inc” and “dec” operate on the same t variable. This use of closures has become largely outdated with the addition of S4 objects to the language.

Comment » | Uncategorized

Back to top