Monday, October 15, 2007

Models that Account for the Same Data

Perhaps what our minds are doing is accounting for the data. The data is everything around us that we experience with our senses. This information is fed into our neurons, which, by virtue of their network organization, perform some sort of operation on it. That operation can be likened to a form of model fitting (for those of you familiar with the modeling world). Our neurons are in constant flux, working to represent the information we encounter in the most stable way possible: one that allows us to incorporate new information, to maintain old information, and even to let old information modify new information.

Consider the method called principal components analysis (PCA). It is nothing more than a redescription of the data in terms of a different set of dimensions. It thus appears that the same data can be understood in different ways without changing the data one bit. Moreover, choosing one set of dimensions over another depends only on one's goals or assumptions in pursuing a particular explanation or investigation.
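To make that concrete, here is a minimal sketch (using NumPy, which the post itself never mentions) of PCA as a change of basis: the same data, expressed in a new set of dimensions and recoverable without any loss.

```python
# A sketch of the point above: PCA re-expresses the same data in a new set
# of dimensions; nothing about the data itself changes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # 200 observations in 3 original dimensions
Xc = X - X.mean(axis=0)                # center the data

# Principal axes: right singular vectors of the centered data (rows of Vt)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt.T                     # the same data in PCA coordinates
reconstructed = scores @ Vt            # rotate back to the original coordinates

print(np.allclose(Xc, reconstructed))  # True: only the description changed
```

Rotating into the principal-component basis and back changes nothing about the data; only the description changes, which is the sense in which different models can account for the same data.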

So then, the question is, which approach is scientific?

1 comment:

  1. Anonymous, 1:06 PM

    I'm not sure I understand the question at the end of this post.

    Methods like PCA, ICA, and LLE don't redefine the data in terms of other dimensions; they are instead used to reduce the dimensionality of the data, or to determine either the statistically reliable or the task-relevant sub-manifold of the data.

    This approach is perfectly scientific, as science is concerned with the predictable (and the predictably useful). Moreover, these methods have well-defined generative models (except possibly LLE). As a result, the model-selection (or dimension-reduction) process has a normative solution within the Bayesian framework, and even has the benefit of automatically implementing Occam's razor.

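To illustrate the commenter's point about dimensionality reduction, here is a rough NumPy sketch of my own (not the commenter's code, and not a full Bayesian treatment): when data that nominally lives in ten dimensions actually lies near a two-dimensional sub-manifold, the PCA variance spectrum makes the reliable number of dimensions apparent, the informal version of the Occam's-razor behaviour a Bayesian model-selection procedure would give automatically.

```python
# A sketch of PCA used for dimensionality reduction: data generated near a
# 2-D plane inside a 10-D space is summarized by its top two components.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                        # true 2-D structure
mixing = rng.normal(size=(2, 10))                         # embedded in 10 dimensions
X = latent @ mixing + 0.05 * rng.normal(size=(500, 10))   # plus a little noise
Xc = X - X.mean(axis=0)

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(np.round(explained, 3))            # variance collapses after 2 components

k = 2
X_reduced = Xc @ Vt[:k].T                # the statistically reliable sub-manifold
X_approx = X_reduced @ Vt[:k]            # reconstruction from just 2 dimensions
print(np.mean((Xc - X_approx) ** 2))     # small residual: little is lost
```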