March 30, 2015

Causality, part II (was it caused by Part I?)

This post serves as a follow-up to a Synthetic Daisies post written in 2012 on new methods to detect causality in data.

Here are a few interesting readings at the intersection of data analysis and the philosophy of science. The first is a new arXiv paper [2] that evaluates two machine learning approaches to detecting causality. A plethora of discriminative machine learning techniques have emerged in recent years to address relatively simple relationships, but the signal that distinguishes cause from effect is often subtle and unclear, even for seemingly obvious sets of relationships. In [2], two techniques are compared: Additive Noise Methods [3] and Information Geometric Causal Inference [4]. A dataset called CauseEffectPairs [5] was used to benchmark each method, and to show that causal relationships can be uncovered from a wide variety of data.
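The core idea behind the Additive Noise Methods of [3] is that if Y = f(X) + noise, with the noise independent of X, then regressing Y on X yields residuals that look independent of X, while the reverse regression does not. Below is a minimal sketch of that asymmetry, with assumptions worth flagging: the polynomial regression, the variance-based proxy for independence, and all function names are illustrative choices of mine; the actual papers use nonparametric regression and kernel independence tests such as HSIC.

```python
import numpy as np

def residual_dependence(x, y, degree=3, bins=10):
    """Fit a polynomial regression y ~ f(x) and score how strongly
    the residuals depend on x. If the residuals are independent of x,
    their variance should be roughly constant across bins of x, so we
    return the relative spread of per-bin residual variances."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    variances = np.array([residuals[idx == b].var() for b in range(bins)])
    return variances.std() / variances.mean()

def infer_direction(x, y):
    """Return 'x->y' if the residuals of y ~ f(x) look more
    independent of x than those of the reverse fit, else 'y->x'."""
    forward = residual_dependence(x, y)
    backward = residual_dependence(y, x)
    return "x->y" if forward < backward else "y->x"

# Synthetic example where x causes y with additive noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = x ** 3 + rng.normal(0, 1, 500)
print(infer_direction(x, y))
```

In the forward direction the cubic fit absorbs the signal and leaves homoscedastic noise; in the backward direction a polynomial fit of the cube-root-shaped inverse leaves structured, heteroscedastic residuals, which is the asymmetry the method exploits.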


The second paper (or rather series of papers) is on the topic of strong inference [6]. Strong inference is an alternative to hyper-reductionism and the use of over-simplified models. It involves the use of a conditional inductive tree to examine the possible causes of a given phenomenon [7]. Potential causes (or hypotheses) represent nodes of the tree, and these hypotheses are falsified as one moves through the tree using either inductive or empirical criteria. Unlike the machine learning models discussed above, the goal is to lead a researcher to key experiments that help uncover the sources of variation. In general, this process of elimination leads us to the best answers; according to Platt [6], this approach can ultimately provide us with axiomatic statements.

Conceptual steps involved in strong inference. COURTESY: Figure 1 in [8].
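The elimination step at the heart of strong inference can be sketched as a walk over candidate hypotheses, discarding each one a crucial test falsifies. The hypotheses, thresholds, and the toy "failed culture" scenario below are purely illustrative placeholders, not taken from Platt's paper:

```python
def strong_inference(hypotheses, evidence):
    """Walk a list of (hypothesis, is_falsified) pairs and discard
    each hypothesis whose falsification test rules it out against the
    evidence; the survivors point to the next crucial experiment."""
    surviving = []
    for name, is_falsified in hypotheses:
        if is_falsified(evidence):
            print(f"falsified: {name}")
        else:
            surviving.append(name)
    return surviving

# Toy example: candidate explanations for a failed cell culture.
evidence = {"ph": 7.0, "temperature_c": 44.0, "nutrient_mM": 2.0}
hypotheses = [
    ("pH out of range", lambda e: 6.5 <= e["ph"] <= 7.5),
    ("temperature too high", lambda e: e["temperature_c"] <= 40.0),
    ("nutrient depleted", lambda e: e["nutrient_mM"] >= 1.0),
]
print(strong_inference(hypotheses, evidence))
```

A flat list stands in here for Platt's conditional tree; in practice each surviving hypothesis would branch into its own sub-hypotheses and experiments.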

While this seems to be a fruitful methodology, it has turned out to be more inspirational than a source of analytical rigor [9]. Strong inference has influenced a variety of scientific fields, concentrated in the biological and social sciences. Platt predicted [6] that sciences embracing strong inference would be the fields that experienced a greater number of breakthrough advances. However, tests of Platt's predictions regarding the efficacy of strong inference have found that advances are not directly related to the adoption of the method [10]. This could be due to our incomplete understanding of the factors that drive scientific discovery and the rate of advancement.


[2] Mooij, J.M., Peters, J., Janzing, D., Zscheischler, J., and Scholkopf, B.   Distinguishing cause from effect using observational data: methods and benchmarks. arXiv, 1412.3773 (2014).

[3] Hoyer, P.O., Janzing, D., Mooij, J.M., Peters, J., and Scholkopf, B.   Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems (NIPS), 21, 689-696 (2009).

[4] Daniusis, P., Janzing, D., Mooij, J.M., Zscheischler, J., Steudel, B., Zhang, K., and Scholkopf, B. Inferring deterministic causal relations. In Proceedings of the 26th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 143-150 (2010).

[5] This work was part of the CauseEffect Pairs Challenge and was presented at NIPS 2013.

[6] Platt, J.R.   Strong Inference: certain systematic methods of scientific thinking may produce much more rapid progress than others. Science, 146(3642), 347-352 (1964).

[7] Neuroskeptic   Is Science Broken? Let's Ask Carl Popper. Neuroskeptic blog, March 15 (2015).

[8] Fudge, D.S.   Fifty years of J.R. Platt's Strong Inference. Journal of Experimental Biology, 217, 1202-1204 (2014).

[9] Davis, R.H.   Strong Inference: rationale or inspiration? Perspectives in Biology and Medicine, 49(2), 238-250 (2006).

[10] O'Donohue, W. and Buchanan, J.A.   The Weaknesses of Strong Inference. Behavior and Philosophy, 29, 1-20 (2001).
