The readings listed below are the foundation of this course. Where available, journal article abstracts from PubMed (an online database providing access to citations from biomedical literature) are included.
Franklin, J. The Science of Conjecture: Evidence and Probability Before Pascal. Johns Hopkins University Press, 2001.
Jaynes, E. T. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
Gigerenzer, G., and D. J. Murray. Cognition as Intuitive Statistics. Hillsdale, NJ: Erlbaum, 1987.
Gilovich, T., D. Griffin, and D. Kahneman, eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002.
Kahneman, D., P. Slovic, and A. Tversky, eds. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982.
Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann, 1988.
Breese, J. S. "Construction of Belief and Decision Networks." Computational Intelligence 8, 4 (1992): 624–647.
Bacchus, F., A. J. Grove, J. Y. Halpern, and D. Koller. "Statistical Foundations for Default Reasoning." Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI). Chambéry, France, August 1993, pp. 563–569.
Pasula, H., and S. Russell. "Approximate Inference for First-order Probabilistic Languages." Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI). Seattle, WA, 2001, pp. 741–748.
Halpern, J. Y. "An Analysis of First-order Logics of Probability." Artificial Intelligence 46, 3 (1990): 311–350.
Koller, D., and A. Pfeffer. "Object-Oriented Bayesian Networks." Proceedings of the 13th Annual Conference on Uncertainty in Artificial Intelligence (UAI). Providence, RI, 1997, pp. 302–313.
Waldmann, M. R. "Competition among Causes but not Effects in Predictive and Diagnostic Learning." Journal of Experimental Psychology: Learning, Memory, and Cognition 26 (2000): 53–76.
Ahn, W., and M. Dennis. "Induction of Causal Chains." Proceedings of the Twenty-second Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates, 2000.
Dennis, M. J., and W. Ahn. "Primacy in Causal Strength Judgments." Memory & Cognition 29 (2001): 152–164.
Cheng, P. W. "From Covariation to Causation: A Causal Power Theory." Psychological Review 104 (1997): 367–405.
Novick, L. R., and P. W. Cheng. "Assessing Interactive Causal Influence." Psychological Review. (in press)
Pearl, J. Causality: Models, Reasoning, and Inference. New York: Cambridge University Press, 2000.
Glymour, C. "Learning, Prediction and Causal Bayes Nets." Trends in Cognitive Sciences 7 (2003): 43–48.
PubMed abstract: Recent research in cognitive and developmental psychology on acquiring and using causal knowledge uses the causal Bayes net formalism, which simultaneously represents hypotheses about causal relations, probability relations, and effects of interventions. The formalism provides new normative standards for reinterpreting experiments on human judgment, offers a precise interpretation of mechanisms, and allows generalizations of existing theories of causal learning. Combined with hypotheses about learning algorithms, the formalism makes predictions about inferences in many experimental designs beyond the classical, Pavlovian cue → effect design.
———. The Mind's Arrows: Bayes Nets and Graphical Causal Models in Psychology. MIT Press, 2001.
Zhang, N., and D. Poole. "Exploiting Causal Independence in Bayesian Network Inference." Journal of Artificial Intelligence Research 5 (1996): 301–328.
Waldmann, M. R. "Knowledge-based Causal Induction." The Psychology of Learning and Motivation, Vol. 34: Causal Learning. Edited by D. R. Shanks, K. J. Holyoak, and D. L. Medin. San Diego: Academic Press, 1996, pp. 47–88.
Ahn, W., C. W. Kalish, D. L. Medin, and S. A. Gelman. "The Role of Covariation vs. Mechanism Information in Causal Attribution." Cognition 54 (1995): 299–352.
PubMed abstract: Traditional approaches to causal attribution propose that information about covariation of factors is used to identify causes of events. In contrast, we present a series of studies showing that people seek out and prefer information about causal mechanisms rather than information about covariation. Experiments 1, 2 and 3 asked subjects to indicate the kind of information they would need for causal attribution. The subjects tended to seek out information that would provide evidence for or against hypotheses about underlying mechanisms. When asked to provide causes, the subjects' descriptions were also based on causal mechanisms. In Experiment 4, subjects received pieces of conflicting evidence matching in covariation values but differing in whether the evidence included some statement of a mechanism. The influence of evidence was significantly stronger when it included mechanism information. We conclude that people do not treat the task of causal attribution as one of identifying a novel causal relationship between arbitrary factors by relying solely on covariation information. Rather, people attempt to seek out causal mechanisms in developing a causal explanation for a specific event.
Ahn, W., L. Novick, and N. S. Kim. "Understanding it Makes it More Normal: Causal Explanations Influence Person Perception." Psychonomic Bulletin & Review. (in press)
Anderson, J. R. The Adaptive Character of Thought. Hillsdale, NJ: Lawrence Erlbaum Associates, 1990.
Ahn, W., and L. M. Graham. "The Impact of Necessity and Sufficiency on Information Choices in the Wason Four-card Selection Task." Psychological Science 10 (1999): 237–242.
Oaksford, M., and N. Chater. "A Rational Analysis of the Selection Task as Optimal Data Selection." Psychological Review 101 (1994): 608–631.
Oaksford M., and N. Chater, eds. Rational Models of Cognition. Oxford University Press, 1998.
Sperber, D., F. Cara, and V. Girotto. "Relevance Theory Explains the Selection Task." Cognition 57 (1995): 31–95.
PubMed abstract: We propose a general and predictive explanation of the Wason Selection Task (where subjects are asked to select evidence for testing a conditional "rule"). Our explanation is based on a reanalysis of the task, and on Relevance Theory. We argue that subjects' selections in all true versions of the Selection Task result from the following procedure. Subjects infer from the rule directly testable consequences. They infer them in their order of accessibility, and stop when the resulting interpretation of the rule meets their expectations of relevance. Subjects then select the cards that may test the consequences they have inferred from the rule. Order of accessibility of consequences and expectations of relevance vary with rule and context, and so, therefore, does subjects' performance. By devising appropriate rule-context pairs, we predict that correct performance can be elicited in any conceptual domain. We corroborate this prediction with four experiments. We argue that past results properly reanalyzed confirm our account. We discuss the relevance of the Selection Task to the study of reasoning.
Camerer, C. Behavioral Game Theory: Experiments in Strategic Interaction (Roundtable Series in Behavioral Economics). Princeton University Press, 2003.