I'm a Visiting Assistant Professor of Philosophy at Tulane University. Before that, I was a Lecturer in Philosophy at Washington University in St. Louis, where I received my PhD from the Philosophy-Neuroscience-Psychology (PNP) program on May 18, 2017. I defended my dissertation, Model and World: Generalizing the Ontic Conception of Scientific Explanation, on March 10, 2017.
My research focuses on scientific explanation. My dissertation defends a theory of scientific explanation that I call the “Generalized Ontic Conception” (GOC): a model explains when and only when it provides (approximately) veridical information about the ontic structures on which the explanandum phenomenon depends. Causal and mechanistic explanations are species of GOC: in the former, the ontic structures on which the explanandum phenomenon depends are causes and the dependence is causal; in the latter, the structures are mechanisms and the dependence is constitutive/mechanistic. More generally, the kind of dependence relation about which a model provides information determines the species of the explanation. This yields an intuitive typology of explanations and opens the possibility of non-causal, non-mechanistic explanations that provide information about non-causal, non-mechanistic kinds of dependence (Pincock 2015; Povich forthcoming). What unites all these forms of explanation is that, by providing information about the ontic structures on which the explanandum phenomenon depends, they can answer what-if-things-had-been-different questions (w-questions) about that phenomenon. This is what makes causal explanations, mechanistic explanations, and non-causal, non-mechanistic explanations all explanations.
GOC is a generalized ontic conception of scientific explanation (Salmon 1984, 1989; Craver 2014). It is consistent with Craver's claim that, according to the ontic conception, commitments to ontic structures (like causes or mechanisms) are required to demarcate explanation from other scientific achievements, and GOC draws the line between explanatory and non-explanatory models in precisely those terms. For example, it cashes out the distinction between explanatory and phenomenal models in terms of the ontic structures about which information is conveyed: a phenomenal model provides information about the explanandum phenomenon, but not about the ontic structures on which it depends. GOC is generalized because it holds that achieving this demarcation adequately requires commitment to more of the ontic than just the causal-mechanical, the traditional focus of the ontic conception; attention is required to all ontic structures on which the explanandum depends.
I explicate the relation between model and world required for explanation in terms of information rather than mapping, reference, description, or similarity (Kaplan and Craver 2011; Kaplan 2011; Weisberg 2013). The latter concepts prove too strong: accounts that rely on them fail to count as explanatory some models that in fact are. Take Kaplan and Craver's (2011) model-to-mechanism-mapping (3M) principle. According to 3M, the variables in an explanatory model must map onto specific structural components of, and causal interactions within, the explanandum phenomenon's mechanism. However, one can explain without referring to the explanandum's mechanism or to its components and their activities, for example, by describing what the mechanism is not like. This is a way of constraining, and so conveying information about, a mechanism without actually mapping to, referring to, describing, representing, or being similar to it.
In future work, I plan to continue working on scientific explanation and modeling, as well as the intersection of the theory of explanation and social epistemology. I also plan to work on theories of realization, the subset theory in particular.