On conceptual rigor: “What do we take ourselves to be doing?”


Key takeaways


  • Beyond questions of method, there is sometimes a deeper question to be asked: why are we doing what we are doing? What is the conceptual basis for the research we’re conducting?
  • There is frequently a need not just for more methodological and statistical rigor, but for more conceptual rigor too.
  • False positives arise from under-powered studies, small true effect sizes, poorly defined hypotheses, exploratory analyses of large numbers of variables, excessive degrees of freedom in the analyses (including mining through covariates), lack of correction for multiple tests, and lack of internal replication – effectively, fishing for significance in noisy data.
  • The result? A decade’s worth of work that generated a literature of false positives, and a secondary literature aiming to work out the mechanisms underlying signals that were not real in the first place.
    • Note: The false positives are not just bad in themselves; they also trigger many new studies that are pointless because they are built on a flawed foundation.
    • A kind of “unlucky winner” effect: there is a hidden literature of negative findings, and whoever happens to get the false positive will take it for a true signal and chase smoke.
  • So-called brain-wide association studies (or BWAS, where thousands of neuroimaging parameters are compared across groups of cases and controls, looking for some kind of difference somewhere) require the same kind of methodological rigor as GWAS.
  • (Indeed, the literature claiming to have found all kinds of imaging “biomarkers” of all kinds of psychiatric conditions is as inconsistent and irreproducible as the candidate gene association literature).
  • The general solutions are pretty clear: bigger samples, fewer degrees of freedom in the analyses, proper corrections for multiple testing, independent samples for replication of exploratory findings before publication, and publishing both positive and negative results.
  • Philosophers have a phrase they often employ, when talking or writing about their work – kind of a self-narration of the processes of thought. They will often say: “What I take myself to be doing here is…”.
  • It’s a discipline of reflection on the activity one is engaged in. It forces you to make explicit the premises and assumptions on which a line of reasoning is based.
  • Philosophers make these declarations precisely so that others can challenge them, but it is, more fundamentally, a useful exercise in clarifying your own thoughts to yourself.
  • In contrast to conditions where a mutated gene maps directly onto a disrupted biological process, “the biology” of something like autism or schizophrenia manifests at the level of the highest functions of the human mind – social cognition, conscious perception, language, organised thought. Finding the genes that convey some risk for the condition is highly unlikely to immediately inform on the biology underpinning those cognitive processes and psychological phenomena.
  • The genetics has clearly shown us that “autism” is an umbrella term that describes an emergent cluster of symptoms at the psychological and behavioral levels, linked with extremely diverse genetic etiologies. Should we then expect some commonalities across patients at the level of brain structure, though they may have a hundred different genetic causes? Would we propose such an experiment for a category like “intellectual disability”?
    • Note: Equifinality – many different starting causes converging on the same end state.
  • And it is not obvious, under that model, why so many distinct genetic etiologies would then lead to a consistent signature across patients.
  • You could do these kinds of projects with all the statistical and methodological rigor that could possibly be brought to bear and still not learn anything useful, if they are not founded on clear conceptual premises.
  • To sum up, the focus on improving statistical and methodological rigor in these and in all fields is crucial if we want to make our science robust and reproducible. But, if we don’t take the time to ask ourselves what we take ourselves to be doing, we’re likely to end up doing something other than what we think. Poorly conceptualised experiments can waste just as much time and resources as poorly executed ones.
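The statistical failure mode described in the takeaways above – fishing through many variables without correction – is easy to demonstrate. Here is a minimal sketch in Python (my own illustration, not from the source; sample sizes and thresholds are arbitrary choices): two groups are compared on 1,000 variables where no true effect exists, and an uncorrected 0.05 threshold still produces dozens of “hits”, while a Bonferroni-corrected threshold suppresses nearly all of them.

```python
import random

random.seed(42)

def simulate_null_study(n_variables=1000, n_per_group=20):
    """Compare cases vs controls on many variables with NO true effect,
    counting 'significant' differences with and without correction."""
    # Two-sided critical z for alpha = 0.05 is ~1.96;
    # for Bonferroni-corrected alpha = 0.05/1000 it is ~4.06.
    z_uncorrected = 1.96
    z_bonferroni = 4.06
    uncorrected_hits = 0
    bonferroni_hits = 0
    for _ in range(n_variables):
        # Both groups drawn from the SAME distribution: any "difference"
        # that reaches significance is a false positive by construction.
        cases = [random.gauss(0, 1) for _ in range(n_per_group)]
        controls = [random.gauss(0, 1) for _ in range(n_per_group)]
        mean_diff = sum(cases) / n_per_group - sum(controls) / n_per_group
        # Standard error of a difference of two means (known sd = 1)
        se = (2 / n_per_group) ** 0.5
        z = abs(mean_diff) / se
        if z > z_uncorrected:
            uncorrected_hits += 1
        if z > z_bonferroni:
            bonferroni_hits += 1
    return uncorrected_hits, bonferroni_hits

uncorrected, corrected = simulate_null_study()
print(f"uncorrected hits: {uncorrected}, Bonferroni-corrected hits: {corrected}")
```

With purely null data, roughly 5% of the 1,000 uncorrected tests come up “significant” by chance – a ready-made literature of false positives – whereas the corrected threshold typically yields zero. Bigger samples and correction don’t manufacture truth, but they stop noise from masquerading as signal.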