
Dissociations in neuropsychological single-case studies: Should one subtract raw or standardized (z) scores?

Toraldo A.
2022-01-01

Abstract

This work tackles the problem of whether the dissociation between two performances in a single-case study should be computed as the difference between the raw scores or between the standardized (e.g. z) scores. A wrong choice can seriously inflate the probability of finding false dissociations and of missing true ones. Two common misconceptions are that (i) standardized scores are a universally valid choice, or (ii) raw scores can be subtracted when the two performances concern the same “task/test”, otherwise standardized scores are better. These and other rules are shown to fail in specific cases, and a solution is proposed in terms of an in-depth analysis of the meaning of each score. The scores that should be subtracted are those that better reflect “deficit severities” – the latent, unobservable degrees of damage to the cognitive systems being compared. Thus explicit theoretical modelling of the investigated cognitive function(s) – the “scenario” – is required. A flowchart is provided that guides such analysis and shows how a given neuropsychological scenario leads to the selection of an appropriate statistical method for detecting dissociations, introducing the critical concept of the “deficit equivalence criterion” – the definition of what exactly a non-dissociation should look like. One further, overlooked problem concerning standardized scores in general (as measures of effect size, of which neuropsychological dissociations are just one example) is that they cannot be meaningfully compared if they have different reliabilities. In conclusion, when studying dissociations, increases in false-positive and false-negative risks are likely to occur when no explicit neuropsychological theory is offered that justifies the definition of what counts as equivalent deficit severities in the two performances, and which would lead to appropriate selection of raw, standardized, or any other type of score.
More generally, the choice of any measure in any research context needs explicit theoretical modelling, without which statistical risks cannot be controlled.
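The core contrast the abstract describes can be shown with a toy numerical example. This sketch is not taken from the paper; the patient scores, control means, and control SDs below are invented purely for illustration. It shows how subtracting raw scores and subtracting z scores can suggest opposite conclusions when the two tasks have different control-group variability:

```python
def z_score(raw, ctrl_mean, ctrl_sd):
    """Standardize a raw score against a control sample (hypothetical values)."""
    return (raw - ctrl_mean) / ctrl_sd

# Invented example: the patient obtains 30/50 correct on both tasks.
# Controls average 45/50 on each, but Task A has SD 2 and Task B has SD 10.
z_a = z_score(30, 45, 2)    # -7.5: extreme relative to controls
z_b = z_score(30, 45, 10)   # -1.5: mild relative to controls

raw_diff = 30 - 30          # 0    -> no dissociation on raw scores
z_diff = z_a - z_b          # -6.0 -> apparent dissociation on z scores
```

Whether the raw difference (0) or the z difference (-6.0) is the right quantity depends, as the paper argues, on which score better tracks the latent deficit severities under an explicit theoretical scenario; neither subtraction is universally valid.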

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11571/1452291
Citations
  • PMC: 0
  • Scopus: 3
  • Web of Science: 2