In most research, the strongest test of the causal hypotheses of a cognitive vulnerability model is provided by a true experimental design. The unique importance of experimental designs is that independent variables are directly manipulated, and extraneous factors, including individual differences that are present prior to the study, are controlled. For example, in "clinical trials" or therapy-outcome studies, different treatment conditions (e.g., cognitive therapy versus pharmacotherapy) represent the manipulated independent variable(s), and participants with some disorder (e.g., all with major depressive disorder) are randomly assigned to the different treatment groups or conditions. The effects of the randomly assigned independent variables (e.g., treatment conditions) are then assessed on measures of the dependent variables (e.g., scores on depression inventories). In true experimental designs, the experimental control over sources of error permits a relatively strong basis for assuming that experimental treatments cause group differences. The strong basis for causal inference reflects the fact that the experimental groups are most probably equivalent on all variables (e.g., individual differences between the participants that were present before the experiment) except those that are experimentally manipulated.
The ideal research design also incorporates additional elements that help to rule out possible sources of error. For example, the internal validity of experimental manipulations (or of what they are intended to manipulate) is established by means of manipulation checks (e.g., assessments of the fidelity of therapy treatment to a therapy manual). Additionally, "pretest-posttest designs" can be used to ensure that the experimental groups are, indeed, equated on the dependent variables of interest (e.g., depression) before the experimental treatments. "Posttest only" designs that lack pretest measures are open to the threat that their internal validity is compromised because of differences between the groups that were present, due to chance, before assignment to the treatments. In the best case scenario, the ideal research design is entirely experimental, and uses additional elements and experimental controls to rule out possible sources of error.
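To make the logic of these design elements concrete, here is a minimal sketch (with entirely hypothetical participants and scores) of random assignment to two treatment conditions, followed by a pretest balance check of the kind a pretest-posttest design relies on:

```python
import random

def randomly_assign(participants, conditions, seed=0):
    """Shuffle the participant list, then deal it round-robin into conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

def pretest_means(groups, pretest_scores):
    """Mean pretest score (e.g., a depression inventory) per condition."""
    return {c: sum(pretest_scores[p] for p in ps) / len(ps)
            for c, ps in groups.items()}

# Hypothetical pretest depression scores for six participants.
scores = {"p1": 10, "p2": 12, "p3": 11, "p4": 9, "p5": 13, "p6": 11}
groups = randomly_assign(scores, ["cognitive_therapy", "pharmacotherapy"], seed=1)
balance = pretest_means(groups, scores)  # should be roughly equal across groups
```

With chance assignment the pretest means should be approximately (not exactly) equal; a large pretest difference would signal the internal-validity threat described above.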
Needless to say, it is hard to imagine that a researcher would randomly assign participants to different conditions to experimentally manipulate their level of cognitive vulnerability (e.g., high vs. low) and stress (high vs. low). For example, for both ethical as well as pragmatic reasons, researchers would not attempt to randomly assign young children to conditions that manipulate their attachment relationships (e.g., "good" versus "bad") to test their probability of developing future emotional disorders. Thus, cognitive vulnerability researchers almost inevitably rely on other research designs, including analogue, quasi-experimental, and correlational research designs (Alloy et al., 1999).
Given that true experimental designs are usually impossible to implement in research on human cognitive vulnerability, some cognitive vulnerability research uses analogue and quasi-experimental designs. Analogue studies (which can use laboratory animals or nonclinical human participants as proxies for actual clinical patients) can sometimes have value for testing parts of cognitive vulnerability theories. For example, experimental manipulations in animal analogue studies have been used to test potential causal variables featured in the learned helplessness model of depression in humans (e.g., see Seligman, 1975). Likewise, experimental manipulations in analogue studies with humans have tested cognitive models of depression by randomly assigning normal (i.e., nondisordered) participants to different mood induction conditions (e.g., depression vs. elation vs. neutral) and assessing mood changes or changes in cognitive biases (e.g., Riskind, 1989). As detailed elsewhere (Abramson & Seligman, 1977), analogue studies must meet certain criteria if they are to have validity as analogues for human psychological problems. In particular, they must establish close similarities between the analogue model (e.g., learned helplessness) and the clinical disorder (e.g., human depression), and between essential features of the disorder (e.g., the motivational and cognitive deficits in depression) and the features of the analogue model (e.g., behavioral and learning deficits in helpless animals). In other words, these studies must establish the construct validity of both the experimental manipulations (e.g., helplessness) and of the dependent variables as models for the disorder of interest. When these criteria are met, analogue studies can provide a useful type of convergent evidence for the "construct validity" of the causal mechanisms hypothesized by a cognitive vulnerability model.
A quasi-experimental design can offer another option for testing cognitive vulnerability models. These designs contain some elements of experimental control (e.g., there is, by definition, at least one experimental manipulation), yet they are not wholly experiments because they do not assign participants on a random basis to one of the independent variables (referred to as the "quasi-experimental variable"). For example, responses of high risk (cognitively vulnerable) and low risk (nonvulnerable) individuals might be compared on an experimental laboratory task of memory that manipulates a "within-subjects" factor (e.g., the positive or negative valence of information). Such quasi-experimental designs are, to be sure, generally interpretable and wholly defensible as designs that permit causal inferences for some purposes. For example, the designs are adequate for testing causal inferences "within" the high (or low) cognitive vulnerability group, in that half of the participants are randomly assigned to serve as a control for the other half of the participants who receive a different manipulation or intervention.
At the same time, when any of the statistical analyses involve the quasi-experimental "between-group" variable of interest (e.g., the cognitive vulnerability), the design does not permit an unambiguous test of causal hypotheses. The crux of the difficulty is created by the fact that the participants are "self-selected" into the quasi-experimental groups, rather than randomly assigned. That is, the quasi-experimental groups (e.g., of cognitive vulnerability) may be inadvertently different in neuroticism, gender, other psychopathology, or any other number of third variables that are correlated with the quasi-experimental variable.
One of the main solutions that researchers using quasi-experimental designs often implement to minimize this difficulty is to use a participant matching approach. In one of the two main variants of participant matching, a researcher ensures that when the groups as a whole are compared on the third variables, they are shown not to differ ("samplewide matching"). In the second main variant, each group participant is paired (matched) on a case-by-case basis with a participant in another group who has similar characteristics ("case-by-case matching"). For example, each participant in a cognitive vulnerability group can be paired with another participant who is matched on potential confounding third variables such as gender, educational level, or another demographic detail. In addition to these variants, a further common solution is to use statistical methods such as hierarchical regression analysis or analysis of covariance to remove the effects of potential confounding variables. But none of these solutions can replace the advantages of direct experimental control and random assignment.
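Case-by-case matching can be sketched as a simple pairing procedure. The following illustrative code (hypothetical participants and confound variables) pairs each high-risk participant with an unused low-risk participant who has identical values on the listed confounds:

```python
def case_by_case_match(vulnerable, controls, keys=("gender", "education")):
    """Pair each vulnerable participant with an unused control participant
    who has identical values on the potential confounds in `keys`."""
    pairs, used = [], set()
    for v in vulnerable:
        for i, c in enumerate(controls):
            if i not in used and all(v[k] == c[k] for k in keys):
                pairs.append((v["id"], c["id"]))
                used.add(i)
                break
    return pairs

# Hypothetical high-risk (cognitively vulnerable) and low-risk participants.
high_risk = [{"id": "h1", "gender": "F", "education": "BA"},
             {"id": "h2", "gender": "M", "education": "HS"}]
low_risk  = [{"id": "l1", "gender": "M", "education": "HS"},
             {"id": "l2", "gender": "F", "education": "BA"}]

pairs = case_by_case_match(high_risk, low_risk)
# pairs -> [("h1", "l2"), ("h2", "l1")]
```

Any high-risk participant left without a match would have to be handled separately (e.g., dropped or matched on fewer variables), which is one practical cost of this variant relative to samplewide matching.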
Correlational Designs: Cross-Sectional, Look-Back, and Prospective
Still other design options that can be used to study cognitive vulnerability include "cross-sectional," "remitted disorder," and "retrospective or follow-back" designs (Alloy et al., 1999). First, cross-sectional (case control) studies compare a group with a disorder of interest to a normal control group (and, perhaps, groups with other disorders) on characteristics such as their respective scores on cognitive vulnerability measures. Such studies can be seen as preliminary tests or sources of hypotheses of potential vulnerability factors. Even so, they are wholly inadequate for establishing the temporal precedence or stability of a vulnerability independent of the symptoms of the disorder. That is, such designs are saddled with the alternative possibility that scores for the putative cognitive vulnerability are simply correlates, consequences, or "scars" of the disorder, rather than antecedent causes or risk factors of the disorder (Just et al., 2001; Lewinsohn, Steinmetz, Larson, & Franklin, 1981).
Similar difficulties interfere with the causal inferences that can be drawn from "remitted disorder" designs in which previously symptomatic individuals are examined in a remitted state to see whether a hypothesized cognitive vulnerability is present. Such designs can be useful for circumscribed purposes, such as for testing if the presence of cognitive vulnerability factors following an episode of disorder predicts relapses or recurrences of the disorder. Even so, these cannot be used to determine if the cognitive vulnerability factors of interest were actually present before the episode, or if they are really an outcome of the disorder (Just et al., 2001).
Retrospective and follow-back designs are types of longitudinal studies that "look backward" (instead of forward) over time. In retrospective studies, participants are asked to recall information about their cognitive vulnerabilities (or past stresses) before their first episodes. The main problem with these designs is that the recall of participants can be influenced by forgetting, cognitive biases, or even the presence of a current disorder (or early beginnings of disorder; Alloy et al., 1999). For example, if depressed individuals are asked to recall past life experiences, they might exhibit biased recall of stressful events or past dysfunctional attitudes as a consequence of their current depressive moods. In follow-back studies, which are more unbiased in these respects, objective records of participants are located that existed before the onset of disorder (e.g., medical records, personal diaries) and are then compared for group differences. This being said, for present purposes, an especially relevant form of follow-back studies applies content analysis techniques (Peterson, Seligman, Yurko, Martin, & Friedman, 1998) to extract cognitive vulnerability patterns (e.g., depressive attributional patterns) from verbatim material. For example, cognitive vulnerabilities may potentially be assessed by verbatim material from diaries, letters, or narratives written by participants years or even decades before. The primary reason for preferring follow-back studies over other "look backward" designs is that the "study" groups examined are compared on objective data, or at least on data that are coded objectively, independent of the experimenters' knowledge of the diagnostic status of the groups. An occasional problem such studies face, however, is that there have been changes over time in the methods or procedures by which the original objective data were recorded.
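A content analysis of verbatim material can be illustrated with a deliberately simplified keyword coder. The marker list below is hypothetical; actual schemes such as the CAVE technique rely on trained human raters rather than keyword counts:

```python
# Hypothetical markers of a pessimistic (internal, stable, global)
# attributional style; purely illustrative, not a validated coding scheme.
PESSIMISTIC_MARKERS = {"my fault", "always", "never", "everything", "hopeless"}

def pessimism_score(text):
    """Count pessimistic-attribution markers per 100 words of verbatim text."""
    words = text.lower().split()
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(marker) for marker in PESSIMISTIC_MARKERS)
    return 100 * hits / len(words)

# Hypothetical diary excerpt written years before any onset of disorder.
diary = "It was my fault again; nothing ever works and it feels hopeless."
score = pessimism_score(diary)
```

Because such scoring can be applied mechanically, blind to diagnostic status, it preserves the key advantage of follow-back designs noted above: the coding is independent of the experimenters' knowledge of group membership.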
Caution is needed, however, when using most of the preceding designs ("look backward," remitted disorder, cross-sectional designs) because, in testing cognitive models, individuals in the disorder groups may have developed clinical disorders for highly heterogeneous etiological reasons. For example, the origins of the depressive disorders for some individuals may reside in a biological diathesis or dysfunctional interpersonal patterns, not the hypothesized cognitive vulnerability. Hence, if researchers were simply to compare individuals with or without the emotional disorders, then they may be examining superficially similar disorders that are in fact generated by quite different causal processes. Participants may thus present with emotional disorders that have seemingly similar phenotypes, but different underlying causes or genotypes (see chap. 2, this vol., on hopelessness depression). If it should turn out that no differences in the hypothesized cognitive vulnerability are obtained, then the null results represent an ambiguous basis for inference. For example, it might simply be that an incorrect "subtype" of disorder was selected (not the one with the putative cognitive vulnerability). To date, the main work emphasizing this general proposition has focused on depression. Moreover, a variety of depression theories have advanced "specific symptom" hypotheses about distinct constellations of symptoms associated with specific vulnerability factors (e.g., chap. 2, this vol.; also see Blatt & Zuroff, 1992; Beck, Epstein, & Harrison, 1983). By the same token, it could also prove important for many other emotional disorders or their subtypes to distinguish them by their putative causal processes or underlying genotypes.
Given the normal inability to implement true experimental studies of the development of emotional disorder, it seems safe to say that prospective ("look-forward"), longitudinal studies are the preferred design. Thus, in the "real" best case, a prospective design is used in which the potential cognitive vulnerability is assessed in participants prior to the onset of an episode (or symptoms) of the disorder of interest. In such a design, the cognitive vulnerability is measured first, and the symptoms or diagnoses of the disorder are assessed at a later point in time. On the basis of these features, prospective designs can help to establish both the vulnerability factor's temporal precedence and independence from symptoms (Alloy et al., 1999). Still, these are not the only benefits of a prospective design. For example, an additional reason to prefer prospective studies is that high risk participants have not yet experienced the clinical disorder. Thus, this design removes the potentially confounding effects of the previous presence of the disorders (e.g., of medication, hospitalization). Moreover, possible experimenter bias is eliminated because the researcher does not know who will eventually develop the disorder. It is also worth noting that prospective studies can be used to establish if the hypothesized cognitive vulnerability applies with specificity to the clinical disorder of interest and not other disorders (i.e., discriminant validity).
Finally, the most preferred form of prospective, longitudinal design is the behavioral high risk design (Alloy et al., 1999). In this kind of study, participants are selected who are presently nondisordered but who have behavioral (or cognitive) characteristics postulated to make them vulnerable to possibly developing a particular disorder. These "high risk" participants are then followed prospectively over multiple points in time, along with a comparison group of individuals who score low on the hypothesized risk factor. The behavioral high risk design has the advantage of allowing the researcher to establish the precedence and stability of the hypothesized cognitive vulnerability factor in individuals who do not presently possess the disorder of interest. Another benefit is that the design allows the researcher to examine the role of other factors (e.g., stress, protective factors) in influencing which high risk participants later develop the emotional disorder. Nevertheless, care is still required in such behavioral high risk studies and in selecting participants. For example, it is necessary to ensure the retention of participants because of the possibility of differences in attrition (mortality) between groups that can undermine the validity of results.
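The differential-attrition concern raised above can be monitored with a simple bookkeeping check over the follow-up waves. The group rosters here are hypothetical:

```python
def attrition_rate(enrolled, completed):
    """Fraction of enrolled participants lost to follow-up."""
    return (len(enrolled) - len(completed)) / len(enrolled)

# Hypothetical follow-up records for the two risk groups.
high_risk_enrolled  = ["h1", "h2", "h3", "h4", "h5"]
high_risk_completed = ["h1", "h2", "h3", "h4"]
low_risk_enrolled   = ["l1", "l2", "l3", "l4", "l5"]
low_risk_completed  = ["l1", "l2"]

high_loss = attrition_rate(high_risk_enrolled, high_risk_completed)  # 0.2
low_loss  = attrition_rate(low_risk_enrolled, low_risk_completed)    # 0.6
# A large gap between the two rates would flag differential attrition,
# which can undermine the validity of group comparisons.
differential = abs(high_loss - low_loss)
```

In this illustrative case the low-risk group loses participants at three times the rate of the high-risk group, which would warrant examining whether dropouts differ systematically from completers.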
In addition, the generalizability or "external validity" of all these study designs must be examined with care. In other words, it can be an open question as to whether the results generalize or transfer from the specific study sample to other parts of the population. Thus, other studies may be necessary to show that the results generalize to dissimilar regional, ethnic, and socioeconomic segments of the population (for more on sampling strategies, such as heterogeneous sampling, see Alloy et al., 1999).
To summarize the main ideas of this section on design, the ideal design for testing cognitive vulnerability models (i.e., the true experimental study) cannot usually be implemented to examine the development of emotional disorders, and the methodological limitations of the alternative designs can leave conclusions highly uncertain. Behavioral high risk studies provide a good compromise, and are clearly the best designs currently available. However, ultimately, evidence from multiple designs may provide the most compelling convergent validity for the effects of cognitive vulnerabilities. Thus, other research designs can provide useful supplemental support for the construct validity of the hypothesized cognitive vulnerability (e.g., for proposed information-processing biases).