How do we know if an important, causal, and controllable variable has been identified? There are two ways to answer this question. First, a useful functional analysis has treatment utility if it leads the therapist to do something differently from what was planned (Hayes et al., 1987), and a differential treatment effect is then observed (Nelson, Hayes, Jarrett, Sigmon, & McKnight, 1987). The second way of knowing whether an important functional relationship has been identified is to manipulate the variable and observe its effect in the clinical setting. There are many methods of conducting single-subject design studies that are applicable in clinical settings (e.g., Haynes & O'Brien, 2000; Haynes & Williams, 2003; Kazdin, 1982). Because of well-known heuristic biases that affect the interpretation of clinical findings (Turk & Salovey, 1988), it is important to gather data on whether a putative functional relationship actually matters. Although a variety of technical solutions exist for conducting single-subject studies in the clinical setting, simple strategies can also shed light on the utility of an analysis.
Simple data gathering procedures and ethical strategies are necessary in a clinical setting. The therapist does not want to provide ineffective treatment, nor does he or she wish to add to the client's stress unreasonably. Having said that, it is indeed ethical to inform the patient of deviations from evidence-based interventions and collect data to test whether these changes are benefiting the patient.
There are different strategies that provide higher- or lower-quality data about the utility of making a clinical change based on a hypothesized functional relationship. A weak but simple procedure that can provide probative data about whether the analysis is correct is an A-B within-patient design. A and B refer to two different treatment conditions. A is usually some baseline or steady-state condition, and B is a different treatment condition. Assume that an evidence-based intervention is being used and that the patient has either not progressed or has progressed but stabilized at some less-than-optimal level of symptom state. For example, say that the therapist has been gathering data on family functioning, and a stable, significant degree of estrangement remains. These estrangement data constitute a steady state or treatment phase we can call A. The therapist then changes treatment on the basis of a presumed functional relationship that addresses problems beyond the standard protocol. This new treatment element is the B phase. If the functional relationship (e.g., see the above discussion of tacts and mands) has an effect, then an improvement in estrangement rating should occur during the B phase. However, a change in estrangement also could have occurred by chance just when B was implemented. If no change is observed, then the absence of change is evidence the analysis is incorrect or incomplete.
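The A-B comparison described above can be sketched in a few lines of code. This is a minimal, illustrative example only: the weekly estrangement ratings and the 0-10 scale are invented for demonstration and do not come from the text.

```python
# Hypothetical sketch of an A-B phase comparison. Ratings are invented
# weekly estrangement scores on an assumed 0-10 scale (higher = worse).

def phase_mean(ratings):
    """Average rating within a single treatment phase."""
    return sum(ratings) / len(ratings)

# A phase: stable baseline under the standard protocol
a_phase = [8, 7, 8, 8, 7]
# B phase: after the change based on the hypothesized functional relationship
b_phase = [6, 5, 4, 4, 3]

change = phase_mean(a_phase) - phase_mean(b_phase)
print(f"A mean: {phase_mean(a_phase):.1f}")  # prints 7.6
print(f"B mean: {phase_mean(b_phase):.1f}")  # prints 4.4
print(f"Improvement: {change:.1f}")          # prints 3.2
```

A drop in the B-phase mean is consistent with the hypothesized relationship, but as the text notes, a single A-B comparison cannot rule out a coincidental change occurring just when B was introduced.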
A more stringent demonstration of control over estrangement would require a reversal design, or an A-B-A-B. As is implied by the name, in this type of data gathering strategy, if the first B produced a useful effect, then the baseline or steady-state condition, A, is reinstated with the expectation that estrangement would then increase. A second implementation of B and a corresponding improvement in estrangement reduces the likelihood that the relationship between treatment element B and estrangement was random. The ethical problem is obvious. The second A period would be expected to be associated with a worsening of symptoms. This is difficult for both patients and therapists to undergo. However, there is real value in knowing with greater certainty which factors will help maintain improvement and increase generalization. Sometimes treatment B cannot be undone. For example, once a patient has learned a skill, that skill cannot be taken away in the reversal portion of the treatment. It would be like teaching someone to ski and then saying, "Now pretend you don't know how to ski."
An alternative form of testing a hypothesis regarding behavior change is a multiple baseline design. In this design steady-state data are gathered and then a change strategy is implemented sequentially in different settings in the patient's life. Staying with the estrangement example, after gathering the baseline data, the therapist could teach the spouse how to better interact so that the patient could learn to tact and mand. The patient could gather data about communications both with the spouse and in another setting, such as with the patient's parents. We would expect improvement in one setting (with the spouse) but not the other (with the parents) because the intervention has only been implemented in the one setting with the spouse. The estrangement problem with the parents should remain at baseline. Next, the same intervention could be introduced to the parents, and now the estrangement should change in that setting. This logic can be applied to multiple settings; estrangement should change independently in each setting only when the treatment is implemented in that setting. This is a more elegant demonstration of control. In reality, some cross-communication could eventually occur, which would lead to improvements in settings not yet targeted for intervention. However, multiple baseline designs are convenient and avoid the reversal problem described above.
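The staggered logic of the multiple baseline design can also be sketched numerically. In this hypothetical example (all numbers invented for illustration), the intervention begins at week 4 in the spouse setting and at week 8 in the parent setting; estrangement should fall in each setting only after its own onset.

```python
# Hypothetical multiple-baseline sketch across two settings.
# Weekly estrangement ratings on an assumed 0-10 scale (invented data).

def mean(xs):
    return sum(xs) / len(xs)

def pre_post_change(ratings, onset_week):
    """Mean rating before vs. after the intervention begins in a setting."""
    pre, post = ratings[:onset_week], ratings[onset_week:]
    return mean(pre) - mean(post)

spouse = [8, 8, 7, 8, 6, 5, 4, 4, 3, 3, 2, 2]    # intervention begins week 4
parents = [7, 7, 8, 7, 7, 8, 7, 7, 5, 4, 3, 3]   # intervention begins week 8

print("Spouse change:", pre_post_change(spouse, 4))
print("Parents change:", pre_post_change(parents, 8))
# While only the spouse setting is treated (weeks 4-7), the parent setting
# should remain near its baseline level:
print("Parents weeks 4-7 mean:", mean(parents[4:8]))
```

The key evidence is the pattern: each setting improves only after its own intervention onset, which is harder to attribute to chance than a single A-B change.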
One other design that is very convenient and natural to use is the alternating treatment design. This type of design can be used within a single session or across multiple sessions to gather evidence about the utility of a hypothesized functional relationship. In this design the therapist uses one type of intervention in one session and a different type in another session. If the therapist is gathering some kind of clinically relevant data, he or she can do a simple comparison of how the patient responds to each intervention. If there are no differences in how the patient responds, there is no evidence that the change in therapy procedure works, though the reasons for the failure could be complicated to interpret.
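The session-by-session comparison in an alternating treatment design amounts to grouping clinically relevant ratings by which intervention was used and comparing the groups. A minimal sketch, with invented session labels and ratings, might look like this:

```python
# Hypothetical alternating-treatment comparison: each session is tagged with
# the intervention used ("A" or "B") and a clinically relevant rating
# (invented numbers on an assumed 0-10 scale).
from collections import defaultdict

sessions = [
    ("A", 7), ("B", 5), ("A", 7), ("B", 4), ("A", 6), ("B", 4),
]

by_condition = defaultdict(list)
for condition, rating in sessions:
    by_condition[condition].append(rating)

for condition, ratings in sorted(by_condition.items()):
    print(condition, sum(ratings) / len(ratings))
```

If the two condition means are essentially equal, there is no evidence that the procedural change matters, mirroring the point above that a null result still requires careful interpretation.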
Employing these simple designs to test a functional analysis is useful because functional analyses are dynamic, often starting out incomplete and becoming more refined as data are gathered. As cautioned above, a functional relationship is limited in the domains it can influence and in the time during which it is useful. Once one clinical problem is resolved, another may emerge that requires a whole new analysis. It is this idiographic procedure that can produce individualized treatment plans that can greatly supplement the evidence-based treatment of individuals with PTSD.