“MEASUREMENT: WHY WE GET NO R.E.S.P.E.C.T.”

Psychologists, therapists, and researchers in mental health:

How many times have you been at a party and told someone you’re a psychologist only to hear, “So, you can read my mind?”… or, “Can you analyze my dreams for me?”

Why does this happen?

The public has no idea what we do. At best, we’re often perceived as paid friends or mistaken for psychiatrists.

And maybe we could live with this, but it isn’t just the public.

The National Institute of Mental Health (NIMH) is a major funder of our research… or it used to be. The recent move toward the Research Domain Criteria (RDoC) initiative for NIMH funding means that obtaining a grant for an RCT now requires heavily integrating the investigation of possible biological factors into your study. This has occurred despite the fact that most of the field (psychology) agrees that biological components aren’t the driving contributors to most maladaptive behavior. In fact, years of searching for specific biological profiles for diagnoses has turned up little useful information. Still, we keep searching for the right blood test, fMRI, EEG, or other marker that will diagnose people. Why?

Because it makes what we do ‘real’ for them.

So, what are the consequences of this search for ‘real’… thing-y-ness in mental health?

If you’re a clinician not seeking research funding, you may not immediately contact what this means for you. So, here’s my take: if you’re using Beckian CBT or even ACT, you’re probably fine. We’ve already got loads of RCTs to show these ‘work’, which means you can probably count on insurance companies giving you less of a hassle over treatment reimbursement.

If, by chance, you’re using anything else that has had few RCTs, you might eventually have problems if we can’t get said treatment designated an ‘Empirically Supported Treatment’ through the current standard of massive and repeated RCTs. (Ahem… FAP. One of our most behavior-analytic, in-the-moment treatments struggles with RCTs because it is based on our most effective tool: functional analysis. Functional analysis is idiographic and doesn’t easily conform to RCT methodology. This is part of the reason for the build-out of the ACL model… a need to standardize functional analysis.)

Well, I’m sorry, but if we have to alter a treatment driven by a tool we all respect, then our overall measurement/methodology strategy sucks. In fact, psychoanalysts were saying this about RCTs from the beginning, but when we were in a footrace with them it was a little hard to hear the truth in it.

So, what I’m getting at here is several levels of pervasive problems in our field… but thankfully, they’re related.

Some of you may not like what I say here. I fully expect to get a few angry emails (Save it, prove me wrong with data.).

So, here’s my analysis of what’s causing these problems:

In a word: Measurement!

In a few words: Reifying rigidity! Constructs! And a lack of integration!

Okay, I’m probably at a level of geekery here that few will understand, so here’s what I’m talking about.

So, why am I picking on constructs?

We all use constructs. We have to so we can get through the day. Clinicians can’t walk around explaining to each other from the ground up what “psychological flexibility”, “response flexibility”, “borderline”, “depression”, or anything else means. That’s impractical. But we do need to continually contact the effect this shorthand has on our methods and on how the world perceives us. Then we need to choose our level of analysis appropriately.

If we assess only at the level of constructs without awareness of the consequences then we’re essentially shooting ourselves in the foot. 

We’ve measured mostly in constructs because measuring real behavior was HARD. We know that behavior, and reports of behavior, vary by context (e.g., mood-state bias, retrospective-report bias, rule-governed behavior, and the list goes on…), so we’ve tried to standardize the heck out of our measures. We’ve measured mid-level concepts that attempt to represent whole clusters of supposedly important relationships. Then, because the public wouldn’t understand these… we have to integrate symptom inventories to give it all some ‘realness’. It’s a chain reaction.

When we measure constructs, we need them to hold still and mean something, so we apply psychometric rules that assume thing-y-ness and stability in these airy clouds of invention. Then we make it ‘real’ with symptom inventories that use diagnostic labels the public gets, but which we know have poor-as-hell diagnostic reliability (not surprising, since they’re essentially menu-style creations. Congrats! Pick 5 out of 7 and ooh la la, you’re depressed.)
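To put a number on that menu problem, here’s a quick back-of-the-envelope sketch in Python (sticking with the illustrative 5-of-7 menu above; real criteria sets vary):

```python
from math import comb

SYMPTOMS = 7   # symptoms on the 'menu' (illustrative figure from above)
REQUIRED = 5   # minimum endorsed to earn the label

# How many distinct symptom presentations all receive the same diagnosis?
presentations = sum(comb(SYMPTOMS, k) for k in range(REQUIRED, SYMPTOMS + 1))
print(presentations)  # 29 distinct profiles, one label

# Two people who both qualify can share as few as 2*REQUIRED - SYMPTOMS symptoms.
min_overlap = max(0, 2 * REQUIRED - SYMPTOMS)
print(min_overlap)    # 3 -- and larger menus shrink this overlap further
```

Same label, dozens of presentations, and potentially little shared behavior between two people who carry it. That’s the reliability problem in miniature.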

Before you get ‘depressed’ reading this, let’s take a ‘beginner’s mind’ to assessment (as Todd Kashdan suggests) and look at how we can fix these problems.

Let’s build from the ground up. 

Let’s understand our assumptions and what works. Let’s start by measuring behavior, in context, across contexts. 

Contextual Behavioral Science has been moving toward this for years. Some of our brightest minds in theory, philosophy of science, treatment, and methodology have been telling us to go there (e.g., Roger Vilardaga, Kelly Koerner, Todd Kashdan, Kelly Wilson, and many others).

For the interested, here are a few citations:

Wilson, Hayes, Gregg, & Zettle (2001). Psychopathology and Psychotherapy (Chapter in Big Purple).

Wilson (2001). Some notes on constructs: Types and validation from a contextual behavioral perspective.

Hughes, Barnes-Holmes, & Vahey (2012). Holding on to our functional roots when exploring new intellectual islands: A voyage through implicit cognition research. (Covers the Relational Elaboration and Coherence model and RFT-based assessment.)

Vilardaga, Bricker, & McDonell (2014). The promise of mobile technologies and single case study designs for the study of individuals in their natural environments.

Iwata, DeLeon, & Roscoe (2013). The Functional Analysis Screening Tool (FAST).

Hurl, Wrightman, Hayes, & Virues-Ortega (2016). Does a pre-intervention functional assessment increase intervention effectiveness? A meta-analysis of within-subject interrupted time-series studies. (Spoiler alert: yes, it does.)

Since you probably didn’t click on any of those:

We have better methods now. We can use technology to assess behavior across contexts, to intervene, and to do it all rapidly and cheaply. Take a moment: look at your iPhone… that thing ‘knows’ more about you than your best friend or your spouse.
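As a minimal sketch of what that kind of in-context assessment can look like on the data side (the file name and column names here are hypothetical, purely for illustration):

```python
import pandas as pd

# Hypothetical EMA export: one row per phone prompt, each tagged with the
# context the phone logged (location category, time of day, who's present).
ema = pd.read_csv("ema_responses.csv", parse_dates=["timestamp"])

# Behavior in context, across contexts: summarize the target behavior
# separately per sampled context instead of as one global score.
by_context = (
    ema.groupby(["client_id", "context"])["target_rating"]
       .agg(["mean", "std", "count"])
)
print(by_context)
```

Nothing exotic: a few lines, and you have behavior summarized per context rather than one decontextualized total score.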

So, why aren’t we using these methods? Well, I hear you. Most of us weren’t taught in grad school to create apps, to deal with data flows that exceed the capabilities of SPSS, or to understand the intersection of technology and confidentiality. Even though we let Target (which lost tons of credit card numbers, yikes!), Apple, Best Buy, Netflix, and many others track our every move, we’re not using this technology well in the behavioral sciences.

Essentially: who has time to learn entire new areas of science (app design, UX, data science, Python, R, etc.) in order to have better and cheaper assessment?

It’s not that people aren’t trying. I certainly heard a lot of interest in Ecological Momentary Assessment (EMA), Ecological Momentary Intervention (EMI), Relational Frame Theory, and basic-to-applied links at the CBS conference this year, but these things aren’t exactly user-friendly straight out of the box.

Notably: There have been some valiant efforts to create systems of assessment and data tracking that ‘work’ for clinicians and researchers.

See:

Learn2ACT: an integrated system of Acceptance and Commitment Therapy (ACT)-driven, client-centered mobile data collection and intervention. It tracks and logs data for multiple clients and displays it for clinicians. Big props to Ellen & Bart for taking this on, from programming to testing. Release is currently scheduled for sometime in the fall (so show them some love for doing all this work for us)!

Other systems in development include Matrix-based (ACT-driven) apps out of Mike Levin’s and Benji Schoendorff’s groups. Roger Vilardaga, Jonathan Bricker, and others also have apps out that are a bit more target-specific (e.g., ACT-driven apps for psychosis, smoking cessation, etc.). (Forward me links to anything else that is evidence-based, or getting there, and I’ll consider listing it too.)

Gaining an evidence base for this technology (see Mental Health Smartphone Apps: Review and Evidence-Based Recommendations for Future Developments), while mastering all this tech, paying attention to user experience (UX), AND making people aware of these tools is a difficult process. So, as a community, I think we need to support efforts to develop technologies that make it easier for clinicians and researchers to use functional contextual behavioral assessment.

I’m working on an integrated, functional-analysis-driven assessment platform, and I need your feedback.

My concept is a bit different but also includes EMA/EMI, as this is our best CBS-consistent, context-sensitive assessment effort thus far.

Stay with me here:

I propose that we also start from basic research and theory and build a system that integrates what we know to the best of our ability: one that is functional-analysis-driven, contextually sensitive, rapid, and user-friendly. Then we make it available such that we can funnel metadata (read: de-identified behavioral data on relations) from clinicians to basic and applied researchers. (I’ll sketch what one such record could look like after the list below.) After all, those RCTs aren’t even touching how to treat complicated, multi-problem clients.

Such a system would involve:

  1. Contextualized behavioral assessment (EMA/EMI and passive assessment of biometrics. Hey, we’re not going to bowl NIMH and RDoC over all at once.)
  2. Assessment of verbal/symbolic relational behavior (aka… integrating what we know from RFT into contextualized, functional-analysis-driven assessment).
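To make that funnel concrete, here’s a rough sketch of what a single de-identified record flowing from clinician to researcher might look like. Every field name here is hypothetical; this is a thought experiment, not a finished schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional
import hashlib

@dataclass
class DeidentifiedRecord:
    """One de-identified behavioral observation (all fields illustrative)."""
    client_hash: str        # salted one-way hash, not a client identifier
    timestamp: datetime
    context: str            # sampled context, e.g., "home" or "commute"
    behavior: str           # target behavior from the functional analysis
    antecedent: str         # hypothesized antecedent
    consequence: str        # observed consequence
    verbal_relation: Optional[str] = None   # optional RFT-informed relation tag

def make_client_hash(client_id: str, salt: str) -> str:
    """A salted hash lets researchers link a client's records over time
    without being able to re-identify the client."""
    return hashlib.sha256((salt + client_id).encode()).hexdigest()
```

The point of the ABC-style fields is that what travels to researchers is functional-analysis data (antecedent, behavior, consequence, in context), not diagnostic labels.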

Note: You won’t have to go read Big Purple to use this system. We’re planning to present relations in pretty visual analytics that even clients can make sense of. We’d like to make explaining relationships (between verbal behavior and other verbal behavior, or between verbal behavior and EMA/EMI passive behavioral data) functional. Wouldn’t it be nice if you could demonstrate your outcomes in forms that show you make ‘real’ change in the lives of your clients?
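As a toy example of one such client-friendly view (reusing the hypothetical EMA export from earlier; the client ID and column names are invented):

```python
import matplotlib.pyplot as plt
import pandas as pd

ema = pd.read_csv("ema_responses.csv", parse_dates=["timestamp"])
one_client = ema[ema["client_id"] == "c01"]

# Plot the target behavior over time, one line per sampled context, so
# 'change' shows up as diverging trajectories rather than a single test score.
for context, grp in one_client.groupby("context"):
    plt.plot(grp["timestamp"], grp["target_rating"], marker="o", label=context)
plt.xlabel("Date")
plt.ylabel("Target behavior rating (0-10)")
plt.legend(title="Context")
plt.title("One client's target behavior, across contexts")
plt.show()
```

A client can read that plot. No psychometrics lecture required.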

See my previous post on RFT, Relational Frame Theory (RFT): What’s the big deal?, and Hayes & Berens (2004), Why Relational Frame Theory alters the relationship between basic and applied behavioral psychology, for why RFT is important to this. If your mind just squealed, “but relating and frames are just constructs!”, see my future post on empirical logic and the difference between reifying constructs and properties.

Essentially, we need to add in RFT because we know that verbal/symbolic relations can influence behavior in the moment more powerfully than the actual contingencies do. Additionally, integrating RFT allows us to step back and forth from behavior, to intervention, to the appropriate level of measurement across diagnoses and therapy orientations, giving us maximum flexibility and applicability.

I understand that many of you may be thinking at this point… so, are we talking about assessing the content of language? Word counts?

Well, no and yes… we do look at verbal content, but we can also look at functional relations indicated between one instance of verbal relating and another, or between verbal relating and other behavioral measures. I’ll save the details for another post.
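Just to make the ‘word counts’ idea concrete, here’s a toy sketch of a category-based language measure in that spirit. The lexicons below are invented and unvalidated; a real measure would be built and validated empirically.

```python
import re
from collections import Counter

# Toy lexicons -- invented for illustration only, not validated categories.
LEXICON = {
    "acceptance": {"allow", "accept", "willing", "open"},
    "avoidance": {"avoid", "escape", "suppress", "distract"},
}

def category_rates(transcript: str) -> dict:
    """Rate of each category's words per 100 words of a session transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {
        cat: 100 * sum(counts[w] for w in vocab) / total
        for cat, vocab in LEXICON.items()
    }

print(category_rates("I tried to avoid the feeling, then chose to allow it."))
```

Crude, yes, but it shows how language sampled in session can become trackable behavioral data rather than an impression.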

For now, here’s some groundwork within CBS that supports attempting to assess verbal/symbolic relating through language:

Atkins & Styles (2016). Measuring self and rules in what people say: Exploring whether self-discrimination predicts long-term well-being (ACBS membership needed to view).

Collins, Chawla…Marlatt (2009). Language-based measures of mindfulness: Initial validity and utility

If you’re interested in learning more about clinical behavior analysis, RFT, and advanced measurement methods, let us know in the comments below! We also have some online, on-demand training events on a variety of topics that may interest you.

Angela Coreil, PhD

Consultant and Educator

Angela J. Coreil, PhD works with individuals and organizations to promote better connected, purposeful, and effective living through behavior analytic principles. She has over a decade of clinical experience treating human suffering and promoting human excellence using Acceptance and Commitment Therapy (ACT) and other behavioral therapies. She now focuses on the promotion and translation of Clinical Behavior Analysis as a way to improve our science.
