Using Clinical Scales in Child Psychiatry
The Carlat Child Psychiatry Report, Volume 13, Number 1&2, January 2022
https://www.thecarlatreport.com/newsletter-issue/ccprv13n1-2/
Topics: Assessment | Diagnosis | Outcome tracking | Outcomes
Rajeev Krishna, MD, PhD, MBA
Medical Director of Inpatient Services for Behavioral Health at Nationwide Children’s Hospital in Columbus, OH. Dr. Krishna’s comments are his private opinions and do not represent Nationwide Children’s Hospital.
Dr. Krishna has disclosed no relevant financial or other interests in any commercial companies pertaining to this educational activity.
CCPR: Welcome, Dr. Krishna. Could you tell us what drives your work in measurement-based care?
Dr. Krishna: Sure. In addition to being a child psychiatrist, I have a PhD in computer engineering. I’m a member of the American Academy of Child and Adolescent Psychiatry’s Healthcare Access and Economics Committee. I am an engineer at heart, and I’m interested in using technology to improve the quality and accessibility of care. Measurement-based care is a natural nexus of these interests, and I believe effective use of self-reported outcome measures can dramatically improve the quality and efficiency of the services we provide.
CCPR: What’s the rationale for using structured scales in assessment and treatment?
Dr. Krishna: We know from other medical disciplines that outcomes-based approaches improve the speed and quality of clinical improvement. In psychiatry, treating to a specific outcome and measuring progress results in more intentionally assertive treatment and more willingness to change directions when the current approach isn’t working (Guo T et al, Am J Psychiatry 2015;172(10):1004–1013). Structured scales add precision and efficiency when time is limited. If the patient and family have already filled out assessments prior to the appointment, I have a wealth of information before we even start.
CCPR: In autism we have measures we can use every session, but the outcomes are bigger measures that we administer about every six months, like the Childhood Autism Rating Scale (CARS), to see if we’ve made a dent in the overall pathology.
Dr. Krishna: The autism world is ahead of the curve. Every kid with a behavior plan has measures, whether it’s frequency of aggressive episodes or of inappropriate social interactions or what have you. That is the essence of measurement-based care. It’s not measuring everything under the sun all the time but targeting some areas until we agree that we need to work on something else. Treatment is dynamic. You adjust your targets based on trade-offs, such as balancing side effects with symptom control (Lambert MJ, Psychotherapy Research 2007;17(1):1–14). Consider a medical example: You wouldn’t be OK with your physician treating your diabetes without checking your hemoglobin A1C. You need that measure to make choices about diet and medications. But in psychiatry we routinely just ask about symptoms during follow-up.
CCPR: What other studies show how outcome measurement improves child or adolescent psychiatric care?
Dr. Krishna: As clinicians we often disagree on diagnoses but agree more when we add structured instruments (Galanter CA and Patel VL, J Child Psychol Psychiatry 2005;46(7):675–689). In psychiatry we want to track remission, so we want measures that are sensitive enough to pick up on whether there is still a problem and specific to that particular condition. A number of scales have long-established utility. For example, PHQ-9 scores greater than 9 are 89.5% sensitive and 77.5% specific for picking up DSM-IV major depression in children (Richardson LP et al, Pediatrics 2010;126(6):1117–1123). The Brief Child Mania Rating Scale-Parent (B-CMRS-P) has 84% sensitivity and 83% specificity for differentiating bipolar disorder from ADHD (Henry DB et al, J Clin Psychol 2008;64(4):368–381). For the Children’s Yale-Brown Obsessive Compulsive Scale (CY-BOCS), a score of 14 predicts remission with a sensitivity of 0.91 and specificity of 0.90 (Storch EA et al, J Am Acad Child Adolesc Psychiatry 2010;49(7):708–717). These scales do not replace clinical assessment, but they are great for helping you know whether you are on the right track.
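These cutoffs work like any lab reference range: you compare a total score against a published threshold and use the result to inform, not dictate, clinical judgment. Below is a minimal illustrative sketch of that logic in Python; the function names are hypothetical, and the thresholds are simply the ones quoted above from the cited studies, not an official scoring manual.

```python
# Illustrative only: compares total scores against the cutoffs quoted above.
# Function names are hypothetical; the thresholds come from the studies cited
# in the interview, not from an official scoring manual.

def flag_phq9(total_score: int) -> str:
    """Flag a PHQ-9 total against the >9 screening cutoff quoted above."""
    return ("positive screen for major depression"
            if total_score > 9 else "below screening cutoff")

def flag_cybocs_remission(total_score: int) -> str:
    """Flag a CY-BOCS total against the score-of-14 remission threshold quoted above."""
    # Assumes "a score of 14 predicts remission" means 14 or below.
    return ("consistent with remission"
            if total_score <= 14 else "ongoing OCD symptoms likely")

if __name__ == "__main__":
    print(flag_phq9(12))              # positive screen for major depression
    print(flag_cybocs_remission(10))  # consistent with remission
```

As with any lab value, the flag is a prompt for clinical judgment, not a diagnosis.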
CCPR: What’s your sense of how many psychiatrists currently use scales?
Dr. Krishna: There has been a reluctance to move toward using scales. We collected some unofficial data from the AACAP General Assembly group a couple of years ago and found that less than 25% of child psychiatrists were using scales consistently at a level that would reflect outcomes-based care. The reasons for this could be educational, generational, or regulatory. Psychiatrists primarily use their clinical interview and judgment. While we are using scales at an increasing rate, we tend to use scales that we learned during training. There are not a lot of resources for learning about other tools, and we stick to what we know.
CCPR: What are the barriers to implementing outcomes-based care?
Dr. Krishna: I break them into three categories: 1) psychological, 2) infrastructure- and workflow-related, and 3) economic. Psychological barriers have to do with clinician trust; infrastructure is about the mechanisms for delivering, scoring, and reporting the tests; workflow has to do with how you get the testing into the patient care experience; and economic barriers pertain to managing the costs of implementing the use of the tests.
CCPR: Tell us more about clinician trust in outcomes-based care.
Dr. Krishna: It’s the idea that outcome measures are extraneous because clinical assessment and clinical judgment are better, or that measures are dangerous because they might replace that clinical assessment and judgment. Neither is true. Research shows that clinical assessment isn’t as good as we think it is, and no one can realistically argue that a self-reported outcome measure in psychiatry has the diagnostic precision to eliminate clinical judgment (Hatfield D et al, Clin Psychol Psychother 2010;17(1):25–32).
CCPR: So how should we think about these measures?
Dr. Krishna: Think about these assessments like lab values. They augment clinical decision making but they don’t replace it. We all learned in medical school to treat the patient, not the lab value, and not to order a lab study unless we know what we are looking for and how we are going to use the result. But we also don’t reject all labs just because some are not valid in a particular situation.
CCPR: Got an example?
Dr. Krishna: Sure. The PHQ-9 is well validated, but a patient who maximizes symptoms might consistently report high scores on the PHQ-9 that don’t correlate with actual pathology (Hannan C et al, J Clin Psychol 2005;61(2):155–163). When a tool stops being useful for a patient, we use our clinical judgment, set aside the results, and document why. But we don’t generalize to say that the PHQ-9 is never useful for anybody. On average it provides us with very useful information.
CCPR: What issues come up when health systems bring outcomes-based care to scale?
Dr. Krishna: We’ve been rolling this out in our large behavioral health program with psychiatrists, clinical social workers, counselors, and psychologists. Everyone comes to the table with different training and experience. Different disciplines also bring different concerns about using these tools. In my program psychiatrists are the most willing to accept the assessment data, but also the most willing to dismiss information if they don’t find it useful. Clinical social workers are reluctant to dismiss information from rating scales when there’s a mismatch with clinical assessment, and our psychologists are concerned about even administering assessments that may not be fully validated for a given age or population. At the end of the day, scales are meant to help us think about the case to bring our clinical judgment to bear, not to tell us what to do. If you convince practitioners of this, it strips away many psychological barriers.
CCPR: What are some examples of workflow barriers?
Dr. Krishna: There is the practicality of getting the information. Anybody who’s tried to get an NICHQ Vanderbilt Assessment to a teacher through the parent knows that there is perhaps a 20% success rate. Even if you do manage to get the scale filled out, manually scoring a Vanderbilt is not fun. Trying to manually score a Screen for Child Anxiety Related Disorders (SCARED) is even less fun.
CCPR: Any solutions?
Dr. Krishna: For barriers related to time spent scoring, technology could come into play. For example, the NIH offers the Patient-Reported Outcomes Measurement Information System (PROMIS), where you can download a bunch of measures for free (www.commonfund.nih.gov/promis/index). They have an electronic version where you pay a nominal sum to put it on a device, after which it will do the scoring for you. Several companies do this, and EHR systems are incorporating patient-reported outcomes as well.
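To make the scoring burden concrete, here is a minimal sketch of the kind of automated scoring an electronic system can do once responses are captured on a device. The item count and total-score cutoff reflect commonly published SCARED scoring guidance (41 items scored 0–2, with a total of 25 or higher suggesting an anxiety disorder); the code itself is purely illustrative and is not any vendor’s or PROMIS’s implementation.

```python
# Illustrative sketch of automated scale scoring (not any vendor's API).
# Assumes SCARED-style items scored 0-2; the 41-item count and the
# total-score cutoff of 25 reflect commonly published SCARED guidance.

from typing import Sequence

SCARED_ITEM_COUNT = 41
SCARED_CUTOFF = 25  # a total of 25 or higher is commonly cited as suggesting an anxiety disorder

def score_scared(responses: Sequence[int]) -> dict:
    """Sum item responses and compare the total against the published cutoff."""
    if len(responses) != SCARED_ITEM_COUNT:
        raise ValueError(f"Expected {SCARED_ITEM_COUNT} item responses, got {len(responses)}")
    if any(r not in (0, 1, 2) for r in responses):
        raise ValueError("Each item is scored 0, 1, or 2")
    total = sum(responses)
    return {"total": total, "above_cutoff": total >= SCARED_CUTOFF}

if __name__ == "__main__":
    example_responses = [1] * SCARED_ITEM_COUNT  # hypothetical responses, total = 41
    print(score_scared(example_responses))       # {'total': 41, 'above_cutoff': True}
```

Handing this arithmetic to software is exactly the step that turns a tedious manual task into something that is finished before the visit starts.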
CCPR: What about patient barriers?
Dr. Krishna: We have problems with folks not completing assessments at home or showing up late and not being able to complete them at the office. We incorporate time for the assessment into the patient’s visit. For example, the patient is told to come in at 2:00, but the provider’s schedule will say 2:15. We do tell them that they will do assessments prior to seeing their provider.
CCPR: Do they show up early?
Dr. Krishna: Yes. If you get the workflow right and you approach it right clinically, patients and families will complete the assessments because they see them as valuable. It’s important to show patients that you are using these scales. If you get bloodwork done for your PCP but don’t know why you need it and your PCP never talks about it, you might not get the bloodwork done the next time it is ordered.
CCPR: How do you talk with patients and families about the results?
Dr. Krishna: Assessments are great educational resources. You can say, “Hey, remember that assessment you filled out? Here’s what it says about your depressive symptoms. Here’s why you should be getting treatment for this disease.” I tell families that these assessments are like getting your blood pressure during a physical. They help us have a more objective sense of how you are doing, so that we can look at that over time and really plan treatment with you.
CCPR: These assessments take time and personnel hours to process. What about the economic barriers to implementation?
Dr. Krishna: Technological solutions cost money to build, run, and maintain. Paper takes manpower too. You can’t get around the resource costs, but you can recover some of them. For example, there are CPT codes that reimburse for reviewing clinical measures, maybe $4–$7 per measure each time the code is used, which adds up over time. Some payers cover it, but then you may need to pay someone to track recovered costs. As a field we need to advocate for payers to reimburse providers who have put in the work.
CCPR: How do these measures work in value-based care?
Dr. Krishna: In value-based care, payers might not reimburse you unless you show improvement on certain measures. They may stop paying or require extra reviews for a particular patient if a measure shows insufficient improvement. Measures should not replace clinical judgment, so an insurance company relying entirely on the numbers is problematic. Engage with insurance companies to shape their plans rather than waiting for them to define how these measures will be used.
CCPR: This sounds easier to implement in larger institutions.
Dr. Krishna: Regulatory bodies have been a major driver here. The Joint Commission requires behavioral health institutions under its purview to do outcomes-based care. Larger institutions have resources, but they also have more complicated workflows and stakeholders who need to work together. The costs also grow with bigger numbers, which shapes how you approach the problem. For example, we have as many as 250,000 patient visits per year. We cannot use anything that costs money as a first-round assessment. We start with public domain tools.
CCPR: What are your favorite screening tools?
Dr. Krishna: I work on implementation and I am not an expert on the measures. But I can say that we use the Vanderbilt as a first-line assessment for ADHD symptoms. The PHQ-9 is a great starting place for depressive symptoms. We are using the Pediatric Symptom Checklist-17 (PSC-17) as a broad assessment and as a universal measure of progress in treatment (www.depts.washington.edu/dbpeds/Screening%20Tools/PSC-17.pdf). We use the SCARED for anxiety, the B-CMRS-P for mania symptoms (www.brainandwellness.com/accordian/upload_file/CMRS-P_followup.pdf), and Car, Relax, Alone, Forget, Friends, Trouble (CRAFFT) for substance use (www.crafft.org) (Jeffrey J et al, Child Adolesc Psychiatr Clin N Am 2020;29(4):601–629). I work primarily on the inpatient side, which is an added challenge because all of the measures we are talking about assess symptoms over weeks, not the days of a typical inpatient hospitalization.
CCPR: What about cultural biases in these assessments? The Ainsworth Strange Situation attachment assessment, for example, classifies the bulk of northern European children as having aloof attachments.
Dr. Krishna: Some scales have been validated in different populations. But translations are not necessarily validated in the new population. If the results are not consistent with what you are seeing in a clinical evaluation, find a different tool. Look at the patient. When you develop an assessment plan, make sure it makes sense. The goal is not to administer a particular scale at a particular frequency. The goal is to have a consistent way of measuring how your patient is doing so that you can set a clear treatment target and track your progress toward it.
CCPR: Thank you for your time, Dr. Krishna.