

Interpreting Assessment Discrepancies from Multiple Sources



In this episode, we will discuss why assessment discrepancies arise, how we can manage them, and what these discrepancies tell us about a patient’s symptoms and response to treatment.

CME: Podcast CME Post-Tests are available using this subscription. If you have already enrolled in that program, please log in.

Published On: 10/17/22

Duration: 23 minutes, 04 seconds

Transcript:

Dr. Feder: As clinicians, we receive assessments about our child and adolescent patients from multiple sources, such as parents, teachers, clinical staff, and trained observers. And when interpreting these assessments, there’s a common tendency to suppress or discount discrepancies and focus on commonalities among the reports. But do discrepant reports add anything of value to our diagnoses and treatment plans? The short answer is yes, they do! And in this episode, we will discuss why these discrepancies arise, how we can manage them, and what they tell us about a patient’s symptoms and response to treatment.

Welcome to The Carlat Psychiatry Podcast.

This is another episode from the child psychiatry team. 

I’m Dr. Josh Feder, The Editor-in-Chief of The Carlat Child Psychiatry Report and co-author of The Child Medication Fact Book for Psychiatric Practice and the brand-new book, Prescribing Psychotropics.

Mara: And I’m Mara Goverman, a Licensed Clinical Social Worker in Southern California with a private practice. 

Dr. Feder: We have some exciting news for you! You can now receive CME credit for listening to this episode and all new episodes going forward on this feed. Follow the Podcast CME Subscription link in the show notes to get access to the CME post-test for this episode and future episodes.

Mara: Parents tend to report disruptive symptoms such as overactivity, while kids independently tend to report internalizing ones such as depression and anxiety. It’s a ubiquitous phenomenon. Back in the 1950s, Lapouse and Monk created an interview to assess the base rates of psychiatric symptoms, with parallel items for parents and children to complete. The two groups gave very different reports about the children’s psychiatric symptoms.

Dr. Feder: The same is true when you compare child reports with those of teachers, clinical staff, trained observers, and even peers. In 1987, Achenbach and colleagues found a correlation of 0.28 between informants such as parents, kids, and teachers, which is quite low. And in 2015, Dr. De Los Reyes and colleagues repeated that meta-analysis with newer studies and found the same 0.28 correlation.
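To make that 0.28 figure concrete, here is a minimal sketch of how a cross-informant correlation is computed. The ratings below are hypothetical, not data from the studies cited: each child receives a symptom severity score from two informants, and the Pearson correlation across children quantifies how closely the informants agree.

```python
# Minimal illustration with hypothetical data: how a cross-informant
# correlation (like the ~0.28 figure discussed above) is computed.
from statistics import correlation  # Python 3.10+

# Symptom severity ratings for the same ten children from two informants
parent_ratings = [12, 18, 7, 22, 15, 9, 20, 11, 16, 14]
teacher_ratings = [8, 10, 14, 12, 20, 6, 9, 17, 11, 13]

# Pearson r across children: values near 1 mean the informants rank the
# children similarly; values around 0.3 reflect substantial disagreement.
r = correlation(parent_ratings, teacher_ratings)
print(f"Parent-teacher correlation: r = {r:.2f}")
```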

Mara: Discrepancies between reporters are a global phenomenon. A recent meta-analysis found these discrepancies in every assessment studied, across thirty countries, on seven continents, and in every language tested. The consistency of this discrepancy effect rivals that of the placebo effect. And reporter discrepancies are not limited to subjective measures; they occur on objective ones too.

Dr. Feder: That’s right, Mara. You see these discrepancies regardless of how well established an instrument is. Discrepancies emerge when informants are distressed or depressed, or when they do not understand what is being assessed. No matter how carefully we develop these instruments, we cannot remove their sensitivity to the characteristics of the person doing the reporting, so the discrepancies persist.

Mara: Dr. Andres De Los Reyes is the Editor-in-Chief of the Journal of Clinical Child and Adolescent Psychology, a Professor of Psychology, and the Director of the Comprehensive Assessment and Intervention Program at the University of Maryland. In our interview with Dr. De Los Reyes, we asked him how we can use these discrepant reports to our advantage, and here’s what he had to say.

Dr. De Los Reyes: The reason these phones track our locations so well is that they’re linked to satellites that are strategically placed around us. They triangulate on us. That’s where the action is with these information sources. If you think about them that way, you don’t get an accurate read on a patient’s mental health status by lumping all your satellites in one place or picking your favorite satellite. The trick is to triangulate: to place satellites in spaces where, when they give you different estimates, you say to yourself, this is what I planned. This is what I expected to observe. Now I can make an informed decision.

And so, the trick from a measurement standpoint is to think about it kind of like, I’m not plotting latitudes and longitudes here. I’m not trying to figure out which restaurant to go to. Rather, I have to think about the factors, those latitudes and longitudes, that I think reliably predict the discrepancies. In the case of youth mental health, those factors often come down to where the informants are seeing stuff and from what perspective they’re seeing it. So, if you think about it that way, in most of our assessment occasions you want to vary your information sources so that you get a couple of really good observers – authority figures who are looking at the patient from that vantage point – and, if the patient is old enough, the patient’s own perspective. And by old enough, you can go pretty young if you use something like the Berkeley Puppet Interview, for instance; we’re talking just a few years old.

And if you vary the information sources in terms of the contexts in which they observe behavior, like home and school, or, in the case of peers, home versus not at home – you know, home with adult authority figures or siblings versus nonparental, nonfamilial, same-age sources – and they differ, then you’re able to say to yourself, this isn’t that uncertain for me. I have some good reasons for thinking these sources would disagree, and that’s how you get a few steps closer to making good decisions: by essentially building the discrepancy into the process and then walking away from it saying, I planned this. This is what I planned to do.

Mara: How do we know if a discrepancy is real?

Dr. De Los Reyes: This is embedded in texts by Skinner, the notion that contingencies may vary by context. And if that’s true, then you might expect certain kinds of behaviors to be present in some contexts and absent, or present to a lesser degree, in others. And if our information sources covary with those changes across contexts, then theoretically you could use that information to decide where to target intervention. You know, what to watch and let sit for a while: it looks like maybe functioning is okay here, but let me monitor it over time to see if it gets worse. And it looks like most of the action is in this particular context, so let’s focus on this for a while and then see if we were right.

And the right part is, I think, one of the challenges in making sense of the discrepancies: our team doesn’t think that all discrepancies are created equal. Some of these discrepancies might very well be junk, and it’s a little tougher to get a sense of which discrepancies are useful and which are not when you’re dealing with individual cases. You might very well have circumstances where an informant simply had a bad day and the instrument didn’t perform the way you were hoping.

And that’s the place where I say to my team, this is a question of trust, but verify. So, you have the information sources. That’s great. Now, if you think these discrepancies are going to manifest in such a way that the patient might behave differently at home versus school, then the trick is figuring out what kinds of independent assessments, numbers I’m getting apart from these sources, are going to help me corroborate whether the discrepancies are real or whether I’m just getting noise.

So, if you see a circumstance where the teacher is endorsing a behavior that the parent isn’t, or you’re getting a sense from the referral question that maybe this is a school problem and not so much a home problem, then the question is, what other pieces of information can I gather? You know, grades and school records, versus a short observation of how the family interacts at home, like a short visit. That principle isn’t all that different from how we conduct neuropsych assessments or educational assessments, where we take holistic, multimodal approaches. The notion here is basically repurposing stuff we already do in other clinical circumstances.

Because here’s the thing. If you don’t get those independent sources and the informants tell you different things, your mind can go everywhere. That informant’s no good. That informant’s depressed. That informant didn’t tell me something consistent with the referral question. And when you see discrepancies and all the assessor has is the informant ratings, there’s some data in our lab and in other labs indicating that assessors just tend to agree more with the referral informant. And the referral informant might be right. But we also know that behavior can vary across contexts, and no one information source has a complete picture of the totality of how the patient behaves across circumstances. We just don’t.

Mara: Okay, so when we notice discrepancies in reports from multiple sources, we need to gather more information to verify these discrepant reports. And, this will help us understand our patient’s clinical presentation in different contexts. But, can discrepant reports tell us anything about treatment response?

Dr. Feder: They definitely can! Following these discrepancies gives us another way to track treatment response: look at whether the discrepancies change over time. For example, you tend to get more agreement between parents and teachers of autistic children when the challenges are more severe. But successive reports can diverge over time, with one informant or the other seeing less severity of symptoms. Growing disagreement between informants can signal that the child’s functioning is improving.
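As a rough illustration of that idea, here is a sketch of tracking the parent-teacher gap across successive assessments. The scores are hypothetical and the gap is not a validated metric; a widening gap alongside an improving teacher score is simply a prompt to gather more information, as Dr. De Los Reyes suggests above.

```python
# Hypothetical illustration: following a parent-teacher discrepancy
# across successive assessments during treatment.
visits = ["baseline", "month 1", "month 2", "month 3"]
parent_scores = [24, 23, 22, 22]    # parent-rated severity (higher = worse)
teacher_scores = [25, 20, 15, 11]   # teacher-rated severity

for visit, p, t in zip(visits, parent_scores, teacher_scores):
    gap = p - t
    print(f"{visit:>8}: parent={p:2d}  teacher={t:2d}  discrepancy={gap:+d}")

# A growing positive gap suggests improvement at school that the parent
# is not yet seeing at home: a cue to collect independent data (grades,
# observations), not a conclusion in itself.
```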

Mara: How can we explain this phenomenon to parents? I can imagine it could be concerning for a parent when their child’s teacher reports improvement but they don’t see any meaningful change at home.

Dr. Feder: Yeah, we need to normalize this process. It’s like asking people to estimate how many marbles are in a jar. There’s an objective answer but each person will estimate differently. And different people rating behavior can have even more divergent views about it. For instance, the same child with the same strengths and challenges can get along great with one teacher and not at all with another. That might have to do with the teacher or the situation or both.  

Mara: To diagnose ADHD, we want to see ADHD symptoms present across two or more settings. So, how is this diagnostic criterion affected by discrepant reports from a child’s parent and teacher?

Dr. De Los Reyes: There isn’t any strong data to indicate that agreement on these symptoms tracks impairment to such a degree that the only kind of ADHD case we should be diagnosing is the one where parents and teachers agree.

And sometimes, assessors don’t even use the two informants. They’ll take one of them, usually the parent because they’re the referral source, and use them as a proxy for how the child’s behaving at home and how the child’s behaving in a non-home context, typically school. And we know from research that parents’ reports of home stuff don’t correspond with teachers’ reports of home stuff, and teachers’ reports of school stuff don’t correspond with parents’ reports of school stuff. They’re in different places. They have different layers of expertise. I think of them less like informants and more like consultants, by the way. I think of them like consultants because these information sources, parents especially, quite literally have an expertise that I as an assessor lack unless I’m just hanging out with them all the time.

So, it’s a glaring hole in our evidence base. Where are the effect sizes, from several studies by really great teams, of parent-teacher agreement in ADHD symptoms and impairment indices on independent assessments, stuff where the parents and teachers weren’t even involved? You know, school grades, peer difficulties on the playground. Where is that data? Because the question for this issue is the following: are the differences in impairment between these agreement and disagreement groups so large that the only group I would wind up caring about at the end of that study is the agreement group? I would venture to guess that if we had a bunch of really good studies on this topic and then did a meta-analysis afterwards, we’d come to the conclusion that we’re missing out on scores of kids who would benefit from care, because we’re just chucking aside the informant-specific patients, because we don’t think they rise to the level of a diagnostic threshold. I don’t think we have a good answer to that yet.

Dr. Feder: To sort this out, look at independent assessments like grades or observations of peer relations. Continuous performance testing (CPT) is not diagnostic per se, but it can give additional data about the child’s attention and impulsivity. If the outside evidence weighs toward ADHD, then the child may benefit from treatment. Otherwise we risk missing kids who would benefit from care simply because they don’t show symptoms in two settings and therefore don’t meet the diagnostic threshold.

Mara: What about kids with depression, who might appear ok when they are in the company of people but suicidal when they are alone?

Dr. Feder: For a long time, we’ve been saying that getting kids active with other people doesn’t bring much change in depression. But in the meta-analyses there’s a discrepancy: the children report positive effect sizes that are several times larger than what the parents report. 

Mara: We tend to think, well, if parents aren’t seeing a big change, then how much stock can we put into the kids’ reports? I think these kids are telling us that CBT does help them with some problems, like when they are with their peers, but not necessarily at home when they are with their parents. However, we need carefully conducted studies with independent assessments of how children behave with peers over the course of treatment to truly sort this out.

Dr. Feder: That reminds me of kids who are passive and anxious when I see them with their parents but less anxious when I see them alone. 

When you have a child referred for assessment of anxiety, their parent’s report often indicates that the child is avoiding social situations. He won’t go to birthday parties, refuses most play dates, and fusses when he’s supposed to go to soccer practice. Then you see the child, who says he’s fine with other kids and has no problem doing any of these activities. This is the opposite of our usual expectation that kids will report anxiety symptoms that parents don’t notice because they are not disruptive. Here’s Dr. De Los Reyes’s explanation for why this discrepancy might arise.

Dr. De Los Reyes: When we’re looking at these multi-informant assessments for internalizing states, we tend to focus a lot on social anxiety. And in that context, if you don’t have those independent, corroborating assessments, it’s so easy to discount that kid’s report. So easy to discount the patient’s report. Because we know that a core feature of social anxiety is fear of negative evaluation. You don’t want to look bad, especially in front of strangers! You know, like assessors. And so, when the parents’ and kids’ reports differ, it’s easy for your mind to go, oh, I know why they differ. The kid’s downplaying the concerns. They don’t want to look bad. It’s part of the clinical presentation.

That’s possible. It’s possible with some patients. But the take-home message from our own work, when we’ve compared these assessments to independent assessments of how kids behave in non-home contexts, the contexts that parents aren’t privy to, is that these kids’ reports are giving us a mix of what’s happening at home and what’s happening outside the home, the kind of stuff that parents’ ratings often can’t predict all that well.

Mara: Ultimately, when this discrepancy occurs, we need to look for more data. The same goes for similar discrepancies in different disorders. See if you can find out from peers and teachers how the child is doing. Maybe the child is standing back and not engaging on the school playground even though the child reports that they are ‘fine’ in that situation. Then you have a better idea that the child is indeed anxious.  

Dr. Feder: We need to take cultural and racial influences into consideration when assessing discrepant reports. If you have a non-white child who is having learning difficulties, what do you usually do? You tell the family to ask the school for psycho-educational testing. Okay, so let’s say the testing comes back and shows that the child has problems with reading comprehension. She doesn’t do well answering questions about the standard stories on reading comprehension tests, like building and floating boats at the local park or ‘pet day at the fair.’ So the school gives the student extra help, but three months later you see the child back and she’s doing no better than before. Parents from different cultures or communities may also over- or under-report symptoms. And our tests have been normed on groups of patients with very similar backgrounds.

Mara: That means that they might not be as accurate for people with different cultural, racial, and ethnic backgrounds. So if the tests indicate that this student is struggling, it might be because she has not been exposed to the information or experiences she needs to be successful on those tests. She might perform well if reading comprehension tests focus on other kinds of social events that are more relevant to her day-to-day experiences. 

So, Dr. Feder, what about the inequities and discrepancies surrounding the pandemic? How does that affect the information we receive about our patients from multiple sources? 

Dr. Feder: So, that’s really complicated, Mara. We know that people of lower socioeconomic status have been more heavily impacted by the pandemic. They have to go in to work more. Many of these communities have also been impacted by racial strife, including police violence… We need to inquire about the social determinants that are affecting our patients, and oftentimes we don’t ask about them…

Dr. Feder: If you suspect that certain discrepancies are associated with cultural and racial influences, then seek assistance with culturally sensitive assessment specific to that child and family by talking with colleagues and using specific tools for thinking through the problem. Check out our article on Cultural Competence in the January/February/March 2021 issue of The Carlat Child Psychiatry Report for more information on how to navigate cultural and racial differences that can affect our clinical assessments.

Mara: To wrap-up this episode, we asked Dr. De Los Reyes for his bottom-line message to clinicians. 

Dr. De Los Reyes: For me, the bottom line is a couple of things. One is that you’re bound to encounter discrepancies. It’s unlikely that you’ll create a set of circumstances where you get everybody on the same page. So it’s important, if you do expect these discrepancies to happen, to set up the assessments so that they reliably manifest as a function of things that you as a service provider care about: where stuff happens, and who thinks these issues are a problem.

But then beyond that, it’s important to also take into account the possibility that the discrepancies might not be leading you in the right direction. So, you go back to trust, but verify: gathering information sources that allow you to corroborate whether there’s something real here, versus it’s just noise and I have to find new information sources or home in on who’s suggesting there’s a problem. The one danger with just focusing on who’s saying there’s a problem is that you run the risk of thinking, somebody says there’s a problem, now let me just treat it, where treating it means assuming the problem is everywhere. It might not be. It might be localized to the particular circumstances that that information source is privy to.

But I think that requires augmenting the instruments that we often collect with additional information sources – unbiased information sources where the scores aren’t dependent on those same people.

Dr. Feder: The newsletter clinical update is available for subscribers to read in The Carlat Child Psychiatry Report. Hopefully, people will check it out. Subscribers get print issues in the mail and email notifications when new issues are available on the website. Subscriptions also come with full access to all the articles on the website and CME credits. 

Mara: And everything from Carlat Publishing is independently researched and produced. There’s no funding from the pharmaceutical industry. 

Dr. Feder: Yes, the newsletters and books we produce depend entirely on reader support. There are no ads and our authors don’t receive industry funding. That helps us to bring you unbiased information that you can trust. 

Mara: Go to www.thecarlatreport.com to sign up. You can get a full subscription to any of our four newsletters for $30 off using the coupon code LISTENER.

And don’t forget, you can now earn CME credits for listening to our podcasts. Just click the link in the description to access the CME post-test for this episode.

As always, thanks for listening and have a great day!

