You’ve invested in a targeted HCC coding refresher to improve accuracy, boost RAF scores, or support clinicians. Maybe it was a brief CME module, a full-day workshop, or an in-app coding education tool. Now what?
Too often, coding retraining efforts are launched without a clear plan to evaluate impact. But in a value-based care environment where risk adjustment accuracy directly affects revenue, compliance, and care quality, knowing whether your refresher worked is not optional—it’s essential.
In this article, we’ll walk through how to assess the effectiveness of an HCC coding targeted refresher. We’ll cover:
- What success looks like (and how to define it upfront)
- The most telling KPIs to track post-retraining
- How to structure a pre/post analysis
- Signs your refresher didn’t work—and what to do next
But before you dive into metrics or charts, take a step back: What were you hoping to change with this targeted refresher?
Define what “success” means for your organization
Before diving into dashboards or reviewing MEAT compliance, zoom out: what problem did you want this targeted refresher to solve?
Some organizations retrain clinicians on HCC coding because:
- RAF scores were declining without a clear cause
- Certain chronic conditions were consistently under-coded
- Coding audits revealed documentation gaps (especially MEAT-related)
- V28 model updates created confusion
- New clinicians weren’t coding at the same level as their peers
The refresher may have aimed to:
- Improve MEAT-compliant documentation
- Reduce dropped or unspecified codes
- Increase visibility of high-impact HCCs like diabetes with complications or depression
- Boost provider confidence at the point of care
Tip: Revisit your original “why” for doing the refresher. That goal should guide your metrics and evaluation method.
Now, let’s explore more immediate metrics you can track: coding completeness, documentation quality, suspected HCCs reviewed, and how often coders or auditors are flagging issues. These will help you monitor changes in real time.
Choose the right metrics (beyond RAF alone)
It’s tempting to look only at changes in RAF scores, but RAF is a lagging indicator. Many other metrics will show change faster and more clearly. Consider tracking:
✅ HCC capture rate
- Are clinicians documenting more unique HCCs per patient compared to pre-training?
- Did the number of documented chronic HCCs rise to match the prevalence expected in your patient population?
✅ MEAT compliance rate
- What percentage of notes show full MEAT criteria (Monitor, Evaluate, Assess, Treat)?
- Are clinicians adding clear supporting language for HCCs?
✅ Diagnosis specificity
- Did unspecified codes (e.g., E11.9 or F32.9) decrease?
- Are more diagnoses being coded with full detail, including all necessary companion codes? E.g., type 2 diabetes with kidney complications should be coded as E11.22 and paired with a corresponding CKD stage code, such as N18.4 for stage 4 CKD.
✅ Audit pass rate
- Has the rate of successful internal or external coding audits improved since retraining?
- Are fewer diagnoses being rejected due to insufficient documentation?
✅ Clinician engagement
- Did documentation behavior change in high-risk groups (e.g., new hires or low performers)?
- Are more clinicians completing documentation on time or using point-of-care coding tools?
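To make these KPIs concrete, here is a minimal Python sketch of how a few of them might be computed from a flat, chart-level extract. The file and column names (patient_id, hcc_category, note_id, meat_complete, audited, audit_passed) are hypothetical placeholders, not references to any specific system; adapt them to your own data warehouse.

```python
import pandas as pd

# Minimal sketch: computing a few of the KPIs above from a flat extract
# with one row per documented diagnosis. All column names are hypothetical.
charts = pd.read_csv("chart_extract.csv")

UNSPECIFIED = {"E11.9", "F32.9"}  # extend with the unspecified codes you track

# HCC capture rate: average number of unique HCCs documented per patient
hcc_per_patient = (
    charts.dropna(subset=["hcc_category"])
          .groupby("patient_id")["hcc_category"].nunique()
          .mean()
)

# MEAT compliance rate: share of notes where all four MEAT elements are present
meat_rate = charts.groupby("note_id")["meat_complete"].first().mean()

# Diagnosis specificity: share of diagnoses coded with an unspecified code
unspecified_rate = charts["dx_code"].isin(UNSPECIFIED).mean()

# Audit pass rate: share of audited charts that passed review
audit_rate = charts.loc[charts["audited"], "audit_passed"].mean()

print(f"Avg unique HCCs per patient: {hcc_per_patient:.2f}")
print(f"MEAT-complete note rate:     {meat_rate:.1%}")
print(f"Unspecified-code rate:       {unspecified_rate:.1%}")
print(f"Audit pass rate:             {audit_rate:.1%}")
```

However you define these measures, lock the definitions down before the refresher launches so that pre- and post-training numbers stay comparable.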
These indicators can serve as a north star, but metrics alone won’t tell you everything. To know whether the refresher truly made a difference, you also need to compare clinician performance before and after the training period.
Run a before-and-after analysis
To truly know if your targeted refresher worked, you need a controlled comparison. Here’s how to set that up:
Step 1: Define your time periods
Choose a reasonable “before” window (e.g., 3–6 months pre-retraining) and an “after” window (e.g., 3 months post-retraining), optionally excluding a 2-week buffer immediately after training while new habits take hold.
Step 2: Segment by clinician group
Not all providers may have participated equally. Segment your data by:
- Participation: clinicians who completed the refresher vs. those who didn’t
- Region or clinical specialty, to account for differences in patient populations and expected coding
- Performance tier (low vs. high coding accuracy)
Step 3: Compare changes over time
Track the delta in metrics like:
- Average HCCs per patient
- Percent of charts with full MEAT documentation
- Drop in unspecified or rejected codes
Step 4: Normalize for visit volume
Make sure changes aren’t due to simple increases or decreases in patient volume. Normalize metrics per 1,000 visits or per clinician FTE if needed.
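As a rough illustration of Steps 1 through 4, here is a minimal pandas sketch that labels visits as pre- or post-refresher, segments by participation, and normalizes key metrics per 1,000 visits. The training date, buffer length, and column names (completed_refresher, hcc_count, unspecified_dx) are assumptions; substitute your own.

```python
import pandas as pd

# Minimal sketch of the pre/post comparison. Column names are hypothetical;
# map them to your own visit-level extract.
visits = pd.read_csv("visit_extract.csv", parse_dates=["visit_date"])

TRAINING_DATE = pd.Timestamp("2025-01-15")   # example go-live date
BUFFER = pd.Timedelta(weeks=2)               # Step 1: optional settling-in buffer

# Step 1: label each visit as pre- or post-refresher
visits["period"] = pd.NA
visits.loc[visits["visit_date"] < TRAINING_DATE, "period"] = "pre"
visits.loc[visits["visit_date"] >= TRAINING_DATE + BUFFER, "period"] = "post"
visits = visits.dropna(subset=["period"])

# Steps 2-4: segment by participation, compare metrics, and normalize
# per 1,000 visits so volume shifts don't mislead you
grouped = visits.groupby(["completed_refresher", "period"])
summary = grouped.agg(
    visits=("visit_date", "size"),
    hccs_per_1k=("hcc_count", lambda s: 1000 * s.sum() / len(s)),
    unspecified_per_1k=("unspecified_dx", lambda s: 1000 * s.sum() / len(s)),
)

# Delta = post minus pre, within each segment
delta = summary.xs("post", level="period") - summary.xs("pre", level="period")
print(summary, "\n\nChange after the refresher:\n", delta)
```

Treating clinicians who skipped the refresher as a comparison group, as in the segmentation above, also helps separate the training effect from seasonal or payer-mix shifts.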
Not all improvements show up in volume, though. Sometimes the real gains are in the quality of what gets written.
Look at progress note quality, not just quantity
Let’s say your average RAF score hasn’t budged. That doesn’t mean your refresher failed.
Dig into note-level quality. Are clinicians:
- Linking conditions to treatment plans?
- Documenting chronic conditions, even if not the visit reason?
- Using templated language but customizing for each patient?
Here’s a hypothetical example of what to compare pre- vs. post-refresher: before training, a note might read “DM2, stable” with no plan documented; after training, the same clinician documents “Type 2 diabetes with diabetic CKD stage 4 (E11.22, N18.4): A1c and renal function reviewed, medications adjusted, nephrology follow-up scheduled,” showing the condition being monitored, evaluated, assessed, and treated.
These kinds of improvements show behavioral change that can lead to long-term ROI, even if payment models haven’t caught up yet. Keep in mind, though, that if the refresher felt irrelevant, time-consuming, or confusing, it likely didn’t stick.
Use qualitative feedback from clinicians
Numbers matter—but so does narrative. Talk to the clinicians who went through the targeted refresher.
Ask:
- Has it changed how they approach documentation?
- Do they feel more confident identifying chronic conditions?
- What barriers still prevent full MEAT documentation?
- Is there clarity on which HCCs changed under V28?
These insights can reveal:
- Where the training stuck
- Where it needs reinforcement
- Whether tools and workflows are supporting the knowledge
Clinician feedback gives you direction: it helps you adapt future refreshers to feel less like “retraining” and more like ongoing support, and it reveals what still isn’t translating into daily practice. But don’t stop there. The real value comes when you turn that feedback into action.
Identify gaps and opportunities
If your analysis shows little to no improvement, don’t throw out the whole refresher. Ask:
- Was the format right? (Live session vs. app-based? Self-paced vs. coached?)
- Did all clinicians actually complete it?
- Were the examples relevant to their specialty?
- Was it too broad—or too detailed?
- Did it include V28-specific updates or real case examples?
In many cases, the issue isn’t the content—it’s retention and application.
Consider:
- A second-phase refresher with new cases
- Embedding reminders or nudges in the EHR
- Highlighting small wins (e.g., showing how one note change raised a RAF score)
Even if this round wasn’t perfect, every effort gives you clues for what to do next.
Pinpointing what didn’t work is just the beginning. Once you understand where the gaps are—format, relevance, or retention—you can start designing smarter refreshers. But even the most engaging training won’t stick without the proper infrastructure. Next, we’ll look at the tools that help clinicians turn learning into lasting behavior change at the point of care.
Use tools that support post-training behavior change
Even the best targeted refresher can fall flat if clinicians don’t have time or tools to apply what they learned.
After retraining, it helps to:
- Offer a point-of-care assist platform that flags missed diagnoses or vague codes
- Run automated chart reviews to identify trends in MEAT compliance
- Share monthly feedback reports with each provider, showing their coding patterns
These tools reinforce learning, support compliance, and reduce reliance on memory alone.
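As one illustration, here is a minimal, rules-based sketch of what an automated chart-review pass could look like, flagging unspecified codes and the missing E11.22/N18.x pairing mentioned earlier. The rules and field names are illustrative only, not a description of any particular platform’s logic.

```python
import pandas as pd

# Minimal sketch of an automated chart-review pass that flags vague codes
# and one missing companion-code pairing (E11.22 should appear with a CKD
# stage code, N18.x). Rules and column names are illustrative only.
VAGUE_CODES = {"E11.9", "F32.9"}  # extend with the codes you audit

def review_chart(dx_codes: list[str]) -> list[str]:
    """Return human-readable flags for one chart's diagnosis codes."""
    flags = []
    for code in dx_codes:
        if code in VAGUE_CODES:
            flags.append(f"Unspecified/vague code documented: {code}")
    if "E11.22" in dx_codes and not any(c.startswith("N18") for c in dx_codes):
        flags.append("E11.22 documented without a CKD stage code (N18.x)")
    return flags

charts = pd.DataFrame({
    "note_id": [101, 102],
    "dx_codes": [["E11.9"], ["E11.22"]],
})
charts["flags"] = charts["dx_codes"].apply(review_chart)
print(charts[["note_id", "flags"]])
```

Output like this can feed the monthly per-provider feedback reports described above, turning review findings into specific, reviewable examples for each clinician.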
Of course, tools alone don’t create lasting change, but they do make it easier. When post-training support is embedded into daily workflows, it reduces friction and reinforces the right behaviors over time. Now that you’ve set the stage with education, feedback, and supportive systems, it’s time to zoom out and ask the bigger question: What does meaningful improvement in HCC coding look like, and how do you sustain it?
Final thoughts: It’s about more than just education
HCC coding improvement isn’t just about information—it’s about integration. A one-time targeted refresher can only go so far. But when it’s paired with behavioral nudges, aligned workflows, and consistent feedback loops, it becomes a real lever for lasting change.
So, how do you know your HCC coding targeted refresher made a difference?
You track the numbers. You listen to clinicians. You dig into the notes. And you stay committed to continuous improvement.
This is where DoctusTech makes a difference. Through its comprehensive insights and data metrics, DoctusTech helps organizations track and analyze coding patterns, measure improvements, and identify areas for further support. With access to actionable data, healthcare teams can make informed decisions that drive both compliance and quality outcomes.
Because in the end, accurate HCC coding isn’t just a compliance checkbox. It’s a proxy for strong documentation, coordinated care, and better outcomes for the patients who need it most. And with DoctusTech’s data-driven approach, organizations are equipped to continuously optimize their processes and achieve sustainable success.