In our last blog, we discussed the ambiguous nature of coders’ job descriptions. Today I want to look at how we can restore some order and eliminate much of that ambiguity and uncertainty.

Today’s coders and coding teams must be flexible and keenly aware of the differences in coding expectations across organizations. But beyond that, how do teams of coders consistently adhere to those expectations? And how does the consumer of the coded information know that their expectations have been met, and how to use the information accordingly?

After mulling over these issues, talking to many colleagues and experts, and working with our internal team, we’ve seen some clear patterns emerge:

  1. To what extent are coders expected to capture diagnosis codes? Only diagnoses relating to HCCs and RxHCCs? All diagnoses? Only the first DOS in a review period? All DOSs? Let’s call this the coding level.
  2. There are two schools of thought when it comes to HCC coding. The traditional view holds that every diagnosis must be supported by monitoring, evaluation, assessment, and/or treatment, plus a proper signature (MEAT). The second group believes that a chronic condition does not require additional supporting documentation, and that a missing or improper signature should not render a diagnosis non-submittable, since CMS permits attestations and the like (what we called the gray areas in the last blog). Let’s call this ‘Tracking documentation deficiencies with Validation Reason Codes (VRCs).’

Armed with these concepts, we can now instruct coders very specifically and explicitly on how to approach each medical record. At a minimum, you can specify the ‘Coding Level’ (we have defined three), telling coders exactly what they need to code and not code, and have them capture at least one VRC for each diagnosis (we use up to 27). VRCs are the sharpest tool in the toolbox; they cut through coding differences between teams, projects, and even vendor teams like butter. If a diagnosis isn’t 100% RADV-proof, a VRC is assigned to denote the possible issues (documentation deficiencies). Issues range from a missing provider signature, to a diagnosis appearing only in the assessment, to conflicting information within the date-of-service documentation.
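To make these two levers concrete, here is a minimal sketch in Python of how a captured diagnosis might carry both a coding level and its VRCs. The level names, VRC identifiers, and field names are hypothetical placeholders chosen for illustration, not our actual internal definitions.

```python
from dataclasses import dataclass, field
from enum import Enum


class CodingLevel(Enum):
    """Illustrative coding levels -- placeholders, not the actual internal definitions."""
    HCC_RXHCC_ONLY = 1   # capture only diagnoses that map to HCCs/RxHCCs
    FIRST_DOS_ONLY = 2   # capture diagnoses from the first DOS in the review period
    ALL_DIAGNOSES = 3    # capture every diagnosis on every DOS


@dataclass
class CodedDiagnosis:
    """One diagnosis captured from a medical record under a given coding level."""
    icd10_code: str
    date_of_service: str
    coding_level: CodingLevel
    # Every diagnosis carries at least one VRC; these identifiers are hypothetical,
    # standing in for the real set of up to 27 codes.
    vrcs: list[str] = field(default_factory=lambda: ["VRC00_NO_DEFICIENCY"])


# Example: a diagnosis flagged with a missing-signature deficiency.
dx = CodedDiagnosis("E11.9", "2023-03-14", CodingLevel.HCC_RXHCC_ONLY,
                    vrcs=["VRC01_NO_PROVIDER_SIGNATURE"])
```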

Once the VRCs are finalized, the results can be quickly and clearly grouped into diagnoses/HCCs/RxHCCs that should be submitted to CMS without any further review, those that should not be submitted, and those that need further evaluation.
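As a purely illustrative example, that grouping step could look like the standalone sketch below, where each finalized diagnosis is a plain record of an ICD-10 code plus its VRCs. The specific VRC-to-bucket mapping shown here is an assumption for demonstration, not a published rule set.

```python
# Hypothetical VRC-to-disposition rules; a real rule set would reflect the
# organization's risk tolerance and CMS/RADV guidance.
BLOCKING_VRCS = {"VRC01_NO_PROVIDER_SIGNATURE", "VRC02_CONFLICTING_INFO"}
REVIEW_VRCS = {"VRC03_ASSESSMENT_ONLY", "VRC04_CHRONIC_NO_MEAT"}


def triage(diagnoses: list[dict]) -> dict[str, list[dict]]:
    """Group finalized diagnoses (dicts with 'icd10' and 'vrcs' keys) into
    submit / do-not-submit / needs-review buckets."""
    buckets = {"submit": [], "do_not_submit": [], "needs_review": []}
    for dx in diagnoses:
        vrcs = set(dx.get("vrcs", []))
        if vrcs & BLOCKING_VRCS:
            buckets["do_not_submit"].append(dx)
        elif vrcs & REVIEW_VRCS:
            buckets["needs_review"].append(dx)
        else:
            buckets["submit"].append(dx)
    return buckets


# Example: a clean diagnosis goes straight to "submit"; an unsigned one is held back.
sample = [
    {"icd10": "E11.9", "vrcs": ["VRC00_NO_DEFICIENCY"]},
    {"icd10": "I50.9", "vrcs": ["VRC01_NO_PROVIDER_SIGNATURE"]},
]
print({bucket: [d["icd10"] for d in dxs] for bucket, dxs in triage(sample).items()})
```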

This goes a very long way toward addressing ambiguities for coders and management, and it delivers the best possible results (including audit protection). Not bad, huh?


About The Author

Reveleer is a healthcare-focused, technology-driven workflow, data, and analytics company that uses natural language processing (NLP) and artificial intelligence (AI) to empower health plans and risk-bearing providers with control over their Quality Improvement, Risk Adjustment, and Member Management programs. With one transformative solution, the Reveleer platform allows plans to independently execute and manage every aspect of enrollment, provider outreach, data retrieval, coding, abstraction, reporting, and submissions. Leveraging proprietary technology, robust data sets, and subject matter expertise, Reveleer provides complete record retrieval and review services, so health plans can confidently plan and execute programs that deliver more value and improved outcomes. To learn more about Reveleer, please visit Reveleer.com.