Automated Claims Adjudication

Use case of automated claims adjudication

Automated claims adjudication shall provide optional process steps for the automated processing of larger claim volumes once a claim has been submitted to the payer organisation. Three levels of verification should be available at the payer's side:

  1. Rule-based level: Automated verification of claims through the rule sets configured in the Configurable Claims Review Engine (a minimal sketch of such a rule check follows this list).

  2. Artificial Intelligence level: Automated verification of claims through a decision support model that was generated with machine learning algorithms.

  3. Manual level: Manual review of claims by reviewers at the payer's side.

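The actual rule sets live in the configuration of the Configurable Claims Review Engine; the following is only a hypothetical sketch of what one such automated rule check could look like, with the price ceilings, quantity limits and claim fields chosen purely for illustration:

    # Hypothetical illustration only: the real rules are defined in the
    # Configurable Claims Review Engine, not hard-coded like this.
    from dataclasses import dataclass, field

    @dataclass
    class ClaimItem:
        service_code: str
        quantity: int
        unit_price: float

    @dataclass
    class Claim:
        claim_id: str
        items: list = field(default_factory=list)

    PRICE_CEILING = {"CONS-01": 20.0, "LAB-05": 35.0}   # assumed reference prices
    QUANTITY_LIMIT = {"CONS-01": 1, "LAB-05": 3}        # assumed per-claim limits

    def rule_based_review(claim: Claim) -> list:
        """Return a list of rule violations; an empty list means the claim passes."""
        violations = []
        for item in claim.items:
            ceiling = PRICE_CEILING.get(item.service_code)
            if ceiling is not None and item.unit_price > ceiling:
                violations.append(f"{item.service_code}: price {item.unit_price} exceeds ceiling {ceiling}")
            limit = QUANTITY_LIMIT.get(item.service_code)
            if limit is not None and item.quantity > limit:
                violations.append(f"{item.service_code}: quantity {item.quantity} exceeds limit {limit}")
        return violations
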
Sample configuration of an automated claims adjudication process

The payer organisation shall have the choice of how to implement the claims adjudication process using Configurable Workflows. A possible implementation of an AI-supported claims adjudication process could be as follows (a code sketch of this flow follows the list):

  1. Health Care Provider enters one or more claims.

    1. each claim is verified online during data entry according to the rules from the Configurable Claims Review Engine.

    2. once the claim is verified, it is submitted to the payer.

  2. At the payer's side, the rule-based engine reviews all claims (which could have been submitted by an external system without prior verification) according to the rules from the Configurable Claims Review Engine.

    1. invalid claims are flagged for manual review.

    2. valid claims go to the AI level.

  3. The AI level verifies all claims that passed the previous rule-based step according to a previously trained decision support model.

    1. suspicious claims are flagged for manual review.

    2. a certain percentage of unsuspicious claims is retained for manual QA processes to estimate the validity of the AI model.

    3. all other unsuspicious claims are released for immediate payment.

  4. Human reviewers review flagged and QA claims from steps 2 and 3.

    1. either the flagged claim is cleared immediately and released for payment

    2. or the flagged claim is sent back to the health service provider in a claims dispute process

    3. or the claim is rejected directly

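The routing of a submitted claim through steps 2 to 4 can be summarised in a short sketch. The helper names (rule_violations, is_suspicious) and the QA sampling rate are assumptions of this illustration, not existing openIMIS interfaces:

    import random

    QA_SAMPLE_RATE = 0.05  # assumed share of unsuspicious claims retained for QA (step 3.2)

    def adjudicate(claim, rule_violations, is_suspicious):
        """Route one submitted claim through the verification levels of steps 2-4.

        `rule_violations` and `is_suspicious` stand in for the Configurable Claims
        Review Engine and the trained decision support model respectively.
        """
        # Step 2: rule-based review of all submitted claims
        if rule_violations(claim):
            return "manual_review"        # step 2.1: invalid claims are flagged
        # Step 3: AI review of the claims that passed the rules
        if is_suspicious(claim):
            return "manual_review"        # step 3.1: suspicious claims are flagged
        if random.random() < QA_SAMPLE_RATE:
            return "manual_qa"            # step 3.2: retained for the QA sample
        return "pay_immediately"          # step 3.3: released for immediate payment

    # Step 4 (the manual level) is a human decision; its possible outcomes are
    # "pay", "dispute" (back to the health service provider) or "reject".
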
The above example is one possible configuration; it assumes a reduction of costs for the insurance company by directly paying unsuspicious claims. Of course, this estimation is only valid when the loss through falsely released payments is far less than the cost of having human reviewers check those claims instead. To control this effect, the continuous QA process from step 3.2 is needed.
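
To make this condition concrete, the following back-of-the-envelope calculation uses purely assumed figures for claim volume, review cost and the rate of invalid claims that slip through:

    # Purely illustrative numbers to make the break-even condition concrete.
    claims_per_month      = 10_000
    auto_release_share    = 0.80      # share of claims released without human review
    review_cost_per_claim = 2.50      # assumed cost of one manual review
    avg_claim_value       = 15.00     # assumed average claim amount
    undetected_fraud_rate = 0.01      # assumed share of auto-released claims that are invalid

    saved_review_cost = claims_per_month * auto_release_share * review_cost_per_claim
    loss_from_false_release = (claims_per_month * auto_release_share
                               * undetected_fraud_rate * avg_claim_value)

    print(f"saved review cost:        {saved_review_cost:,.2f}")   # 20,000.00
    print(f"loss from false releases: {loss_from_false_release:,.2f}")  # 1,200.00
    # Automation pays off only while loss_from_false_release stays well below
    # saved_review_cost; the QA sample from step 3.2 is what keeps
    # undetected_fraud_rate observable over time.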

Technical requirements for the AI component

  • The AI component shall ideally be part of the official openIMIS distribution, but should be designed in a way that it can operate independently through data exchange via the FHIR APIs of openIMIS (a connection sketch follows this list).

  • It shall ideally be built using technologies from the openIMIS Target Technology Stack.

  • The component shall be designed in a way that allows the implementing organisation to:

    • Select suitable decision support models (e.g. decision trees, regression models etc.)

    • Define suitable factors (e.g. attributes) in the chosen models

    • Train the models with historical claims data from manual review processes (a training and explanation sketch follows this list)

  • The component shall allow an export/import of trained models as blueprints for other organisations

  • The decision mechanisms of a trained model for or against flagging claims must be human-readable and explainable on a per-claim basis.

  • A QA loop for the continuous evaluation of the validity of the model must allow monitoring of sensitivity and specificity in terms of false-positive and false-negative decisions (a monitoring sketch follows this list).
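
A minimal sketch of such a data exchange, assuming a FHIR R4 endpoint exposing Claim resources; the base URL, path and credentials are deployment-specific placeholders:

    import requests

    # Deployment-specific placeholders: the actual base URL, FHIR path and
    # authentication scheme depend on the openIMIS installation.
    FHIR_BASE_URL = "https://openimis.example.org/api/api_fhir_r4"
    AUTH = ("ai_component_user", "change-me")

    def fetch_claims(page_size=100):
        """Fetch one page of Claim resources from the openIMIS FHIR API."""
        response = requests.get(
            f"{FHIR_BASE_URL}/Claim/",
            params={"_count": page_size},
            auth=AUTH,
            timeout=30,
        )
        response.raise_for_status()
        bundle = response.json()  # a FHIR Bundle resource
        return [entry["resource"] for entry in bundle.get("entry", [])]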
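
One possible realisation of model selection, factor definition, training and per-claim explainability, sketched here with a scikit-learn decision tree; the feature names and the tiny training set are invented for illustration only:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Assumed factors (attributes) derived from historical, manually reviewed claims.
    FEATURES = ["total_amount", "num_items", "days_since_last_claim", "provider_flag_rate"]

    # X: one row per historical claim; y: 1 = rejected/adjusted in manual review, 0 = accepted
    X = np.array([[120.0, 3, 40, 0.02],
                  [950.0, 12, 2, 0.30],
                  [60.0, 1, 90, 0.01],
                  [780.0, 9, 5, 0.25]])
    y = np.array([0, 1, 0, 1])

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    def explain_claim(model, features, x):
        """Return the decision path for one claim as human-readable conditions."""
        node_indicator = model.decision_path(x.reshape(1, -1))
        tree = model.tree_
        explanation = []
        for node_id in node_indicator.indices:
            if tree.children_left[node_id] == tree.children_right[node_id]:
                continue  # leaf node, no condition to report
            name = features[tree.feature[node_id]]
            threshold = tree.threshold[node_id]
            op = "<=" if x[tree.feature[node_id]] <= threshold else ">"
            explanation.append(f"{name} {op} {threshold:.2f}")
        return explanation

    print(explain_claim(model, FEATURES, np.array([800.0, 10, 3, 0.28])))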
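
The QA loop itself only needs the manual review outcomes of the QA sample and the flagged claims; a minimal sketch of the monitoring calculation:

    def qa_metrics(reviewed):
        """Compute sensitivity and specificity from manually reviewed claims.

        `reviewed` is a list of (model_flagged, actually_invalid) pairs collected
        from the QA sample (step 3.2) and from manually reviewed flagged claims.
        """
        tp = sum(1 for flagged, invalid in reviewed if flagged and invalid)
        fn = sum(1 for flagged, invalid in reviewed if not flagged and invalid)
        tn = sum(1 for flagged, invalid in reviewed if not flagged and not invalid)
        fp = sum(1 for flagged, invalid in reviewed if flagged and not invalid)
        sensitivity = tp / (tp + fn) if (tp + fn) else None  # share of invalid claims caught
        specificity = tn / (tn + fp) if (tn + fp) else None  # share of valid claims not flagged
        return {"sensitivity": sensitivity, "specificity": specificity,
                "false_positives": fp, "false_negatives": fn}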







