AI in Health Insurance
| Topic | Target group | Mode | Duration | Language | Latest update | No. participants |
|---|---|---|---|---|---|---|
| AI in health insurance | Health insurance officers, clinical specialists, IT specialists (pre-conference session, AeHIN GM 2023) | In-person session + recording available | 2h | EN | 2023-11-06 | Approx. 20 + recording viewers |
Description
The session is envisioned as a space where countries can share their digital initiatives in managing health insurance schemes and the practical challenges that come with them. It will feature artificial intelligence (AI) as a potential tool for addressing some of these challenges, exploring what AI can and cannot solve in the health insurance sector.
Contributors
Joint Learning Network/World Bank: Mr. Somil Nagpal (Lead Health Specialist) and Mr. Pandu Harimurti (Senior Health Specialist)
BPJS: Mr. Donni Hendrawan (Deputy Director of Data Management and Information), Mr. Jusron (Data Analytics Head)
openIMIS Initiative: Saurav Bhattarai (Advisor and openIMIS Team Lead)
GIZ: Karlina Octaviany (AI Specialist, FAIR Forward)
Objectives
Discuss country challenges in digitizing health insurance management
Describe BPJS’ experience in adopting AI for health insurance
Explain how AI works in conjunction with existing insurance management information systems
Critique the role of AI in the health insurance sector
Agenda
GIZ Indonesia: Session orientation / facilitation (3-5 min)
JLN/WB: “Examples of Applied AI in Health Insurance” (15-20 min)
● Collection of concrete examples (e.g. fraud detection)
● Conclusions on long-term perspective and risks (reference to digital health report)
BPJS: “Experiences of BPJS-K on Data Analytics: From Data Governance to AI” (15-20 min)
● Personal data protection
● Implementation for fraud management (claims verification)
openIMIS Initiative: “Opportunities of AI in Health Insurance” (15-20 min)
● Examples of AI claims adjudication (AI module in openIMIS)
Discussion and Open Forum (50 min)
Documentation
Presentations (see under 06 Nov 2023, 'Parallel Presentations SET B')
Summary of the session (below)
openIMIS talks about Opportunities of AI for Health Insurance at the AeHIN GM 2023
To discuss the potentials and challenges of using Artificial Intelligence (AI) as a tool for managing health insurance schemes, the openIMIS Initiative, together with Badan Penyelenggara Jaminan Kesehatan (BPJS-K), the Social Security Agency for Health of Indonesia, the Joint Learning Network, and the World Bank, held a joint session on ‘AI for Health Insurance’ at the Asia eHealth Information Network (AeHIN) General Meeting (GM) 2023 on November 6, 2023, at the JS Luwansa Hotel and Convention Center, Jakarta, Indonesia. The AeHIN GM discussed various digital health topics under the overall theme, “Ensuring Digital Health for Better Outcomes: Putting Blueprints into Practice.”
From the left: Karlina Octaviany (FAIR Forward - AI for All, GIZ), Saurav Bhattarai (openIMIS, GIZ), Somil Nagpal (World Bank, Joint Learning Network), Donni Hendrawan (BPJS-K), and Malarvizhi Veerappan (World Bank, Joint Learning Network) during the ‘AI for Health Insurance’ session’s open forum.
Around 50 delegates representing government, academe, development partners, civil society, and professional societies within and beyond South and South-East Asia participated in the ‘AI for Health Insurance’ session, which explored what AI can and cannot solve in the health insurance sector. Karlina Octaviany, AI Specialist at the ‘FAIR Forward – Artificial Intelligence for All’ initiative at GIZ Indonesia, moderated the two-hour knowledge-sharing session, which discussed examples of applied AI in health insurance, presented opportunities for AI to work with existing (health) insurance management information systems, and shared BPJS-K’s experience in adopting AI for health insurance.
Opportunities for AI in Health Insurance
Saurav Bhattarai (openIMIS, GIZ) presenting on the opportunities of AI in health insurance, specifically in claims adjudication
Saurav Bhattarai, Advisor and lead for the openIMIS initiative at GIZ, presented opportunities for AI in health insurance in the context of claims adjudication. He started his presentation with an example of a typical insurance claims workflow:
Claims submission: The process starts with claims submission, which can be done manually or digitally by health facilities.
Rules engine: A typical IT system has some rules programmed in, generally simple checks on whether the health insurance policy is active, whether the person is insured, whether the claimed services are covered and applicable under the policy, and whether frequency limits are respected (a simplified sketch of such checks follows this list). Bhattarai further explained that the rules engine is classified as part of AI, even if it is rudimentary, because the computer is making a decision about whether claims are approved or rejected.
Manual evaluation: After passing the rules engine checks, the claim undergoes manual evaluation.
Payment: After claims are verified through manual evaluation, payment is approved; if not, the health facility receives a response.
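To make the workflow concrete, the following is a minimal Python sketch of the kind of rudimentary rules engine described above. The field names, policy structure, and checks are illustrative assumptions, not the actual openIMIS implementation.

```python
from datetime import date

# Illustrative, simplified rules engine for pre-screening a submitted claim.
# Field names and rules are assumptions for this sketch only; they do not
# mirror the actual openIMIS rules engine.

def rules_engine(claim: dict, policy: dict, visits_this_year: int) -> tuple[bool, str]:
    """Return (passed, reason); a passing claim moves on to manual evaluation."""
    today = date.today()

    # 1. Is the health insurance policy active?
    if not (policy["start_date"] <= today <= policy["end_date"]):
        return False, "Policy not active"

    # 2. Is the person insured?
    if claim["insuree_id"] not in policy["covered_insurees"]:
        return False, "Person not covered by this policy"

    # 3. Are the claimed services covered and applicable under the policy?
    uncovered = [s for s in claim["services"] if s not in policy["covered_services"]]
    if uncovered:
        return False, f"Services not covered: {uncovered}"

    # 4. Frequency limits (e.g. a maximum number of visits per year).
    if visits_this_year >= policy["max_visits_per_year"]:
        return False, "Frequency limit exceeded"

    return True, "Passed automated checks; forward to manual evaluation"
```

A claim that fails any of these checks is answered automatically; everything that passes moves on to the manual evaluation and payment steps above.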
As an implementation example, Bhattarai showed the number of claims received per day by the National Health Insurance in Nepal, which grew from a few hundred claims per day in 2016 to around 15,000 claims per day in 2021. The trend shows an exponential increase that continues today. Bhattarai mentioned that digital health is one of the contributing factors to this increase, which began when the National Health Insurance mandated the use of the Fast Healthcare Interoperability Resources (FHIR) standard for electronic health records. This meant that more electronic health record (EHR) systems in hospitals became compatible with the IT system used for submitting digital claims to the health insurance. Digitalization is making it easier for health facilities to submit claims; however, health insurers are finding it difficult to review the increasing number of claims. As a result, there is a huge gap between the number of claims received and the number of claims reviewed.
Bhattarai reiterated that the bottleneck lies in adjudication. The number of claims per day increases every year, while the human capacity to review them remains limited. In 2019, only 16 officers were hired to review claims. On a good day, one officer can review up to 100 claims. Theoretically, all available officers combined can review only up to 1,600 claims per day out of the roughly 15,000 received. This situation hurts healthcare delivery, because when health facilities are not paid, they cannot provide services to the population that needs them. At the moment, 30,000 claims are projected to be received per day, which means around 300 officers would be needed to manage this daily demand. Thus, automated claim categorization through AI models is needed to reduce the time between claims submission and claims payment and, therefore, to increase access to healthcare.
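The capacity gap can be restated as a quick back-of-the-envelope calculation in Python, using only the figures quoted above (100 claims per officer on a good day, 16 officers, 15,000 claims received per day, 30,000 projected).

```python
# Back-of-the-envelope capacity check, using only the figures quoted in the session.
claims_per_officer_per_day = 100        # one officer reviews up to 100 claims on a good day

officers_2019 = 16
daily_capacity = officers_2019 * claims_per_officer_per_day             # 1,600 claims/day
daily_claims_2021 = 15_000
unreviewed_per_day = daily_claims_2021 - daily_capacity                 # 13,400 claims/day left unreviewed

projected_daily_claims = 30_000
officers_needed = projected_daily_claims // claims_per_officer_per_day  # 300 officers

print(daily_capacity, unreviewed_per_day, officers_needed)
```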
By adding machine learning to the same claims submission workflow, claims can now be automatically categorized. In this context, an AI engine was created that learns from past manual reviews to classify claims automatically and flag claims for rejection. If the AI engine starts rejecting claims, those claims go through a manual quality assurance process so that human reviewers can check whether the AI is rejecting claims correctly. This manual quality assurance helps the AI engine keep learning, as human feedback is what makes the AI engine better and better (a sketch of this setup follows the development steps and challenges below). Bhattarai explained the steps followed to incorporate machine learning into the claims submission workflow:
Data gathering and preparation: Creation of an AI input data model, including sanity checks on the database, processing of categorical data, and data normalization (a minimal data-preparation sketch follows this list).
Implementation of the AI algorithm: Designing the AI methods, model outputs, and evaluation metrics, then programming the AI model itself.
Software development: Development of the AI modules as well as program interfaces using the FHIR standard; claims entering the AI module are FHIR Claim resources.
User acceptance testing: Testing of the AI modules.
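As a purely illustrative sketch of the data-preparation step, the snippet below reduces FHIR Claim resources (handled as Python dicts) to a small feature table, one-hot encodes the categorical field, and normalizes the claimed amount. The chosen fields and encodings are assumptions, not the actual openIMIS Claim-AI input data model.

```python
import pandas as pd

# Purely illustrative feature extraction from FHIR Claim resources (dicts).
# The chosen fields and encodings are assumptions, not the openIMIS data model.

def claim_to_features(claim: dict) -> dict:
    """Reduce one FHIR Claim resource to a few example features."""
    return {
        "claim_type": claim["type"]["coding"][0]["code"],          # categorical, e.g. 'institutional'
        "item_count": len(claim.get("item", [])),                  # number of billed items
        "total_amount": claim.get("total", {}).get("value", 0.0),  # claimed amount
    }

def prepare_dataset(claims: list[dict]) -> pd.DataFrame:
    """Sanity-check, encode categorical data, and normalize numerical data."""
    df = pd.DataFrame([claim_to_features(c) for c in claims])
    df = df.dropna()                                               # simple sanity check
    df = pd.get_dummies(df, columns=["claim_type"])                # encode categorical data
    amount = df["total_amount"]
    df["total_amount"] = (amount - amount.mean()) / (amount.std() + 1e-9)  # normalize
    return df
```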
On challenges encountered, Bhattarai shared that during data gathering there was initially not a lot of data to learn from, and the few ‘rejected’ records did not indicate reasons for rejection. It was also observed that text fields in the database were not standardized. During algorithm development, one of the challenges was that most of the data was categorical rather than numerical, with rejections based on different types of data and visit types; very few AI models were suited to the type of data that was present. Thus, the resulting AI model was based on extreme gradient boosting. For the development of the claims module, two data streams were used – offline and online data. Offline historical data were used for data gathering, data cleaning, and analysis to train the AI model. The trained model was then applied to online data, i.e. incoming claims, before the AI model was finally executed in production for accepting and flagging claims.
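Since the talk describes a model based on extreme gradient boosting fed by offline (historical) and online (incoming) data streams, a minimal sketch of that setup could look as follows, using the widely used xgboost and scikit-learn libraries. The file names, column names, and threshold are assumptions for illustration, not the actual openIMIS Claim-AI code; the closing comment reflects the manual quality-assurance loop described earlier.

```python
import pandas as pd
from xgboost import XGBClassifier                       # extreme gradient boosting
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Offline stream: historical, already-adjudicated claims.
# File and column names are hypothetical (e.g. output of a prepare_dataset-style step).
offline = pd.read_csv("historical_claim_features.csv")
X = offline.drop(columns=["rejected"])                  # features
y = offline["rejected"]                                 # 1 = rejected, 0 = accepted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))   # evaluation metrics

# Online stream: score incoming claims and flag likely rejections for manual QA.
incoming = pd.read_csv("incoming_claim_features.csv")
p_reject = model.predict_proba(incoming)[:, 1]
flagged_for_manual_qa = incoming[p_reject >= 0.5]
# Reviewers confirm or overturn these flags; their decisions become new training
# labels, which is the manual quality-assurance loop described earlier.
```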
The whole research and development of the AI module presented by Bhattarai was implemented on openIMIS, an interoperable, versatile open-source software for managing health insurance systems. Not only is the software available for download, but all the logic and thinking that went into developing the AI module is also free to use and modify. As openIMIS is a global good, the openIMIS Claim-AI module can be integrated into any management information system (MIS): organizations can adopt the AI module and continue using their own MIS without needing to adopt the whole openIMIS technology suite. With a readily available AI module for claims, interested implementers only need to prepare their data, customize, deploy, and test. Bhattarai encouraged everyone to take advantage of this global public good: “In typical AeHIN fashion, when we help friends, friends will help us; this is us helping friends.”
The openIMIS AI module resources are open source and freely available, with a wider community available for support. Model design details are available here, and the code is also available via GitHub.
Questions addressed and answered by the openIMIS Initiative during the Open Forum:
AI will flag anomalies and so on; has there been regular human intervention to check whether it’s correct?
Saurav Bhattarai: Quality assurance is one of the steps presented. After there is an AI intervention, let’s say the AI flags a claim, there is a provision for a manual review of the AI action. That’s basically where the learning happens if there is a mistake from the AI. As we go on, we still have a lot more data to process, but from the data that we have, the accuracy is increasing as we go along. It takes about a year to gain half a percentage point, but that’s basically how it improves. The manual quality assurance is recommended and very much used.
Since it is an open system, what kind of dataset have you used for training? What is the FPI for this algorithm? It’s good to review them and see whether they fall under the rejection category, but what about the claims that are false positives and selected as legitimate claims?
Saurav Bhattarai: For the data: the software is available as a digital public good, but when we developed it, it had to be developed for a certain use case because there is no global dataset available. The initial solving of the problem, even the model development, was done based on the dataset that we had in the implementation in Nepal – the National Health Insurance in Nepal. That data was used to develop the model, for training, and everything. Right now, what is available as a digital public good is everything about that data. But you can train the (AI) model using your own data. Right now, we’re trying to see if we can get access to some publicly available data, but that’s quite difficult. There are some developer teams that are artificially generating data; of course, it’s not the same as real data, but it serves our testing and showcasing purposes. The model itself is there. For the false positives, I am not an AI expert, but I can refer you to the resources and send you the links. We did have performance issues on version 1, so this is version 2 already.
So, we have to shift gears. For example, if a minister were interested in AI, either in the MoH or an insurance agency, what advice might you give the minister? One or two priority things they can start with, knowing that AI might be very complex and it might be difficult for them to do many things all at the same time.
Saurav Bhattarai: I don’t think we want the minister saying, ‘We should have AI.’ The minister should be saying, ‘Let’s reduce fraudulent claims.’ If the minister starts saying, ‘Let’s have AI,’ then we’re going to have everyone running around buying anything that has the word ‘AI’. It’s more a matter of building capacities within the advisory team of the minister, so that the minister isn’t saying ‘let’s use AI’ but is instead talking about the problems in the health sector and asking for solutions to those problems. That means the digital health and technical teams will decide how, what, and when you can use AI. Or maybe five years later we’ll be talking about a different technology; it won’t be AI, because these technologies will definitely change.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/