Release 23.10 Performance Testing

Introduction

Performance testing plays a pivotal role in ensuring that software applications run smoothly under their intended workloads.

It's not just about how fast the system responds but also about how reliably and consistently it performs.

With the growing user base of openIMIS, it's essential to understand the system's limits and ensure that it delivers an optimal experience, irrespective of the load.

Objectives of Performance Testing

Ensure that openIMIS meets predefined performance criteria and delivers a satisfactory user experience.

Identify performance bottlenecks, scalability issues, and areas for optimization before the software is deployed to production.

Core Functionalities to be Tested

Core scenarios covering the Insuree, Family, Claim, and Product components.

Types of Tests to be Performed

Load Testing 

Load testing is a type of performance testing that assesses how well a system or application performs under various loads (measured with different workload profiles and numbers of concurrent users) without performance degradation. It involves defining test scenarios, generating load, monitoring metrics, and reporting findings. The goal is to detect performance issues early, enhance reliability, improve user satisfaction, and optimize resource allocation. Related load testing types include stress testing, volume testing, concurrency testing, scalability testing, and endurance testing, each serving a specific purpose in assessing a system's performance capabilities.
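As a rough illustration of the shape of such a scenario (the actual tests are implemented as JMeter test plans), the Python sketch below spins up a fixed number of simulated users against a hypothetical openIMIS endpoint and summarizes response times; the URL, concurrency level, and request counts are assumptions, not project settings.

```python
# Minimal load-testing sketch (illustrative only; the real tests use JMeter).
# The endpoint URL and the concurrency level are assumptions, not project settings.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://localhost:8000/api/claim/"  # hypothetical openIMIS endpoint
CONCURRENT_USERS = 10                          # assumed workload profile
REQUESTS_PER_USER = 50

def simulate_user(user_id: int) -> list[float]:
    """Issue a series of requests and record each response time in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(BASE_URL, timeout=30)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(simulate_user, range(CONCURRENT_USERS)))
    all_timings = [t for user in results for t in user]
    print(f"requests: {len(all_timings)}")
    print(f"mean latency: {statistics.mean(all_timings):.3f}s")
    print(f"p95 latency:  {statistics.quantiles(all_timings, n=20)[-1]:.3f}s")
```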

Stress Testing 

Stress testing determines the breaking point of an application. It gives users and stakeholders information about how much load, and how many concurrent users, the system can handle before it breaks or shows potential risks and weak points. This way we can provide accurate information about openIMIS capabilities on the given hardware and software.
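A minimal sketch of the idea, assuming a hypothetical endpoint and arbitrary thresholds (the real stress tests are driven by JMeter): raise the number of concurrent users step by step until the error rate or mean latency crosses a limit, which approximates the breaking point.

```python
# Stress-testing sketch: increase concurrency step by step until the error rate
# or latency exceeds a threshold. URL, step sizes, and thresholds are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://localhost:8000/api/insuree/"  # hypothetical endpoint
ERROR_RATE_LIMIT = 0.05    # stop once more than 5% of requests fail
LATENCY_LIMIT_S = 2.0      # ...or mean latency exceeds 2 seconds

def probe(_: int) -> tuple[bool, float]:
    """Send one request and report (success, response time in seconds)."""
    start = time.perf_counter()
    try:
        ok = requests.get(BASE_URL, timeout=10).ok
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

for users in (10, 25, 50, 100, 200, 400):
    with ThreadPoolExecutor(max_workers=users) as pool:
        outcomes = list(pool.map(probe, range(users * 10)))
    error_rate = sum(not ok for ok, _ in outcomes) / len(outcomes)
    mean_latency = sum(t for _, t in outcomes) / len(outcomes)
    print(f"{users} users: error rate {error_rate:.1%}, mean latency {mean_latency:.2f}s")
    if error_rate > ERROR_RATE_LIMIT or mean_latency > LATENCY_LIMIT_S:
        print(f"breaking point reached around {users} concurrent users")
        break
```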

Endurance testing 

Endurance testing is a type of performance testing that evaluates how a system or application performs under a sustained workload over an extended period. It helps identify issues such as memory leaks and resource depletion that may affect long-term system stability.
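A minimal sketch of the idea, assuming a hypothetical endpoint and an arbitrary soak duration: keep a light, constant load running for hours and compare early and late latency windows to spot gradual degradation.

```python
# Endurance-testing sketch: apply a light, constant load for a long period and
# compare early vs. late latency windows to spot gradual degradation
# (e.g. caused by memory leaks). Duration, URL, and pacing are assumptions.
import statistics
import time

import requests

BASE_URL = "http://localhost:8000/api/family/"  # hypothetical endpoint
DURATION_S = 4 * 60 * 60                        # assumed 4-hour soak
PACING_S = 1.0                                  # one request per second

timings = []
deadline = time.monotonic() + DURATION_S
while time.monotonic() < deadline:
    start = time.perf_counter()
    requests.get(BASE_URL, timeout=30)
    timings.append(time.perf_counter() - start)
    time.sleep(PACING_S)

window = max(len(timings) // 10, 1)              # first and last 10% of samples
first, last = timings[:window], timings[-window:]
print(f"early mean latency: {statistics.mean(first):.3f}s")
print(f"late mean latency:  {statistics.mean(last):.3f}s")
```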

Test Environment

Hardware and Software Details

AWS EC2 m5.4xlarge instance

Network Configuration

Standard WAN configuration

Data Preparation

Data Creation Scripts

Scripts for data generation will be created and stored in a common repository.
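For illustration, a data-generation script could look roughly like the sketch below, which writes synthetic insuree records to CSV; the field names and volumes are assumptions and do not necessarily match the openIMIS schema or the final scripts in the repository.

```python
# Data-generation sketch: write synthetic insuree records to CSV at roughly the
# volumes listed in the table below. Field names are illustrative and may not
# match the openIMIS database schema.
import csv
import random
import uuid
from datetime import date, timedelta

def generate_insurees(path: str, count: int) -> None:
    """Write `count` synthetic insuree rows to a CSV file."""
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["insuree_uuid", "chf_id", "last_name", "other_names", "dob", "gender"])
        for i in range(count):
            dob = date(1950, 1, 1) + timedelta(days=random.randint(0, 25000))
            writer.writerow([
                uuid.uuid4(),
                f"{i:09d}",          # synthetic insurance number
                f"LastName{i}",
                f"FirstName{i}",
                dob.isoformat(),
                random.choice(["M", "F"]),
            ])

if __name__ == "__main__":
    generate_insurees("insurees_small.csv", 100_000)  # smallest insuree volume (0.1 million)
```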

Data Loads Used during Testing

| Data Set | Load Test Volumes (millions) | Data Increase (per Hour) | Relations | Concurrent Users |
| --- | --- | --- | --- | --- |
| Insurees | 0.1 / 1 / 10 | N/A | Family, Claims | 2 / 10 / 50 |
| Claims | 0.3 / 3 / 30 | 100 / 1k / 5k | Policies, Insurees, Products | 2 / 10 / 50 |
| Families | 0.04 / 0.4 / 4 | N/A | Insurees, Policies, Contributions | 2 / 10 / 50 |
| Users | 0.0001 / 0.001 / 0.01 | N/A | N/A | 2 / 10 / 50 |
| Products | 0.0001 / 0.001 / 0.01 | N/A | Claims | 2 / 10 / 50 |
| Policies | 0.08 / 0.8 / 8 | N/A | Families, Claims | 2 / 10 / 50 |

Tools

Tests will be performed using the JMeter tool, and results will be visualized using Grafana.

All code used for data generation, test execution, and data transformation will be available in the https://github.com/openimis/performance_testing repository.
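For illustration, a test run could be driven from Python roughly as sketched below, launching JMeter in non-GUI mode and keeping the JTL results file for later analysis and visualization; the test-plan and output paths are placeholders, not the repository's actual layout.

```python
# Sketch of driving a JMeter run in non-GUI mode from Python and keeping the
# results file for later visualization. Paths and the test-plan name are
# assumptions; the actual scripts live in the performance_testing repository.
import subprocess
from pathlib import Path

TEST_PLAN = Path("test_plans/claims_load.jmx")   # hypothetical JMeter test plan
RESULTS = Path("results/claims_load.jtl")
REPORT_DIR = Path("results/claims_load_report")

RESULTS.parent.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "jmeter",
        "-n",                    # non-GUI mode
        "-t", str(TEST_PLAN),    # test plan to execute
        "-l", str(RESULTS),      # raw results (JTL) for later analysis
        "-e",                    # generate the HTML report dashboard...
        "-o", str(REPORT_DIR),   # ...into this (empty) directory
    ],
    check=True,
)
print(f"Results written to {RESULTS}")
```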

Performance Testing Plan

Test Results

Deliverables

Performance Testing Strategy Document: A comprehensive guide detailing the testing approach, objectives, scope, and methodologies.

Test Scripts: Coded scripts that virtual users will execute during the tests.

GitHub repository containing the whole performance testing project

Test Environment Specifications: Documentation of the environment setup, including hardware, software, and configurations.

This Confluence page is filled with the proper environment details

Test Results: Raw data and logs collected during test execution.

A separate Confluence page with reports from every performance test run (the initial run and subsequent runs before each release)

Performance Analysis Report: A document detailing the test findings, including key metrics, bottlenecks, and recommendations.

For the initial test run, a detailed document on findings and recommendations

Recommendations and Action Items List: A prioritized list of suggested improvements and next steps based on test results.

Tickets with recommendations and fixes are created and prioritized by criticality

Issue and Bug Reports: Detailed reports of any defects or performance issues identified.

Bug tickets are created for each defect and tagged with the proper label (e.g. performance_bug)




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/