openIMIS Virtual Workshop - Feedback Survey & openIMIS Testing Workshop
This page summarizes feedback from testers regarding the openIMIS test cases, collected as of February 20, 2025. The responses address clarity, coverage, usability, and potential improvements for test cases, execution, and tools. The goal is to analyze the feedback, derive actionable insights, and propose enhancements to the testing process.
Virtual workshop date: Feb 20, 2025, 11 AM CET
Agenda Presentation:
Agenda Points:
Workshop Output, Analysis and Recommendations
A total of five testers provided feedback across ten questions. Below is a condensed analysis of their responses:
Are there any test cases that are unclear or difficult to understand?
60% (3/5) said "No."
40% (2/5) said "Yes," citing issues like:
Descriptions not matching app workflows (e.g., antenatal claim creation).
Lack of clear instructions for specific items/services.
Have you encountered any missing test scenarios or edge cases?
80% (4/5) said "No."
20% (1/5) said "Yes," suggesting that prerequisites for test cases should be better organized.
Are there repetitive steps or redundant test cases?
100% (5/5) said "No."
Are there specific tools or automation strategies to improve efficiency?
80% (4/5) said "No."
20% (1/5) suggested Selenium and API testing.
How hard is it to use TestLink for testing?
40% (2/5) rated it "1 - Not hard at all."
60% (3/5) rated it "2 - Slightly hard."
Issues with test case environment setup or configuration?
60% (3/5) said "No."
40% (2/5) said "Yes," mentioning:
Hard-to-configure conditions for unfamiliar users.
Single test environment causing data manipulation issues.
Do test cases cover functional and non-functional aspects adequately?
100% (5/5) said "Yes."
Problems with pre- and post-conditions?
60% (3/5) said "No."
40% (2/5) said "Yes," citing:
Multiple testers overwriting results.
Need for clearer precondition examples (e.g., creating health facilities, insurees).
Understanding of openIMIS business flow?
Testers demonstrated varying familiarity:
Suggestions included better documentation of flows (e.g., Enrollment, Claims Management). Training on openIMIS was requested.
Other suggestions?
Training on openIMIS (local hosting, AeHIN course).
Multiple test environments to avoid data conflicts.
Synthetic data for preconditions.
Reward points for testers.
Key Metrics
Based on the responses, here are some derived metrics:
Test Case Clarity: 60% of testers found all test cases clear (3/5 "No" to unclear cases).
Coverage Gaps: 20% reported potential missing scenarios (1/5 "Yes").
Environment Issues: 40% faced setup challenges (2/5 "Yes").
Tool Usability: 60% found TestLink slightly hard (3/5 rated "2").
Automation Interest: 20% suggested automation tools (1/5 "Yes").
Analysis
Strengths
High Coverage: All testers agreed that functional and non-functional aspects are adequately covered.
No Redundancy: No repetitive steps were identified, indicating efficient test design.
General Usability: Most testers found TestLink manageable (average difficulty ~1.6/5).
Weaknesses
Clarity Issues: 40% struggled with unclear test cases, often due to mismatched descriptions or missing instructions (e.g., antenatal claims).
Environment Setup: 40% highlighted difficulties with single-server testing and complex configurations.
Precondition Clarity: 40% noted issues with pre/post-conditions, including result tracking for multiple testers.
Opportunities
Training: Testers requested OpenIMIS business flow training and local hosting guidance.
Automation: Tools like Selenium, Cypress, or http://Bugbug.io could streamline execution, though developer skills are needed.
Multi-Environment Testing: Separate instances could prevent data conflicts.
Threats
Scalability: Single-server testing may worsen as tester numbers grow.
Onboarding: New testers may fail tests due to unclear steps (e.g., Dr. Julius’ feedback on navigation).
Recommendations
The recommendations below are partly based on assumptions and tester suggestions, particularly in the "Training & Documentation" section. Exploring automation would likely involve developing a codebase, which may require testers to have developer skills. Improving the environment could face challenges due to limited resources and the added burden of server management.
Improve Clarity:
Revise test case descriptions to match app workflows (e.g., specify antenatal claim items).
Standardize Step 1 navigation (e.g., "Go to Social Protection > Benefit Plans").
Enhance Environment:
Set up multiple test instances to avoid data manipulation issues.
Provide synthetic data for preconditions (e.g., pre-populated health facilities, insurees); see the data-generation sketch after this list.
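To illustrate the synthetic-data idea, the minimal Python sketch below generates placeholder insuree records as a CSV file that could seed test preconditions. The field names, ID format, and output file are assumptions for illustration only and do not reflect the actual openIMIS data model or bulk-import format.

```python
# Minimal sketch: generate synthetic insuree records for test preconditions.
# NOTE: field names, ID format, and output file are illustrative assumptions,
# not the actual openIMIS data model or import format.
import csv
import random
from datetime import date, timedelta

FIRST_NAMES = ["Amina", "John", "Fatima", "Peter", "Grace"]
LAST_NAMES = ["Okello", "Mwangi", "Abdi", "Banda", "Chirwa"]

def synthetic_insurees(count: int):
    """Yield dictionaries describing fake insurees for test setup."""
    for i in range(count):
        yield {
            "insuree_id": f"TEST-{i:05d}",                        # synthetic identifier
            "first_name": random.choice(FIRST_NAMES),
            "last_name": random.choice(LAST_NAMES),
            "date_of_birth": date(1980, 1, 1) + timedelta(days=random.randint(0, 15000)),
            "health_facility": f"HF-{random.randint(1, 5):03d}",  # pre-created facility code
        }

if __name__ == "__main__":
    with open("synthetic_insurees.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["insuree_id", "first_name", "last_name",
                        "date_of_birth", "health_facility"],
        )
        writer.writeheader()
        writer.writerows(synthetic_insurees(50))
```

Each tester (or each test instance) could load its own copy of such a file, which would also reduce the result-overwriting issues noted in the feedback.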
Training & Documentation:
Offer openIMIS business flow training (e.g., possibly via the AeHIN course).
Document key flows: Enrollment, Claims, Financials, Analytics.
Automation Exploration:
Pilot Selenium or API testing for repetitive tasks (see the example sketch after this list).
Evaluate low-code options like http://Bugbug.io for non-developers.
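A possible starting point for a Selenium pilot is sketched below in Python. The URL, element locators, and credentials are placeholders (assumptions); the actual openIMIS selectors would need to be confirmed against the test environment before this could run meaningfully.

```python
# Minimal Selenium sketch for a repetitive UI check (login + page title).
# NOTE: the URL, locators, and credentials below are placeholders; the real
# openIMIS selectors must be taken from the actual test environment.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://demo.example.org"  # placeholder test-instance URL

def login_and_check_title(username: str, password: str) -> str:
    """Log in through the web UI and return the resulting page title."""
    driver = webdriver.Chrome()  # assumes chromedriver is available on PATH
    try:
        driver.get(BASE_URL)
        wait = WebDriverWait(driver, 10)
        # Placeholder locators -- adjust to the real openIMIS login form.
        wait.until(EC.presence_of_element_located((By.NAME, "username"))).send_keys(username)
        driver.find_element(By.NAME, "password").send_keys(password)
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        wait.until(EC.title_contains("openIMIS"))
        return driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    print(login_and_check_title("tester", "secret"))
```

A similar pilot could target the REST/GraphQL API layer instead of the UI, which is usually faster and less brittle for repetitive regression checks; either approach would need a small maintained codebase, as noted above.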
Tester Support:
Track results per tester to avoid overwrites.
Introduce a points-based reward system.
Next Steps
Review Test Cases: Identify and fix unclear cases.
Investigate Missing Scenarios: Check feedback on prerequisites.
Plan Testing: Schedule openIMIS onboarding by March 2025.
Test Environment: Explore multi-instance setup feasibility.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/