openIMIS Testing Scenarios
Draft 19 05 2021
Several authors agree that software testing is an essential step for detecting and preventing bugs; the testing phase also helps guarantee that the software satisfies the user requirements and results in a good-quality, reliable product. Software that is free of bugs and meets the requirements of the user needs to pass various tests. These tests are generally grouped into two larger types: functional testing, which covers all functions of the software, is mandatory and ends with the user acceptance test, confirming that the software is ready to be implemented; non-functional testing verifies the non-functional aspects of the software, for instance that it is secure.
Functional and non-functional tests are equally important and required, but they differ in several respects[1].
Functional testing | Non-functional testing |
Verifies each function/feature of the software | Verifies non-functional aspects such as performance, usability, and reliability |
Can be performed manually | Hard to perform manually |
Based on customer requirements | Based on customer expectations |
Validates the actions of the software | Validates the performance of the software |
Describes what the product does | Describes how the product works |
Table 1: Difference between functional and non-functional testing
openIMIS is free, open-source software that is developed and maintained by a group of developers organised in a community. The community supports the update and maintenance of the software. While the free licence is one of the key benefits of open source, a disadvantage is maintenance, which raises the crucial question: what if the updates stop? Would the users of openIMIS have the capacity to maintain and update the software if the community initiative ends?
Without doubt, the maintenance and update of openIMIS would be significantly affected by the end of the community initiative. To mitigate that risk, the openIMIS initiative is facilitating the transfer of knowledge and expertise. A proposed approach is to transfer knowledge on testing openIMIS via test cases and scenarios, where a scenario corresponds to the set of tests to perform in a particular context, and a test case is the list of actions to perform when running a test.
Testing Scenarios of openIMIS
Since the first deployment of openIMIS, testing has played a key role, especially functional testing. Every new deployment of the software goes through the entire functional test list. A new implementation of the software requires performing all of the functional tests and some of the non-functional tests. Another scenario requiring the same level of attention is the release of a new version of openIMIS. Before a new version of the software is released, or before a new functionality is added, several tests are performed.
After a release of openIMIS, it is still possible to find a bug. Depending on the severity of the bug, different types of tests may be run, both functional and non-functional. The severity of a bug is indicated by the degree of negative impact it has on the quality of the software[2].
Critical bug: Critical functionalities or data are affected and the software is at great risk. Examples are a failure to access the data of beneficiaries or a security breach in the system.
Major bug: Critical functionalities or data are affected, but use of the platform is not stopped. For example, a user is unable to enrol a new beneficiary but can still manage the existing list of beneficiaries.
Minor bug: A bug that affects minor functionalities or non-critical data, such as a field not displaying the entire name of a beneficiary.
Trivial bug: A bug that does not affect any functionality or data but merely causes some inconvenience, such as a wrong translation of a field.
A new implementation of openIMIS, an update, or a discovered bug can each initiate a different scenario, with a different set of tests for each scenario.
Figure 1: List of tests
Scenario 1: A new implementation
For every new implementation of openIMIS, the functional tests are mandatory before the software is accepted by the user. Currently, only the testing steps of the acceptance testing are shared with the public. Because the test cases of the integration and system testing are performed but not recorded, the steps taken to fix an encountered issue cannot be reproduced by another developer. A reproducible solution must correspond to a specific, well-described issue.
Steps | Type of test | Target | Test type | Actors |
Before the implementation | Unit test | Test each unit/module | Manual | Developers |
Before the implementation | Integration test | Test if the combined modules work together | Manual | Developers |
Before the implementation | 2.a User interface test | Test if the user interface works as required | Manual | Developers/Users |
Before the implementation | System test | Test the complete software | Manual | Developers/Users |
Before the implementation | Acceptance test | Test if the software meets the user requirements | Manual | Users |
During the implementation | Installation test | Test if the software is correctly installed | Manual | Developers |
During the implementation | 4.a Documentation testing | Test if the documentation about how to use the system matches what the system does | Manual | Developers/Users |
During the implementation | 5.a Security testing | Test that security breaches can be prevented | Manual/Automated | Security expert |
During the implementation | 5.b Performance testing | Test the speed, stability, scalability, and resource usage under a particular workload | Manual/Automated | Testing expert |
Before and during an implementation, many actors with different expertise are required to test the software. Even where some tests can be automated, the assistance of an expert to plan and execute them is still required.
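The unit and integration steps in the table above can be sketched as plain Python test functions. The `enrol()` helper and its rules are invented for illustration; real openIMIS module APIs differ.

```python
# Hypothetical example: the enrol() helper is invented for illustration;
# real openIMIS module APIs differ.

def enrol(registry, insuree_id, name):
    """Add an insuree to the registry; reject duplicates and empty names."""
    if not name or insuree_id in registry:
        return False
    registry[insuree_id] = name
    return True

# Unit test: exercise one module in isolation.
def test_enrol_rejects_duplicates():
    registry = {"I001": "Alice"}
    assert enrol(registry, "I001", "Bob") is False  # duplicate ID refused
    assert registry["I001"] == "Alice"              # existing data untouched

# Integration test: combine enrolment with a second (trivial) "module"
# to check that the modules work together.
def count_insurees(registry):
    return len(registry)

def test_enrol_and_report_work_together():
    registry = {}
    assert enrol(registry, "I002", "Bob") is True
    assert count_insurees(registry) == 1

test_enrol_rejects_duplicates()
test_enrol_and_report_work_together()
```

The point of the sketch is the separation of levels: the unit test touches only one function, while the integration test checks two modules against each other, matching steps 1 and 2 of the table.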
Scenario 2: A new release
Unlike the tests for a new implementation, the tests for a new release are not meant to satisfy a client: there are no user requirements to satisfy, and the number of tests to run is significantly reduced. The security and performance tests are not mandatory for a new release, but they are recommended if the release affects more than one module. If two or more modules are updated, this scenario should also include unit and integration tests.
Steps | Type of test | Target | Test type | Actors |
Before the release (if more than one module is updated) | Unit test | Test each unit/module | Manual | Developers |
Before the release (if more than one module is updated) | Integration test | Test if the combined modules work together | Manual | Developers |
Before the release | 2.b Regression test | Test if the software works properly after a change in a module | Manual | Developers |
Before the release | System test | Test the complete software | Manual | Developers/Users |
During the release | Installation test | Test if the software is correctly installed | Manual | Developers |
During the release | 4.a Documentation testing | Test if the documentation about how to use the system matches what the system does | Manual | Developers/Users |
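The regression step in the table above can be illustrated as follows: a regression test re-runs a check for a previously fixed defect whenever a module changes. The `display_name()` function and the defect it guards against are hypothetical, echoing the "minor bug" example of a truncated beneficiary name.

```python
# Hypothetical regression test: guards against a previously fixed bug
# (a display field that truncated long beneficiary names).

def display_name(name, width=40):
    """Return the full name, padded (never truncated) to the display width."""
    return name.ljust(width) if len(name) <= width else name

def test_long_names_are_not_truncated():
    long_name = "A" * 60
    # The old buggy behaviour cut the name to `width` characters; this
    # assertion fails again if that behaviour is ever reintroduced.
    assert display_name(long_name) == long_name

test_long_names_are_not_truncated()
```

Keeping one such test per fixed bug is what makes the regression suite grow alongside the software, so a release cannot silently reintroduce an old defect.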
Scenario 3: A bug is found
Minor or trivial bugs require testing but not a specific scenario. A critical bug, because of its high severity, requires several tests, not only to fix the issue but also to prevent other critical bugs from occurring. The tests addressing a major bug are specific to the affected module rather than the entire system.
Bug severity | Type of test | Target | Test type | Actors |
Critical | Unit test | Test each unit/module | Manual | Developers |
Critical | Integration test | Test if the combined modules work together | Manual | Developers |
Critical | 2.a User interface test | Test if the user interface works as required | Manual | Developers/Users |
Major | 2.b Regression test | Test if the software works properly after a change in a module | Manual | Developers |
Critical/Major | System test | Test the complete software | Manual | Developers/Users |
Critical/Major | Installation test | Test if the software is correctly installed | Manual | Developers |
Critical | 5.a Security testing | Test that security breaches can be prevented | Manual/Automated | Security expert |
Critical | 5.b Performance testing | Test the speed, stability, scalability, and resource usage under a particular workload | Manual/Automated | Testing expert |
[1] Functional Vs Non-Functional Testing – Difference Between Them
[2] https://softwaretestingfundamentals.com/defect-severity/
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/