
Objective

Explore the migration of the openIMIS database from Microsoft SQL Server to PostgreSQL

Description

The project includes the following parts:

  1. Become familiar with the existing openIMIS implementation in general and the openIMIS database in particular.

  2. Create the openIMIS database structure in PostgreSQL: tables, views, user-defined types.

  3. Migrate stored procedures and functions to PostgreSQL. Analyse and propose optimisations during the migration of stored procedures. This can also include replacing stored procedures with application-level code.

  4. Migrate in-line SQL scripts in all openIMIS-related components: Web Application (legacy and modular), REST API, Web Services, Windows Services. Optional: construct SQL queries through the framework's ORM (Entity Framework or Django ORM) to become DBMS-independent.

  5. Develop a tool for the transfer of data from the MSSQL database to the PostgreSQL database.

  6. Start the migration of stored procedures from the database level to Python code in the modular backend using the Django ORM (DBMS-independent).
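Steps 3 and 6 above can be sketched as follows: logic that lives in a stored procedure becomes a plain Python function in the application layer. The table name (`tblPremium`) and the procedure it replaces (`uspTotalPremiums`) are hypothetical, an in-memory SQLite database stands in for the real backends, and the raw DB-API is used here for brevity rather than the Django ORM:

```python
import sqlite3

def total_premiums_by_policy(conn):
    """Replacement for a hypothetical uspTotalPremiums stored procedure:
    the aggregation now lives in application code, so the same function
    runs unchanged against MS SQL Server, PostgreSQL, or (here) SQLite."""
    cur = conn.execute("SELECT PolicyID, Amount FROM tblPremium")
    totals = {}
    for policy_id, amount in cur:
        totals[policy_id] = totals.get(policy_id, 0) + amount
    return totals

# Demo on an in-memory SQLite database with illustrative data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblPremium (PolicyID INTEGER, Amount REAL)")
conn.executemany("INSERT INTO tblPremium VALUES (?, ?)",
                 [(1, 100.0), (1, 50.0), (2, 75.0)])
print(total_premiums_by_policy(conn))  # {1: 150.0, 2: 75.0}
```

Once the logic is in Python, it is DBMS-independent by construction and can later be expressed through the Django ORM's aggregation API.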

Further Reading

Discussions

  • DB-structure & fields:

    • how compatible are MS SQL Server and PostgreSQL data types?

    • include the data migration tool:

      • paging or streaming?
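A starting point for the type-compatibility question is a translation table. The mappings below are common MS SQL Server → PostgreSQL equivalents, not a verified list for the openIMIS schema, and the translation function is a naive sketch:

```python
import re

# Assumed mapping of common MS SQL Server types to PostgreSQL equivalents;
# the definitive list must be derived from the actual openIMIS schema.
TYPE_MAP = {
    "BIT": "BOOLEAN",
    "DATETIME": "TIMESTAMP",
    "NVARCHAR": "VARCHAR",        # PostgreSQL VARCHAR is already Unicode-capable
    "UNIQUEIDENTIFIER": "UUID",
    "MONEY": "NUMERIC(19,4)",     # MONEY has 4 decimal places
    "IMAGE": "BYTEA",
    "TINYINT": "SMALLINT",        # PostgreSQL has no 1-byte integer type
}

def translate_type(mssql_type: str) -> str:
    """Translate a column type such as 'NVARCHAR(50)' to its PostgreSQL form."""
    m = re.fullmatch(r"(\w+)(\(.*\))?", mssql_type.strip().upper())
    base, args = m.group(1), m.group(2) or ""
    pg = TYPE_MAP.get(base, base)
    # Keep length/precision arguments unless the target type fixes its own.
    return pg if "(" in pg else pg + args

print(translate_type("NVARCHAR(50)"))      # VARCHAR(50)
print(translate_type("UNIQUEIDENTIFIER"))  # UUID
```

Semantics still need review case by case (e.g. BIT vs BOOLEAN null handling, DATETIME precision), so a mapping table covers syntax, not behaviour.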

  • Stored procedures:

    • how many stored procedures still need migration in openIMIS?

    • can stored procedures (those that are not business logic) be kept under version control on GitHub?

    • go incremental

    • start with a small one to get used to the migration

    • reports are important (21 stored procedures)

    • how to write unit tests (Python in stored procedures)?
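On the unit-testing question: once a stored procedure's logic is moved into application code, it becomes an ordinary function that standard Python tooling can test without a database. A minimal sketch, with a hypothetical `policy_is_active` function standing in for real openIMIS validity logic:

```python
import unittest
from datetime import date

def policy_is_active(start: date, expiry: date, today: date) -> bool:
    """Hypothetical replacement for validity logic that currently lives
    in a stored procedure: a pure function is trivial to unit-test."""
    return start <= today <= expiry

class PolicyIsActiveTest(unittest.TestCase):
    def test_within_period(self):
        self.assertTrue(policy_is_active(
            date(2019, 1, 1), date(2019, 12, 31), date(2019, 6, 1)))

    def test_after_expiry(self):
        self.assertFalse(policy_is_active(
            date(2019, 1, 1), date(2019, 12, 31), date(2020, 1, 1)))

# Run the test case programmatically and report the outcome
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PolicyIsActiveTest))
print(result.wasSuccessful())  # True
```

Logic that must stay in the database (e.g. PL/pgSQL or PL/Python) would instead need tests that run against a disposable test database, which is considerably heavier.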

  • In-line SQL scripts:

    • avoid in-line SQL; use the Django ORM instead

    • look at generic DB access (is it possible to have the current functional DB access work on both MS SQL Server and PostgreSQL?)

  • Data migration tool:

    • is a goal in itself, certainly if we need to cope with large data sets (streaming/pagination, …)

    • data types may require transformations (dates, large text fields)

    • … use an existing (open-source?) ETL tool?
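The streaming/pagination point can be sketched with the DB-API's `fetchmany`, which keeps memory bounded no matter how large the source table is. In the sketch below, two in-memory SQLite databases stand in for the source and target servers, and the table name is illustrative:

```python
import sqlite3

def copy_in_batches(src, dst, table, columns, batch_size=1000):
    """Stream rows from src to dst in fixed-size batches so the whole
    table never has to fit in memory. Returns the number of rows copied."""
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    cur = src.execute(f"SELECT {col_list} FROM {table}")
    copied = 0
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        dst.executemany(
            f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows)
        dst.commit()  # one transaction per batch, not per row
        copied += len(rows)
    return copied

# Demo: two in-memory databases standing in for MSSQL (src) and PostgreSQL (dst)
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for c in (src, dst):
    c.execute("CREATE TABLE tblItem (ItemID INTEGER, ItemName TEXT)")
src.executemany("INSERT INTO tblItem VALUES (?, ?)",
                [(i, f"item-{i}") for i in range(2500)])
print(copy_in_batches(src, dst, "tblItem", ["ItemID", "ItemName"]))  # 2500
```

A real tool would additionally apply per-column transformations (the date and large-text cases noted above) between fetch and insert; existing open-source ETL tools provide exactly this fetch–transform–load loop off the shelf.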

  • Reporting difficulties and attention points ('hacks' put in place, …) is more important than the result itself, as it would anticipate difficulties in migrating the related 'module' to the new architecture later on.
