2025-02-27 Developers Deep Dive Call: Performance

Overview

Date: 2025-02-27 - 11:00 am (UTC)

Room:

Objective: Performance improvements

Participants: (kindly only add your own names, not those of other participants)

Agenda:

When (UTC)    Duration    Who       Topic
14:00         5 min       GIZ       Updates & administrative things
14:05         90 min      Y-Note    Overview of performance tweak collection

Minutes

  • CM: 4 million insurees, but 50k services created

    • 2 SQL servers (60 cores): reporting & claiming

  • lack of a monitoring system for bottleneck analysis (on MS SQL Server)

    • try to monitor extended events (see the slow-query session sketch below, after this list)

    • there is a Docker image for a monitoring server that logs every GraphQL call

    • Cameroon planned a syslog system based on Elasticsearch

    • NP did log monitoring for dangling indexes (see the index-usage query below, after this list)

    • use Sentry in openIMIS:

      • managed via TracerMiddleware

      • the Sentry server needs to be set up separately with a monitoring link

  • caching / pre-caching after the update to release 2024-04

    • removed a lot of duplicated DB queries

  • splitting servers:

    • claims: production system

    • reporting: duplicated database

  • caching GraphQL

    • cache individual requests based on UID

 

Ermond says: at the database level, what I would check is:

  • Check via extended events which queries are causing the locks. After identifying the queries, you can check how to optimize them (missing indexes, heavy use of CTEs instead of temp tables, index scanning vs. index seeking, etc.); see the follow-up queries after the script below.

  • In-Memory OLTP databases / tables. They drastically improve transaction performance because they reside mostly in RAM (a table definition sketch follows below).

  • SQL monitoring script:

    CREATE EVENT SESSION [Deadlocks_Find] ON SERVER
    ADD EVENT sqlserver.database_xml_deadlock_report(
        ACTION(package0.callstack, sqlserver.context_info, sqlserver.database_name,
               sqlserver.sql_text, sqlserver.username)
        WHERE ([database_name] = N'OpenIMIS' OR [database_name] = N'DB_2_IfNeeded')),
    ADD EVENT sqlserver.xml_deadlock_report(
        ACTION(package0.callstack, sqlserver.context_info, sqlserver.database_name,
               sqlserver.sql_text, sqlserver.username))
    ADD TARGET package0.event_file(SET filename = N'Deadlocks_Find')
    WITH (MAX_MEMORY = 4096 KB,
          EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
          MAX_DISPATCH_LATENCY = 30 SECONDS,
          MAX_EVENT_SIZE = 0 KB,
          MEMORY_PARTITION_MODE = NONE,
          TRACK_CAUSALITY = OFF,
          STARTUP_STATE = ON)
    GO

Ermond says:

  • regarding the splitting of the DB servers, it's a good idea to split the reporting server from the operational one. In this case, it's also a good idea to create a High Availability Group (HAG, i.e. an Always On availability group). This feature allows automatic failover if the active database goes offline. In any case, you can use the non-active node for reports, because the HAG allows read-only routing (a configuration sketch follows below). Maybe this is what you have implemented?

Questions

  •  

Presentations / Attachments

 

No files shared here yet.

Additional Resources

Did you encounter a problem or do you have a suggestion?

Please contact our Service Desk



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/