OpenIMIS Monitoring

Stress testing beforehand is great, but one often has to investigate the source of issues on a live system. There are a few recommended approaches. This page is not exhaustive; feel free to add your own recommendations.

Logging

The first line of investigation is the regular logging from openIMIS. On systems running with DEBUG=True, it is enabled by default; on other systems, one needs to adapt the settings.

In openimis-be_py/openIMIS/openIMIS/settings.py, one will find:

LOGGING_LEVEL = os.getenv("DJANGO_LOG_LEVEL", "WARNING")
DEFAULT_LOGGING_HANDLER = os.getenv("DJANGO_LOG_HANDLER", "debug-log")
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"},
        "short": {"format": "%(name)s: %(message)s"},
    },
    "handlers": {
        "db-queries": {
            "level": LOGGING_LEVEL,
            "class": "logging.handlers.RotatingFileHandler",
            "filename": os.environ.get("DB_QUERIES_LOG_FILE", "db-queries.log"),
            "maxBytes": 1024 * 1024 * 5,  # 5 MB
            "backupCount": 10,
            "formatter": "standard",
        },
        "debug-log": {
            "level": LOGGING_LEVEL,
            "class": "logging.handlers.RotatingFileHandler",
            "filename": os.environ.get("DEBUG_LOG_FILE", "debug.log"),
            "maxBytes": 1024 * 1024 * 5,  # 5 MB
            "backupCount": 3,
            "formatter": "standard",
        },
        "console": {"class": "logging.StreamHandler", "formatter": "short"},
    },
    "loggers": {
        "": {
            "level": LOGGING_LEVEL,
            "handlers": [DEFAULT_LOGGING_HANDLER],
        },
        "django.db.backends": {
            "level": LOGGING_LEVEL,
            "propagate": False,
            "handlers": ["db-queries"],
        },
        "openIMIS": {
            "level": LOGGING_LEVEL,
            "handlers": [DEFAULT_LOGGING_HANDLER],
        },
    },
}

 

LOGGING_LEVEL will be WARNING for production environments and DEBUG for development ones.

There are two main logs:

  • openimis-be_py/openIMIS/db-queries.log, which will, if set to DEBUG, log all database queries with their timing. This is really ALL queries, not just those from the openIMIS code. It is configured on the django.db.backends logger.

  • openimis-be_py/openIMIS/debug.log, which will contain the openIMIS logs according to the configured level. Be careful: this only applies to loggers whose name starts with openimis. Modules simply logging on __name__ won't appear unless you add a corresponding section in the settings file.

In the code, the proper way to create loggers is:

import logging

logger = logging.getLogger("openimis." + __name__)

Some loggers don’t include the “openimis.” prefix yet and would need to be explicitly included in the logging config.
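For a module that logs on plain __name__, the fix is to add an explicit entry under "loggers". A minimal sketch, using a hypothetical "claim.services" module (a StreamHandler stands in for the RotatingFileHandler of the real settings):

```python
import logging
import logging.config

# Hedged sketch: "claim.services" is a hypothetical module name, not a
# confirmed openIMIS logger. Adding a "claim" entry routes it (and all of
# its children) to the debug-log handler.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"},
    },
    "handlers": {
        # StreamHandler for brevity; the real settings use RotatingFileHandler
        "debug-log": {"level": "DEBUG", "class": "logging.StreamHandler", "formatter": "standard"},
    },
    "loggers": {
        # New entry: catches "claim" and every child such as "claim.services"
        "claim": {"level": "DEBUG", "handlers": ["debug-log"]},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger("claim.services").debug("now routed to the debug-log handler")
```

Because logger names form a dot-separated hierarchy, one entry per top-level module is enough; the children inherit it.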

db-queries.log looks like this (where (0.004) means 4 milliseconds to execute the query):

2023-11-08 13:19:05,450 [DEBUG] django.db.backends: (0.004) SELECT "core_User"."id", "core_User"."i_user_id" FROM "core_User" WHERE "core_User"."username" = 'Admin' ORDER BY "core_User"."id" ASC LIMIT 1; args=('Admin',)
2023-11-08 13:19:05,452 [DEBUG] django.db.backends: (0.001) SELECT "tblUsers"."ValidityFrom", "tblUsers"."ValidityTo", "tblUsers"."LegacyID", "tblUsers"."UserID", "tblUsers"."UserUUID", "tblUsers"."LanguageID", "tblUsers"."LastName", "tblUsers"."OtherNames", "tblUsers"."Phone", "tblUsers"."LoginName", "tblUsers"."LastLogin", "tblUsers"."HFID", "tblUsers"."AuditUserID", "tblUsers"."EmailId", "tblUsers"."PrivateKey", "tblUsers"."StoredPassword", "tblUsers"."PasswordValidity", "tblUsers"."IsAssociated", "tblUsers"."RoleID" FROM "tblUsers" WHERE "tblUsers"."UserID" = 1 LIMIT 21; args=(1,)
2023-11-08 13:19:05,453 [DEBUG] django.db.backends: (0.000) SELECT "core_User"."id", "core_User"."username", "core_User"."t_user_id", "core_User"."i_user_id", "core_User"."officer_id", "core_User"."claim_admin_id" FROM "core_User" WHERE "core_User"."username" = 'Admin' LIMIT 21; args=('Admin',)
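When this log grows large, a short script helps surface the expensive queries. A minimal sketch, assuming the line format shown above (the slowest_queries helper is illustrative, not part of openIMIS):

```python
import re

# Each db-queries.log entry carries its execution time in seconds, e.g.
# "(0.004)". Extract (time, sql) pairs and return the slowest ones.
LINE_RE = re.compile(
    r"\[DEBUG\] django\.db\.backends: \((?P<secs>\d+\.\d+)\) (?P<sql>.*?); args="
)

def slowest_queries(lines, top=5):
    timed = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            timed.append((float(m.group("secs")), m.group("sql")))
    return sorted(timed, reverse=True)[:top]

# Usage: slowest_queries(open("db-queries.log"))
```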

debug.log contains the application log lines at or above the configured level, using the same standard format: timestamp [LEVEL] logger name: message.

Log analysis tools

There are plenty of log collection and analysis tools: ELK (Elasticsearch, Logstash, Kibana), Sentry, Splunk, NewRelic, HyperDX, … These tools can either collect logs with extended contextual information or add an instrumentation layer to analyze in depth the internals of the operations.

Sentry

Bluesquare is using Sentry and has integrated its advanced logging into the framework. (Note that this only works with the released version of graphene-django; v3 will break it but brings a lot of interesting new features.)

To use it, first install the sentry sdk (pip install sentry-sdk).

Then configure it in the settings file and set the SENTRY_DSN environment variable.
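A minimal sketch of such a configuration in settings.py, using the standard sentry-sdk Django integration (the exact options used by openIMIS may differ):

```python
# Sketch of a typical sentry-sdk setup in a Django settings.py
# (assumes `pip install sentry-sdk`); options shown are examples,
# not the confirmed openIMIS configuration.
import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

SENTRY_DSN = os.environ.get("SENTRY_DSN")
if SENTRY_DSN:
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        traces_sample_rate=0.2,  # fraction of transactions traced for performance data
        send_default_pii=True,   # attach user details to events
    )
```

Guarding on SENTRY_DSN keeps local development environments from reporting to Sentry when the variable is unset.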

This allows setting alarms with notifications on certain events and closely monitoring the time spent within the app.

Such an event shows the details about the user triggering it, browser and all, and contains a detailed timeline of the execution. This also makes crystal clear the n+1 situations where fetching details in a list results in a bunch of subqueries instead of running a single .prefetch_related() or equivalent.
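The n+1 pattern is easy to reproduce. A self-contained sketch using sqlite3 and a toy schema (illustrative only, not openIMIS code or its ORM), counting the queries issued by a per-row loop versus a single JOIN:

```python
import sqlite3

# Toy schema: insurees with claims (names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE insuree (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE claim (id INTEGER PRIMARY KEY, insuree_id INTEGER, amount REAL);
    INSERT INTO insuree VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO claim VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

queries = 0
def run(sql, args=()):
    global queries
    queries += 1  # count every round trip to the database
    return conn.execute(sql, args).fetchall()

# n+1: one query for the list, then one per row -> 4 queries for 3 insurees
queries = 0
for insuree_id, name in run("SELECT id, name FROM insuree"):
    run("SELECT amount FROM claim WHERE insuree_id = ?", (insuree_id,))
naive = queries

# batched: a single JOIN, which is what .prefetch_related()/select_related() achieve
queries = 0
run("SELECT i.name, c.amount FROM insuree i LEFT JOIN claim c ON c.insuree_id = i.id")
batched = queries

print(naive, batched)  # → 4 1
```

In db-queries.log the naive variant shows up as a burst of near-identical SELECTs differing only in their args, which is exactly the signature to look for.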

 

 

Did you encounter a problem or do you have a suggestion?

Please contact our Service Desk



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-sa/4.0/