How to configure Logging in Python

Python's built-in logging module requires a fair amount of boilerplate to configure log4j-like features such as a file appender and file rotation based on both time and size.

To get these features with a one-liner in your code, you can use the autopylogger package.

Here are the basics.

1. Install package

pip install autopylogger

2. Usage

# import the package
from autopylogger import init_logging

# Initialise the logging module
mylogger = init_logging(log_name="myfirstlogger", log_directory="logsdir")

That's it: the logging object has been initialized with file writing and rotation enabled.

You can write logs with the following calls.

# Write logs - DEBUG | INFO | WARNING | ERROR
mylogger.debug('This is a DEBUG log')
mylogger.info('This is an INFO log')
mylogger.warning('This is a WARNING log')
mylogger.error('This is an ERROR log')

Why should you use autopylogger?

  • File Appender - Enabled by default.

  • Log rotation - Enabled by default. Rotation can be based on both time and size at once, a combination the standard logging library's handlers do not support.

  • Critical log mailing feature - Send mail for critical logs by initializing the library with your SMTP server credentials. Comes in handy in PRODUCTION environments.
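If you want to see what those features involve with only the standard library, the sketch below uses `logging.handlers` for size-based rotation and critical-log mailing. All file names, sizes, and SMTP details here are illustrative placeholders, not autopylogger's actual internals:

```python
import logging
import logging.handlers
import os

# Illustrative stdlib equivalent; paths and SMTP details are placeholders.
os.makedirs("logsdir", exist_ok=True)

logger = logging.getLogger("myfirstlogger")
logger.setLevel(logging.DEBUG)

# File appender with size-based rotation: roll over near 1 MB, keep 5
# backups.  TimedRotatingFileHandler covers time-based rotation, but the
# stdlib has no single handler that rotates on both time AND size.
file_handler = logging.handlers.RotatingFileHandler(
    os.path.join("logsdir", "myfirstlogger.log"),
    maxBytes=1_000_000,
    backupCount=5,
)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logger.addHandler(file_handler)

# Mail CRITICAL logs via SMTP; nothing is sent until a CRITICAL record
# is actually emitted.
mail_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="app@example.com",
    toaddrs=["ops@example.com"],
    subject="CRITICAL log from myfirstlogger",
)
mail_handler.setLevel(logging.CRITICAL)
logger.addHandler(mail_handler)

logger.info("stdlib logger configured")
```

This is the boilerplate the package wraps; a combined time-and-size policy would require a custom handler subclass on top of this.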

For documentation, refer to the official GitHub page and the official PyPI page.


The docs provide a pretty good example of using your logger in multiple modules. Basically, you set up logging once at the start of your program, then import the logging module wherever you need it and use it.

myapp.py

import logging
import mylib

def main():
    logging.basicConfig(filename='myapp.log', level=logging.INFO)
    logging.info('Started')
    mylib.do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()

mylib.py

import logging

def do_something():
    logging.info('Doing something')

This example shows a very simplistic logger setup, but you could easily use the various configuration options logging provides to set up more advanced scenarios.
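For instance, a slightly richer basicConfig call adds timestamps and logger names. The format string here is just one possible choice, and force=True (Python 3.8+) replaces any handlers already attached to the root logger, so the call also works when logging was configured earlier:

```python
import logging

# Add a format string with timestamps, levels, and logger names.
logging.basicConfig(
    filename="myapp.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    force=True,  # replace any existing root handlers (Python 3.8+)
)

logging.info("Started with a custom format")
```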


In Python it looks pretty similar. There are different ways to do it; I usually create a simple logger class:

import os
import logging 
import settings   # alternatively: from wherever import settings

class Logger(object):

    def __init__(self, name):
        name = name.replace('.log','')
        logger = logging.getLogger('log_namespace.%s' % name)    # log_namespace can be replaced with your namespace 
        logger.setLevel(logging.DEBUG)
        if not logger.handlers:
            file_name = os.path.join(settings.LOGGING_DIR, '%s.log' % name)    # usually I keep the LOGGING_DIR defined in some global settings file
            handler = logging.FileHandler(file_name)
            formatter = logging.Formatter('%(asctime)s %(levelname)s:%(name)s %(message)s')
            handler.setFormatter(formatter)
            handler.setLevel(logging.DEBUG)
            logger.addHandler(handler)
        self._logger = logger

    def get(self):
        return self._logger

Then if I want to log something in a class or module I simply import the logger and create an instance. Passing the class name will create one file for each class. The logger can then log messages to its file via debug, info, error, etc.:

from module_where_logger_is_defined import Logger

class MyCustomClass(object):

    def __init__(self):
        self.logger = Logger(self.__class__.__name__).get()   # one log file per class, named after the class

    def do_something(self):
        ...
        self.logger.info('Hello')

    def raise_error(self):
        ...
        self.logger.error('some error message')

Updated answer

Over the years I changed how I use Python logging quite a bit. Mostly following good practices, I configure logging for the whole application once, in whichever module is loaded first during startup, and then use individual loggers in each file. Example:


# app.py (runs when application starts)

import logging
import logging.config   # dictConfig lives in the logging.config submodule
import os.path

def main():
    logging_config = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'standard': {
                'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
            },
        },
        'handlers': {
            'default_handler': {
                'class': 'logging.FileHandler',
                'level': 'DEBUG',
                'formatter': 'standard',
                'filename': os.path.join('logs', 'application.log'),
                'encoding': 'utf8'
            },
        },
        'loggers': {
            '': {
                'handlers': ['default_handler'],
                'level': 'DEBUG',
                'propagate': False
            }
        }
    }
    logging.config.dictConfig(logging_config)
    # start application ...

if __name__ == '__main__':
    main()

# submodule.py (any application module used later in the application)

import logging

# define top level module logger
logger = logging.getLogger(__name__)

def do_something():
    # application code ...
    logger.info('Something happened')
    # more code ...
    try:
        ...  # something which might break
    except SomeError:
        logger.exception('Something broke')
        # handle exception
    # more code ...

The above is the recommended way of doing this. Each module defines its own logger, and thanks to the __name__ attribute you can easily tell which module a given message was logged in when you inspect the logs. This removes the boilerplate from my original answer and uses the logging.config module from the Python standard library instead.
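Because the whole configuration lives in one dict, adding or swapping handlers is just a matter of editing that dict. As one variation, a console handler can sit alongside (or replace) the file handler; handler names like "console" below are arbitrary labels:

```python
import logging
import logging.config

# Same dictConfig structure as above, but routing records to stderr
# via a StreamHandler instead of a file.
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
        },
    },
    "loggers": {
        "": {   # root logger: every module logger propagates here
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
})

logger = logging.getLogger(__name__)
logger.info("console logging configured")
```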