Saturday 17 October 2009
I recently created a PEP to propose a new way of configuring logging, using dictionaries: PEP 391.
Thursday 24 September 2009
How to treat a logger like an output stream
Sometimes, you need to interface to a third-party API which expects a file-like object to write to, but you want to direct the API's output to a logger. You can do this using a LoggerWriter, which wraps a logger with a file-like API. Here's a short script illustrating the class:
import logging

class LoggerWriter:
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # print also writes the trailing newline as a separate call;
        # don't log that as an event of its own.
        if message != '\n':
            self.logger.log(self.level, message)

def main():
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("demo")
    info_fp = LoggerWriter(logger, logging.INFO)
    debug_fp = LoggerWriter(logger, logging.DEBUG)
    print >> info_fp, "An INFO message"
    print >> debug_fp, "A DEBUG message"

if __name__ == "__main__":
    main()
When run, the script prints:
INFO:demo:An INFO message
DEBUG:demo:A DEBUG message
Many thanks to Barry Warsaw for the suggestion.
Monday 14 September 2009
Python Logging 101
Introduction
The Merriam-Webster dictionary definition of the verb to log is:
To make a note or record of : enter details of or about in a log
At its most basic, a log is a list of events which may be of interest to someone in the future. In the context of software development, a log is a list of events of interest which occur during execution of an application program. (I use the word events as a synonym for happenings, rather than in the sense of mouse movements or keystrokes.) You can define an event quite simply, by asking the following questions:
- What happened?
- When did it happen?
- Where did it happen (in which area of the application)?
- How important is it?
The first three of these are objective, but the last is subjective. Lots of things happen during program execution, but only some of these are worth recording in a log. Generally, the developers of an application are best placed to decide which events are worth logging, and what their relative importance is in the scheme of things.
Typically, logs are read by developers (when tracking down bugs), system administrators (as part of routine monitoring of systems), support desk staff (when dealing with particular support issues), and even end users (when they see an error message). Events that are of interest to one audience may not be interesting to another. The audience might even be dispersed across different geographical locations. A good logging system caters for all of these audiences by allowing:
- Developers to easily record events - what happened, when, where and how important it is.
- Flexible dissemination of event information to wherever an interested reader (whether human or another program) can read it.
- Flexibility of configuration, so that recording of events can be made conditional on where they occurred and how important they are, without changing the application code.
Basic logging of errors to text files and system logs is an old technique, but not very flexible. In this post, I introduce a logging system for the Python programming language. This system, while it borrows ideas from other systems, is not a port of anything but an independent implementation for use by Python developers. The logging package has been part of the standard library since Python 2.3 (released in 2003).
I'll cover each of the above points - recording events, disseminating events and configuring the system.
Recording events
Recalling that an event was defined above in terms of four dimensions, the following table shows how these dimensions are provided to the logging API.
| Dimension | How it is specified |
| --- | --- |
| What happened | Passed in using a format string with optional arguments. |
| When it happened | Not passed in explicitly; the module takes the time of the logging call as the time of the event. |
| Where it happened | Specified in terms of a logging channel, or logger name (defined below). |
| How important it is | Specified in terms of an integer level (defined below). |
Loggers
A logging channel indicates an area of an application. How an area is defined is up to the developer. Since an application can have any number of areas, logging channels are identified by a unique string. Application areas can be nested (e.g. an area of input processing might include sub-areas read CSV files, read XLS files and read Gnumeric files). To cater for this nesting, channel names are organized into a namespace hierarchy where levels are separated by periods, much like Java or Python package namespaces. In the above instance, channel names might be "input" for the upper level, and "input.csv", "input.xls" and "input.gnu" for the sub-levels. Having determined the relevant application area, an application obtains a reference to a logger object, which is specific to a particular channel name.
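For example, here's a minimal sketch of obtaining loggers for the nested channels just described:
import logging

# Loggers are looked up (and created on first use) by channel name;
# the dots in the name express the parent/child hierarchy.
input_logger = logging.getLogger("input")
csv_logger = logging.getLogger("input.csv")    # a child of "input"
xls_logger = logging.getLogger("input.xls")    # a sibling of "input.csv"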
Levels
By default, there are five levels of importance associated with logging events. Experience has shown that having more levels is unhelpful, since the choice of which level to assign to an event becomes subjective. The five levels are DEBUG, INFO, WARNING, ERROR and CRITICAL. Their significance is described in the following table.
| Level | Significance |
| --- | --- |
| DEBUG | Detailed information, of no interest when everything is working well but invaluable when diagnosing problems. |
| INFO | Affirmations that things are working as expected, e.g. "service has started" or "indexing run complete". Often ignored. |
| WARNING | There may be a problem in the near future, and this gives advance warning of it. But the application is able to proceed normally. |
| ERROR | The application has been unable to proceed as expected, due to the problem being logged. |
| CRITICAL | This is a serious error, and some kind of application meltdown might be imminent. |
Armed with a logger and a choice of level, you can then log away:
csvLogger = logging.getLogger("input.csv")
csvLogger.debug("Trying to read file '%s'", filename)
csvLogger.warning("File '%s' contains no data", filename)
csvLogger.error("File '%s': unexpected end of file at line %d, offset %d", filename, lineno, offset)
csvLogger.critical("File '%s': too large, not enough memory, amount used = %d", filename, memused)
If you are handling an exception, you can provide for a stack trace to be inserted into the log, as indicated by the three equivalent statements in the following listing. When the logger sees the exc_info argument, it treats it as an indication that traceback information is desirable.
logger.error("Error reading file '%s' at offset %d", filename, offset, exc_info=1)
logger.log(logging.ERROR, "Error reading file '%s' at offset %d", filename, offset, exc_info=1)
logger.exception("Error reading file '%s' at offset %d", filename, offset)
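For context, here's a minimal, runnable sketch of such a call inside an exception handler (the raised IOError is just a stand-in for a real read failure):
import logging

logging.basicConfig()
logger = logging.getLogger("input.csv")
filename = "data.csv"

try:
    raise IOError("unexpected end of file")    # stand-in for a real error
except IOError:
    # Equivalent to the exc_info=1 forms above: logs at ERROR level
    # and appends the formatted traceback to the log entry.
    logger.exception("Error reading file '%s'", filename)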
Disseminating events
So far, so good. We've got hold of some loggers, and told them about various events at various levels of importance. What happens to those logged events? That's where handlers come in. Recall that a key component of a good logging system is getting the events to somewhere where interested parties can read them. In today's heterogeneous computing environments, where NT rubs shoulders with UNIX and multiple applications need to work together to deliver complex functionality, event log information often needs to be available on machines other than those where the events were generated.
Handlers allow flexible dissemination of logging events. To see just how flexible, the following table lists the handlers currently provided with Python logging.
| Handler | How it's used |
| --- | --- |
| StreamHandler | Used to write to an output stream, typically sys.stdout or sys.stderr. |
| FileHandler | Inherits from StreamHandler to allow writing to a disk file. |
| RotatingFileHandler | Used for logging to a set of files, switching from one file to the next when the current file reaches a certain size. |
| TimedRotatingFileHandler | Used for logging to a set of files, switching from one file to the next at specified times. |
| SocketHandler | Used to send the events, via a socket, to a remote server listening on a TCP port. |
| DatagramHandler | Similar to SocketHandler, except that UDP sockets are used. There's less overhead, but less reliability. |
| SMTPHandler | Used to send the events to designated e-mail addresses. |
| SysLogHandler | Used to send the events to a UNIX/Linux syslog. |
| NTEventLogHandler | Used to send the events to an NT event log. |
| HTTPHandler | Used to post the events to a Web server. |
| MemoryHandler | Used to buffer events in memory until a trigger is received, at which point the events are sent to another handler to deal with. |
| NullHandler | Used in library code to avoid misconfiguration messages when the using application doesn't configure logging. |
These classes inherit from the base classes Handler and BufferingHandler, depending on whether they deal with events one at a time or process events in batches.
Handlers are associated with loggers in a flexible way. A logger can have handlers associated with it directly by means of the addHandler method (with a corresponding removeHandler method for dissociation, if required). But not every logger needs to be associated with a handler. This is because whenever a logging event is passed to a logger, the logger and all of its parents are searched for handlers, and ALL these handlers are asked to handle the event. In order to get output from the logging system, all that is needed is that one or more handlers be associated with the root logger, and all loggers will automatically use those handlers.
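A minimal sketch of this: attach a single handler to the root logger, and every logger in the hierarchy will use it.
import logging

root = logging.getLogger()                   # no name: the root logger
root.addHandler(logging.StreamHandler())     # writes events to sys.stderr

# This logger has no handler of its own; the search up the hierarchy
# finds the root logger's handler, which handles the event.
logging.getLogger("input.csv").error("File '%s' not found", "data.csv")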
If no logger has any handlers, the system will indicate this to sys.stderr (as it's assumed to be a misconfiguration) and then keep quiet.
If it is desired for a particular logger that handler search does not propagate to its parents, then the propagate attribute of the logger can be set to 0. If this is done, the search for handlers will stop at that logger and not continue upwards through the hierarchy.
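For example, to keep one area's events out of the root logger's handlers ("audit.log" is a made-up filename for this sketch):
import logging

logging.basicConfig()                        # puts a handler on the root logger

audit = logging.getLogger("audit")
audit.addHandler(logging.FileHandler("audit.log"))
audit.propagate = 0

# Goes to audit.log only; the root logger's handler never sees it.
audit.error("Suspicious login attempt for user '%s'", "admin")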
A Handler usage scenario
Suppose you want some events to be brought to the attention of developers, others to operations staff, and yet others to the user. Then one way you could go about this is:
- Configure an SMTPHandler with a developer email address, configured to pass ERROR events or worse.
- Configure an SMTPHandler with a support desk email address, configured to pass CRITICAL events or worse.
- Configure a FileHandler to pass all DEBUG events.
- Configure your loggers however you want. For example, in production, you can set the topmost logger's level to WARNING, and you'll never see DEBUG messages anywhere. If you find there's a problem with a specific area of the application, you can set the logger for that area to send DEBUG events. Then you will see all events with levels >= DEBUG from that area, but only events with levels >= WARNING from the other areas.
SMTPHandlers are generally used to notify people about problems which need urgent handling. If there's no urgency, then it's appropriate just to configure FileHandlers. It's not uncommon to have one log file for errors only and another log file which also includes DEBUG and INFO events.
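Here's a sketch of the first and third items in the scenario above. The mail host and e-mail addresses are placeholders, not working values:
import logging, logging.handlers

root = logging.getLogger()
root.setLevel(logging.DEBUG)

# All DEBUG events (and above) go to a file.
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)
root.addHandler(file_handler)

# ERROR events (and worse) are also mailed to the developers.
mail_handler = logging.handlers.SMTPHandler("mail.example.com",
                                            "app@example.com",
                                            ["devs@example.com"],
                                            "Application error")
mail_handler.setLevel(logging.ERROR)
root.addHandler(mail_handler)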
Configuring the level of output
Handlers and loggers are all very well, but we need to control the verbosity of output generated by the logging system. When a developer writes an application, they should add logging calls for every conceivable event an audience may be interested in. After all, if they don't do this, those events would never be captured. During application testing, most of these events will probably be logged at some point. However, when the application is shipped, suppose that a bug is reported. No one wants to wade through reams of detail looking for the key pointers to the problem - so the verbosity has to be turned right down across the application. As developers home in on the problem, they would like to be able to turn the verbosity up and down selectively in particular areas, to see what is happening. They should be able to do this without changing the application code in any way.
The most basic form of verbosity control is provided by setting a threshold level on loggers, handlers or both. Both Logger and Handler have a setLevel method which takes a level and associates it with the logger or handler. When a logging call is made, if the logger's threshold is above the level of the call, no event is actually generated. For example, if a logger's threshold is set to ERROR, then debug(), info() and warning() calls on that logger do not generate any events, but error() and critical() do. Similarly, if a handler's threshold is set above the level of an event which is passed to it for handling, the handler ignores the event.
From a performance point of view, it is better to apply thresholds at the logger level than the handler level. However, there may be times when levels must be applied at the handler level to get the desired effect.
There is also an overall, high-level filter which can be applied to all loggers at one stroke. The module contains a function disable() which takes a level argument and acts as a threshold for all loggers. This setting is checked before the logger's own level setting.
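Putting both mechanisms together in a minimal sketch:
import logging

logging.basicConfig()
logger = logging.getLogger("demo")
logger.setLevel(logging.ERROR)

logger.warning("Suppressed: below this logger's ERROR threshold")
logger.error("Emitted: at or above the threshold")

# The module-level override is checked before the logger's own level:
logging.disable(logging.CRITICAL)
logger.critical("Suppressed too: disable() turns off CRITICAL and below")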
Filters
If simple level-based filtering of the kind described above is not enough, then you can instantiate Filter objects and associate them with both loggers and handlers. A filter object has a filter() method which is passed an event and which returns a value indicating whether the event is to be processed. Multiple filters can be associated with loggers and handlers, which both inherit from a base class Filterer having addFilter(), removeFilter() and filter() methods. The Filterer.filter() method calls the filter() method on all attached filters until all have seen the event, or until one of the filters rejects the event. If the event comes through unscathed, it is processed.
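A minimal sketch of a custom filter (the class name CsvOnly is made up for this example):
import logging

class CsvOnly(logging.Filter):
    # Pass only events from the "input.csv" channel.
    def filter(self, record):
        return record.name == "input.csv"

handler = logging.StreamHandler()
handler.addFilter(CsvOnly())
logging.getLogger().addHandler(handler)      # attach to the root logger

logging.getLogger("input.csv").error("Passes the filter")
logging.getLogger("input.xls").error("Rejected by the filter")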
LogRecords
In the discussion so far, we have talked in general terms about events. But how is an event represented? In this system, events are described by LogRecord instances. The LogRecord class has minimal functionality, acting as a repository for all the information of interest in an event; its single method getMessage() is intended to provide a hook for converting the message and arguments (passed to it at creation time) into a string representation of the event.
The main information passed in a logging call is the level, a message and arguments for use with that message. All of this information is held in the LogRecord. There is also additional information which the system generates automatically, and all of the information in a LogRecord can appear in the final output from the system. Here's a list of the information currently held in a LogRecord:
- Logger name (logging channel - e.g. "input.csv")
- Event level (e.g. DEBUG)
- Event level text (e.g. "DEBUG")
- Pathname of source file from where the logging call was issued
- Filename of source file from where the logging call was issued
- Line number in source file where the logging call was issued
- Name of function from which logging call was issued
- Time when the event (LogRecord) was created (as seconds after the epoch)
- Millisecond portion of the creation time
- Creation time relative to when the logging module was loaded (typically, application startup time)
- Thread id (available if threading is available on your platform)
- Process id
Message Objects
In the preceding discussion and examples, it has been assumed that the message passed when logging the event is a string. However, this is not the only possibility. You can pass an arbitrary object as a message, and its __str__() method will be called when needed to convert it to a string representation. In fact, if you want to, you can avoid computing a string representation altogether - for example, the SocketHandler emits an event by pickling it and sending it over the wire.
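Here's a minimal sketch of such a message object; DelayedMessage and expensive_summary are made-up names for illustration:
import logging

def expensive_summary():
    # Stand-in for a computation you'd rather skip for discarded events.
    return "1,000,000 records processed"

class DelayedMessage:
    def __init__(self, func):
        self.func = func
    def __str__(self):
        # Only called if the event actually gets formatted for output.
        return self.func()

logging.basicConfig(level=logging.INFO)
logging.getLogger("demo").info(DelayedMessage(expensive_summary))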
Event Processing
Let's assume that an event has been generated, and needs to be processed. What do I mean by processing? All that's left to do is to format the event appropriately for the target audience, and then output it to the relevant sink.
In almost all cases, this means formatting the event into some form of text string. The actual formatting is done at the last moment, to avoid unnecessary processing. Formatting is controlled through the use of Formatter objects, which are associated with handlers using the setFormatter() method.
Controlling Output Formats
Formatters control the formatting of an event into text. There are two base classes - Formatter (which works on single events) and BufferingFormatter (which works on a set of events, and includes the ability to add header and trailer text). Formatters use Python's powerful % operator (similar to C's sprintf) to format the output flexibly.
Formatters know how a LogRecord is laid out - they know the field names. If you specify a format string such as "%(asctime)s %(levelname)-5s %(message)s", this would output the time, the level of the logging event and the user's message (itself obtained by evaluating msg % args, where msg and args were specified by the user).
Formatters are initialized with a format string and an optional date/time format string. The latter is used as an argument to the standard library strftime function. Formatters try to avoid needless processing - for example, the creation time is formatted into text using the date format string only if the main format string contains "%(asctime)s". If exception information is required and available, it is formatted using the standard traceback module and appended to the formatted string. It's also cached for subsequent format operations.
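A minimal sketch tying a Formatter to a handler, using both a format string and a date/time format string:
import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)-8s %(name)s: %(message)s",
    "%Y-%m-%d %H:%M:%S"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.error("Something went %s", "wrong")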
Optimization
Formatting of message arguments is deferred until it cannot be avoided. However, computing the arguments passed to the logging method can also be expensive, and you may want to avoid doing it if the logger will just throw away your event. To decide what to do, you can call the isEnabledFor method which takes a level argument and returns true if the event would be created by the Logger for that level of call. You can write code like this:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("Message with %s, %s", expensive_func1(), expensive_func2())
so that if the logger's threshold is set above DEBUG, the calls to expensive_func1 and expensive_func2 are never made.
There are other optimizations which can be made for specific applications which need more precise control over what logging information is collected. Here's a list of things you can do to avoid processing during logging which you don't need:
| If... | Then... |
| --- | --- |
| You don't want information about where calls were made from. | Set logging._srcfile to None. |
| You don't want threading information in the log. | Set logging.logThreads to 0. |
| You don't want process information in the log. | Set logging.logProcesses to 0. |
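For instance (note that _srcfile is an internal attribute rather than public API, so this is a sketch of a documented trick):
import logging

logging._srcfile = None     # skip walking the stack for caller information
logging.logThreads = 0      # skip collecting thread information
logging.logProcesses = 0    # skip collecting the process id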
Also note that the core logging module only includes the basic handlers. If you don't import logging.handlers and logging.config, they stay out of your way.
Threading
The logging module doesn't use threads itself, but it ensures that it is thread-safe by using a reentrant lock (threading.RLock) to serialize access to internal data structures, as well as to I/O handlers. Each handler instance gets a reentrant lock, and each operation to emit an event is bracketed with a lock acquisition and release.
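For example, a minimal custom handler (ListHandler is a made-up name) can rely on that per-handler locking:
import logging

class ListHandler(logging.Handler):
    # Collects formatted records in a list.
    def __init__(self):
        logging.Handler.__init__(self)
        self.records = []

    def emit(self, record):
        # Handler.handle() brackets this emit() call with the handler's
        # lock acquisition and release, so the append is safe even when
        # several threads log concurrently.
        self.records.append(self.format(record))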
Convenience Functions
For casual users of the logging system, or Python novices, it may be too much trouble to create loggers in a namespace hierarchy. For such modes of use, the module defines module level functions debug(), info(), warning(), error() and critical() which delegate to the root logger. If no handler has been configured for this logger, then basicConfig() is called to attach a handler to the root logger. This means that the very simplest use of the logging module is as indicated:
logging.debug("Here's some %s information about %s", "debugging", "something")
logging.info("Here's some %s", "information")
logging.warning("This is your first %s", "warning")
logging.error("To %s is human", "err")
logging.critical("The situation is getting %s", "critical")
logging.exception("Please add a %s to this message", "stack traceback")
What's with the config stuff?
The configuration system in logging.config is a very basic implementation of a one-shot configuration. It's not likely to be of much help in sophisticated usage, but it provides some value for novice and/or casual users. If you don't want to use it, don't - the entire logging system can be configured programmatically, and if you want, you can define your own configuration file format, load it using your own code, and make logging API calls to actually configure the loggers, handlers etc. The implementation uses ConfigParser (in order to avoid having yet another configuration meta-format), and so the format is more verbose than it could be.
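A minimal sketch of loading such a file ("logging.conf" is a placeholder path; the file must exist and follow the ConfigParser-based format):
import logging
import logging.config

logging.config.fileConfig("logging.conf")    # one-shot configuration
logging.getLogger("input.csv").info("Configured from file")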