Monday 27 September 2010

Unit testing and logging

I’ve had questions in the past about using logging and unit testing together – what’s the best way of doing it?

Beyond the choice of unit testing frameworks (e.g. unittest, nose, py.test), unit testing appears to be one of those things (like configuration) where people have strong, and differing, opinions about how they like to do it. This is one reason there is no specific support for unit testing in logging. Besides, logging is one area where, functionally, your application should work in exactly the same way no matter how logging is configured, and even if logging is completely switched off (because there are no handlers configured, or the verbosity thresholds are set to be above CRITICAL). I am of course distinguishing here between application logging and request logging as practiced by web servers – the latter is part of the function of the web server, whereas application logging is meant as an operational and diagnostic aid for developers, system administrators and support teams, and is orthogonal to the application’s functionality. There are of course scenarios where logging is used to output messages to end users (typically, INFO or WARNING level messages), but it should be borne in mind that the application should keep going even if logging verbosity is turned off.

There are two things that would be useful when using logging and unit testing together:

  • Some way of capturing all the messages which are logged by the code being unit tested.
  • Some way of verifying that certain expectations with respect to logged messages are met.

Logging already provides a BufferingHandler class which allows you to capture LogRecords generated by logging activity. You can, for example, subclass this to store the LogRecord.__dict__ values rather than the LogRecords themselves – this will facilitate checking whether expectations are met. You typically don’t want to flush anything until the end of the test, though, so a handler which facilitates testing might look like this:

from logging.handlers import BufferingHandler

class TestHandler(BufferingHandler):
    def __init__(self):
        # BufferingHandler takes a "capacity" argument
        # so as to know when to flush. As we're overriding
        # shouldFlush anyway, we can set a capacity of zero.
        # You can call flush() manually to clear out the
        # buffer.
        BufferingHandler.__init__(self, 0)

    def shouldFlush(self):
        return False

    def emit(self, record):
        # Calling format() ensures the formatted message appears in
        # the record's __dict__; we save the dict, not the record.
        self.format(record)
        self.buffer.append(record.__dict__)
Now let’s consider the checking of expectations about what’s been logged. We could implement this in the TestHandler class directly, but it’s the sort of area where different people may want to do different things. The bare minimum we need in TestHandler would be something that looks for some kind of match between what’s been logged (the buffer of dictionaries) and the expected values. So as an idea, let’s delegate the details of matching to a separate Matcher class, which must have a matches method. Because matching of dictionaries is likely to crop up in tests other than to do with logging, creating a separate Matcher class allows us to deploy the functionality in other scenarios. So, we can develop the TestHandler class a little:

from logging.handlers import BufferingHandler

class TestHandler(BufferingHandler):
    def __init__(self, matcher):
        # BufferingHandler takes a "capacity" argument
        # so as to know when to flush. As we're overriding
        # shouldFlush anyway, we can set a capacity of zero.
        # You can call flush() manually to clear out the
        # buffer.
        BufferingHandler.__init__(self, 0)
        self.matcher = matcher

    def shouldFlush(self):
        return False

    def emit(self, record):
        self.format(record)
        self.buffer.append(record.__dict__)

    def matches(self, **kwargs):
        """
        Look for a saved dict whose keys/values match the supplied arguments.
        """
        result = False
        for d in self.buffer:
            if self.matcher.matches(d, **kwargs):
                result = True
                break
        return result

The signature for the matches method allows us to pass just the keys we want to test for in the call. In implementing the Matcher class, we could do whatever we wanted – regular expression matching, for example – but we’ll keep it simple. Let’s assume that we want to either match values exactly, or else do partial matches for string values such as messages. (In order to ensure that a formatted message appears in a LogRecord’s __dict__, we need to call self.format(record) in the emit method.)
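As a minimal illustration of that last point, here is a sketch (stdlib only; the logger name and message are just examples) showing that a LogRecord only gains a message entry in its __dict__ once a formatter has processed it:

```python
import logging

# Construct a LogRecord by hand; the formatted 'message' attribute is
# only set once the record has been passed through a Formatter.
record = logging.LogRecord('demo', logging.WARNING, __file__, 42,
                           'Hello, %s!', ('world',), None)
print('message' in record.__dict__)   # the attribute isn't there yet

logging.Formatter().format(record)    # merges msg and args into record.message
print(record.__dict__['message'])
```

This is why the emit method above calls self.format(record) before saving record.__dict__.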

To decide which keys to do partial matches on, we can store a set of the relevant keys in a _partial_matches attribute. This is defined in the Matcher class, but you can replace it in a subclass or even an instance of Matcher if you need to. Here’s the Matcher class:

class Matcher(object):

    _partial_matches = ('msg', 'message')

    def matches(self, d, **kwargs):
        """
        Try to match a single dict with the supplied arguments.

        Keys whose values are strings and which are in self._partial_matches
        will be checked for partial (i.e. substring) matches. You can extend
        this scheme to (for example) do regular expression matching, etc.
        """
        result = True
        for k in kwargs:
            v = kwargs[k]
            dv = d.get(k)
            if not self.match_value(k, dv, v):
                result = False
                break
        return result

    def match_value(self, k, dv, v):
        """
        Try to match a single stored value (dv) with a supplied value (v).
        """
        if type(v) != type(dv):
            result = False
        elif type(dv) is not str or k not in self._partial_matches:
            result = (v == dv)
        else:
            result = dv.find(v) >= 0
        return result

Notice that the class is independent of logging and might prove useful elsewhere. Also note the use of str in the match_value method – you may want to replace this with basestring for Python 2.x. (Of course, I could have used isinstance, too.)
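For instance, the same class can check expectations about plain dicts that have nothing to do with logging. Here is a sketch (with a condensed copy of Matcher repeated inline so the snippet is self-contained; the dict contents are invented for illustration):

```python
class Matcher(object):
    # Condensed copy of the Matcher class shown above.
    _partial_matches = ('msg', 'message')

    def matches(self, d, **kwargs):
        return all(self.match_value(k, d.get(k), v) for k, v in kwargs.items())

    def match_value(self, k, dv, v):
        if type(v) != type(dv):
            return False
        if type(dv) is not str or k not in self._partial_matches:
            return v == dv
        return dv.find(v) >= 0

m = Matcher()
d = {'status': 404, 'msg': 'Resource not found'}
print(m.matches(d, status=404))        # exact match on a non-string key
print(m.matches(d, msg='not found'))   # partial (substring) match
print(m.matches(d, msg='timed out'))   # no match
```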

Now we can consider what a typical test case using TestHandler and Matcher might look like (some imports omitted):

import unittest

class LoggingTest(unittest.TestCase):
    def setUp(self):
        self.handler = h = TestHandler(Matcher())
        self.logger = l = logging.getLogger()
        l.addHandler(h)

    def tearDown(self):
        self.logger.removeHandler(self.handler)
        self.handler.close()

    def test_simple(self):
        "Simple test of logging test harness."
        # Just as a demo, let's log some messages.
        # Only one should show up in the log.
        self.logger.debug("This won't show up.")
        self.logger.info("Neither will this.")
        self.logger.warning("But this will.")
        h = self.handler
        self.assertTrue(h.matches(levelno=logging.WARNING))
        self.assertFalse(h.matches(levelno=logging.DEBUG))
        self.assertFalse(h.matches(levelno=logging.INFO))

    def test_partial(self):
        "Test of partial matching in logging test harness."
        # Just as a demo, let's log some messages.
        # Only one should show up in the log.
        self.logger.debug("This won't show up.")
        self.logger.info("Neither will this.")
        self.logger.warning("But this will.")
        h = self.handler
        self.assertTrue(h.matches(msg="ut th")) # from "But this will"
        self.assertTrue(h.matches(message="ut th")) # from "But this will"
        self.assertFalse(h.matches(message="either will"))

    def test_multiple(self):
        "Test of matching multiple values in logging test harness."
        # Just as a demo, let's log some messages.
        # Two should show up in the log.
        self.logger.debug("This won't show up.")
        self.logger.info("Neither will this.")
        self.logger.warning("But this will.")
        self.logger.error("And so will this.")
        h = self.handler
        self.assertTrue(h.matches(levelno=logging.WARNING))
        self.assertTrue(h.matches(levelno=logging.ERROR))
        self.assertFalse(h.matches(levelno=logging.INFO))

if __name__ == '__main__':
    unittest.main()

The whole script is here, and if you run it, you should have no errors:

vinay@eta-jaunty:~/projects/scratch$ python2.7
...
----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK
vinay@eta-jaunty:~/projects/scratch$ python3.2
...
----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK


Is there anything I’ve missed? Anything you don’t like about these suggestions? Please feel free to comment. Thanks for reading.

Sunday 26 September 2010

"Too long; didn’t read"

Someone asked: what’s with the tl;dr at the start of every post? My first reaction was surprise that they’d even read any of these posts. After all, logging is a prosaic, even mundane subject – well beneath the radar for many developers. There’s definitely more fun to be had thinking about NoSQL databases, or high availability, or templating systems, or web application frameworks, or (your favourite topic here).

But people do arrive here via search engines, not knowing what they’ll find. Even when people actually come here looking for information about logging, there’s no reason to suppose that every post is of (equal) interest to them.

In these days of ever-shortening attention spans, many people are only interested in tweet-sized chunks, but unfortunately there are some things I can’t cover in 140 chars or less. A reasonable treatment of some topics does require some descent into detail; and, while developers are generally masters of detail, they are often picky about which details they want to be bothered with. You’ll also see that I’m somewhat inexperienced at writing blog posts – though I do hope to get better over time, my writing style might seem more than a little prolix to some.

So it seems like simple courtesy to advertise at the outset what a post-with-detail is about, so as to avoid wasting your time if the topic is of no interest to you. It’s just like the abstract you get at the head of academic papers. While it might seem whimsical to call it a tl; dr, it definitely seems pretentious to call it an abstract. To be fair, I don’t put it at the start of every post :-)

Friday 24 September 2010

Integrating logging with diverse notification services

tl;dr: Nowadays, there is a growing number of ways in which people prefer to receive notifications about happenings of interest. While Python logging cannot provide, out-of-the-box, handlers that allow you to send notifications to all of these different systems, the basic functionality provided in the stdlib makes it fairly easy for developers to support the notification mechanisms that are preferred by their audiences. This post will give some examples of how this can be economically achieved.

Logging is basically about getting information about things that happen in your software (whether application or library) out to an interested audience. The diversity of ways in which that audience wants to receive information is growing all the time: from the earliest days of being limited to email and pagers, people have progressed to using a plethora of social networks, IM and mobile phone platforms, different desktop notification systems - and the list keeps growing. Even if the number of basic types of notification method grows slowly, the number of individual instances of those types can grow faster. For example, if you just look at a subset of desktop notification methods, you have Growl (OS X, Windows), Snarl (Windows), libnotify and mumbles (Linux). The Wikipedia page for Instant messaging lists (at the time of writing) over 20 different IM systems, some of which are only used in particular parts of the world. Likewise, for social networks one might think first of Facebook, Bebo or Twitter, but there are many others which are very popular, but in different parts of the world (for example, Orkut in South and Central America and parts of Asia).

How can you, as a developer who uses logging in your library or application, take account of your users’ preferences when getting information out to them? Of course, the large majority of logging messages will be sent to console or file. However, there will be certain messages in the ERROR or CRITICAL category which need to be sent to people so that urgent action can be taken. These have generally been sent by email, but there are circumstances when you might want to use alternative mechanisms (or even multiple mechanisms) to get information out so that it can be received as soon as possible and (hopefully) acted on in a timely fashion.

Clearly, it’s not practical for the logging package in the stdlib to provide native support for the myriad specialized ways of sending notifications which exist today. Logging currently provides 14 handlers (not including base classes), just about all of which relate to basic infrastructure communication or storage mechanisms:

  • NullHandler: used by libraries to avoid misconfiguration messages
  • StreamHandler: used to write to streams
  • FileHandler: used to write to disk files
  • RotatingFileHandler: used to write to size-based rotating log files
  • TimedRotatingFileHandler: used to write to time-based rotating log files
  • WatchedFileHandler: a FileHandler which supports external log file rotation mechanisms
  • SocketHandler: used to write to TCP sockets
  • DatagramHandler: used to write to UDP sockets
  • SysLogHandler: used to write to a syslog daemon via either UDP or Unix domain sockets
  • SMTPHandler: used to send emails
  • NTEventLogHandler: used to write to Windows NT event logs
  • HTTPHandler: used to send to arbitrary web sites
  • MemoryHandler: used to buffer up events and process them in batches
  • QueueHandler (3.2): used to send to in-process or multiprocessing queues

As you can see, all of these are very generic and relate to storage mechanisms and protocols which are unlikely to go out of fashion any time soon.

However, you don’t need to worry unduly if you want to send notifications using some of the newer facilities available today. Of course you can use something like HTTPHandler (subclassing it if necessary) to send information to any web site, but you may not see any support for e.g. desktop notification systems. How can logging provide this?
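As an aside, here is a sketch of that subclassing approach. HTTPHandler turns a record into a dict of form variables via its mapLogRecord method, which you can override to send just the fields a (hypothetical) web service expects; the host and URL below are placeholders, not a real endpoint:

```python
import logging
import logging.handlers

class WebServiceHandler(logging.handlers.HTTPHandler):
    # Hypothetical subclass for an assumed JSON-ish logging endpoint.
    def mapLogRecord(self, record):
        # Send only the fields our assumed service cares about,
        # instead of the whole record.__dict__.
        return {'level': record.levelname,
                'logger': record.name,
                'message': record.getMessage()}

h = WebServiceHandler('example.com', '/log', method='POST')
rec = logging.LogRecord('app', logging.ERROR, __file__, 1,
                        'something %s', ('failed',), None)
print(h.mapLogRecord(rec))
```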

It doesn’t make much sense for me to put in “native” support for multiple notification protocols into the stdlib, for a number of reasons:

  • The methods, protocols and sites which are popular today may not be popular tomorrow. Any support added for them in the stdlib would have to live on for a very long time, even if their popularity waned.
  • The methods, protocols and sites may keep changing how they work, leading to an increased maintenance burden.
  • Python is used globally, and people would (understandably) expect to have support for popular systems in their neck of the woods – which may not be easy if, for example, all the documentation about the protocols used is inadequate or in a foreign language I’m not familiar with.
  • Native support of external protocols may require additional dependencies, which is not an appropriate burden for Python to carry, or use third-party modules which give rise to warnings or errors (for example, some Python libraries for Growl use the deprecated md5 module, which can give rise to deprecation warnings – this requires code in the libraries or applications using them to turn those warnings off).
  • Even if native support is provided for various systems, there are many options available when using such systems. The API for stdlib facilities to support such flexibility in options may be unwieldy, and even if that isn’t the case, it will certainly evolve over time, likely requiring you to subclass and change things, or fork and modify if the original classes weren’t reusable via subclassing. So, any idea of a long-lived “out-of-the-box” solution which requires no work from you is possibly an impossible dream :-(

Of course, developers could write “native” handlers for systems and upload them to PyPI (or post them elsewhere), and indeed people have done this in the past. That may be an approach that works for you; but anyone can upload their offerings to PyPI, which leads to what has been called the selection problem.

Nevertheless, some people might be thinking that some support for these newer notification systems would be nice to have with stdlib logging. What’s a good way of achieving this? We should take inspiration from the Unix philosophy:

Write programs that do one thing and do it well. Write programs to work together.

To apply this philosophy to the problem at hand, we can take advantage of the fact that many of these notification systems have command line interfaces! This is true at least on Unix and Unix-like systems, meaning primarily Linux and OS X. Here are some examples:

Before Twitter changed their authentication system to disallow Basic authentication, you could just use curl like this: curl -u username:password -d status="Hello, world!" Now that Twitter has switched to using OAuth, things are not quite so easy, but they can be after a small amount of one-time set-up.

One-time setup

We’ll use Mike Verdone’s Python Twitter Tools. It’s as easy as doing easy_install twitter and waiting for the installation to complete. Then, just type twitter in the shell. You’ll see something like this:

vinay@zeta-lucid:~/tools$ twitter
Hi there! We're gonna get you all set up to use the Command-Line Tool.

In the web browser window that opens please choose to Allow access. Copy the PIN number that appears on the next page and paste or type it here:

Please enter the PIN: 

A browser window opens on the Twitter website, and you enter your username and password into the presented form and click “Allow” to submit. A numeric PIN code is displayed in the response page, which you type or paste into the shell at the above prompt. You’re then told:

That's it! Your authorization keys have been written to /home/vinay/.twitter_oauth.

and that’s the end of the one-time setup. You can type twitter --help to find the command options available via the tool; they’ll fit most people’s needs.

Regular usage

Once you’ve done the above setup, you can just use a command line like twitter set hello to update your status on Twitter.

  • To use libnotify from the command line, use the notify-send program, which is part of libnotify. (On Ubuntu, you should be able to do sudo apt-get install libnotify-bin to install it.) The man page will tell you all you need to know, but a simple usage would be notify-send -i icon_path title message. You can set the urgency, time limit etc. through command-line parameters.
  • You can use the growlnotify program which is part of Growl. This has numerous options, but a simple usage would be growlnotify -n appname -m message -I iconpath. You can set priority, stickiness etc. using various command-line parameters.
  • If you use Mumbles, you can use the mumbles-send script which comes with it, for example: mumbles-send title message
  • If you use Snarl, you can use the Snarl_CMD.exe program, for example: Snarl_CMD nShowMessage TIME TITLE BODY [iconPATH]
Other Web-based notification services (for example, ticketing systems)

For Web-based services in general, you may be able to use logging.handlers.HTTPHandler either directly or via subclassing, but if you want to look at a command-line based solution which is similar to the ones above, you can use the excellent curl command-line tool:

curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunnelling and a busload of other useful tricks.

You might be thinking that it seems a bit clunky in this day and age to be using the command-line in this way, and you may be right to think so. But it represents a straightforward, practical approach to integrating logging with many popular notification services in a way which doesn’t load the stdlib with unnecessary baggage. Plus, it may well involve less work for you: for example, you might be able to use a single command-line handler class, configured with various different command lines for different notification methods. Less code is good! And while in the past there was less than an ideal level of support under Windows for command-line tools, perhaps the development of PowerShell shows that the situation is changing for the better.

Perhaps you’re concerned about the overhead of running external command-line programs in subprocesses and the delays this might cause; in that case, you might want to look at this earlier post which explains how you can use QueueHandler and QueueListener classes to delegate the heavy lifting to separate threads or processes, leaving your Web application threads (and other performance-critical threads) as responsive as possible.

One possible realization of a command-line handler might be:

import subprocess

class CommandLineHandler(object):
    """
    A class which executes parametrized command lines in a separate process.
    """
    def __init__(self, cmdline):
        """
        Initialize an instance with a command-line template.

        The template consists of a set of strings, some of which may contain
        variable content merged in from a LogRecord. The LogRecord merge will
        be done before executing as a command.
        """
        self.cmdline = cmdline

    def handle(self, record):
        """
        Handle a record.

        This just merges its data into the command-line template and
        executes the resulting command.
        """
        cmdline = [c % record.__dict__ for c in self.cmdline]
        self.execute(cmdline)

    def execute(self, cmdline):
        """
        Execute the specified command.
        """
        p = subprocess.Popen(cmdline)
        p.wait()

Recall from the earlier post that QueueListener can be given (as a handler) any object which has a handle method: so a CommandLineHandler instance could be used as a handler.

Of course, you should be very careful with any facility which executes potentially arbitrary commands on your server, making sure that command-line configuration is covered by security diligence. An example command line as passed to the constructor might be ['notify-send', '%(appname)s - %(name)s', '%(message)s'], where appname is a context value inserted in the LogRecord (e.g. by a Filter, see this documentation).
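Here is a sketch of how such a context value might be injected; the appname attribute and the AppNameFilter class are illustrative, not part of the stdlib:

```python
import logging

class AppNameFilter(logging.Filter):
    """Illustrative filter which stamps each record with an 'appname'."""
    def __init__(self, appname):
        logging.Filter.__init__(self)
        self.appname = appname

    def filter(self, record):
        record.appname = self.appname
        return True

f = AppNameFilter('myapp')
record = logging.LogRecord('demo', logging.WARNING, __file__, 1,
                           'Disk space low', None, None)
f.filter(record)
# Records arriving via QueueHandler.prepare() already have 'message' set;
# we set it by hand here only to keep the example self-contained.
record.message = record.getMessage()
cmdline = [c % record.__dict__
           for c in ['notify-send', '%(appname)s - %(name)s', '%(message)s']]
print(cmdline)
```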

I’d be grateful to get any feedback about the ideas expressed in this post. If you got this far, thanks for reading :-)

Wednesday 22 September 2010

Improved QueueHandler, QueueListener: dealing with handlers that block

tl; dr: Sometimes you have to get your logging handlers to do their work without blocking the thread you’re logging from. This post discusses one way in which you can do that, and presents a new QueueListener class as well as some improvements in the QueueHandler class discussed in earlier posts. A working script is provided so you can play with these ideas yourself.

Sometimes, you’ll work with code that runs in threads which must do their processing quickly. This is common in Web applications, though of course it also occurs in other scenarios. You may well need to log messages from those threads; however, if you are a library developer, you typically won’t have control over which handlers are configured by the running application. What happens if some of the configured handlers are ones that potentially will block for longish periods of time, causing delays which hold the threads up longer than is acceptable?

The most common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer’s control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).

One answer which pushes itself to the fore is to arrange things so that the time-consuming operation happens on a separate thread. This has been suggested to me once or twice, and suggestions have been made to perhaps provide threaded versions of the existing Handler classes or incorporate threading into some of the existing classes. That didn’t sound like a great idea to me, and for one reason or another I haven’t been able to address it until now.

I recently posted about an addition to Python 3.2 of a QueueHandler class, the addition of which came about as a result of Mike Bayer of SQLAlchemy fame pointing me to a Stack Overflow answer he had given, to a question about using logging with multiprocessing.

Mike’s solution was a specialized handler for use with multiprocessing which delegated to a RotatingFileHandler, which worked for his use case but was not general enough to put in the stdlib. So I came up with QueueHandler, which works not only with multiprocessing queues but also in-process thread-safe queues as implemented in the Queue module (renamed to queue in more recent Pythons). QueueHandler is also easy to subclass, for example to send logging events to a ZeroMQ socket, as described in this post.

QueueHandler also forms the basis of dealing with handlers that block. Before we look at one way to solve that problem, I should mention that the QueueHandler implementation which will be in 3.2 got a slight improvement: a new prepare method was added and the emit method was modified to use it. The docstrings give the reason for this minor refactoring:

    def prepare(self, record):
        """
        Prepares a record for queuing. The object returned by this
        method is enqueued.

        The base implementation formats the record to merge the message
        and arguments, and removes unpickleable items from the record
        in-place.

        You might want to override this method if you want to convert
        the record to a dict or JSON string, or send a modified copy
        of the record while leaving the original intact.
        """
        # The format operation gets traceback text into record.exc_text
        # (if there's exception data), and also puts the message into
        # record.message. We can then use this to replace the original
        # msg + args, as these might be unpickleable. We also zap the
        # exc_info attribute, as it's no longer needed and, if not None,
        # will typically not be pickleable.
        self.format(record)
        record.msg = record.message
        record.args = None
        record.exc_info = None
        return record

    def emit(self, record):
        """
        Emit a record.

        Writes the LogRecord to the queue, preparing it first.
        """
        try:
            self.enqueue(self.prepare(record))
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)
The whole class is to be found here.

Moving on to how to solve the problem of blocking handlers, the answer is in part to attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in their code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.
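As an example of that precaution, here is a sketch (assuming Python 3.2+, where QueueHandler lives in logging.handlers; on earlier versions, paste in the class from the earlier post) of a handler variant which drops events rather than blocking when its queue fills up; whether dropping is acceptable is of course an application decision:

```python
import logging
import queue
from logging.handlers import QueueHandler

class DroppingQueueHandler(QueueHandler):
    """Illustrative variant: never block, drop events when the queue is full."""
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # losing a log event is preferable to stalling the thread

q = queue.Queue(maxsize=1)
logger = logging.getLogger('busy')
logger.propagate = False          # keep the demo's output self-contained
logger.addHandler(DroppingQueueHandler(q))
logger.warning('first')           # accepted
logger.warning('second')          # silently dropped: the queue is already full
print(q.qsize())
```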

The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler.  A QueueListener is very simple: it’s passed a queue and a handler, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). It should be relatively easy to subclass QueueListener to listen using other types of queue – for example, ZeroMQ sockets – though we won’t cover that in this post (it’ll be using a similar approach as was described here for our QueueHandler subclass ZeroMQSocketHandler). Here’s QueueListener:

import threading
try:
    import Queue as queue
except ImportError:
    import queue

class QueueListener(object):

    _sentinel = None

    def __init__(self, queue, handler):
        """
        Initialise an instance with the specified queue and handler.
        """
        self.queue = queue
        self.handler = handler
        self._stop = threading.Event()
        self._thread = None

    def dequeue(self, block):
        """
        Dequeue a record and return it, optionally blocking.

        The base implementation uses get. You may want to override this method
        if you want to use timeouts or work with custom queue implementations.
        """
        return self.queue.get(block)

    def start(self):
        """
        Start the listener.

        This starts up a background thread to monitor the queue for
        LogRecords to process.
        """
        self._thread = t = threading.Thread(target=self._monitor)
        t.setDaemon(True)
        t.start()

    def _monitor(self):
        """
        Monitor the queue for records, and ask the handler
        to deal with them.

        This method runs on a separate, internal thread.
        The thread will terminate if it sees a sentinel object in the queue.
        """
        q = self.queue
        has_task_done = hasattr(q, 'task_done')
        while not self._stop.isSet():
            try:
                record = self.dequeue(True)
                if record is self._sentinel:
                    break
                self.handler.handle(record)
                if has_task_done:
                    q.task_done()
            except queue.Empty:
                pass
        # There might still be records in the queue.
        while True:
            try:
                record = self.dequeue(False)
                if record is self._sentinel:
                    break
                self.handler.handle(record)
                if has_task_done:
                    q.task_done()
            except queue.Empty:
                break

    def stop(self):
        """
        Stop the listener.

        This asks the thread to terminate, and then waits for it to do so.
        """
        self._stop.set()
        self.queue.put_nowait(self._sentinel)
        self._thread.join()
        self._thread = None

Since the queue should only ever get LogRecords put into it, it seems reasonable to use None as a sentinel to terminate the thread. Of course, you can set a different sentinel if you wish.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.
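A sketch of that sharing arrangement, using the 3.2 stdlib names (on earlier Pythons, substitute the classes shown above); the logger names are invented, and the output is captured in a StringIO just to keep the example self-contained:

```python
import io
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

q = queue.Queue(-1)
stream = io.StringIO()

# Two QueueHandlers, attached to different loggers, feed the same queue...
for name in ('app.db', 'app.web'):
    logger = logging.getLogger(name)
    logger.addHandler(QueueHandler(q))
    logger.propagate = False

# ...and a single listener (hence a single thread) services both.
listener = QueueListener(q, logging.StreamHandler(stream))
listener.start()
logging.getLogger('app.db').warning('db says hello')
logging.getLogger('app.web').warning('web says hello')
listener.stop()   # drains remaining records before the thread exits
print(stream.getvalue())
```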

Here’s a simple snippet showing how to use QueueHandler and QueueListener together:

def main():
    q = queue.Queue(-1)
    qh = QueueHandler(q)
    h = logging.StreamHandler()
    ql = QueueListener(q, h)
    root = logging.getLogger()
    root.addHandler(qh)
    f = logging.Formatter('%(threadName)s: %(message)s')
    h.setFormatter(f)
    ql.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    ql.stop()

if __name__ == '__main__':
    main()
This should be self-explanatory, but of course please feel free to post a comment if you need clarification of anything.

The whole script is here, and if you run it you should see something like the following:

MainThread: Look out!

Notice that QueueListener is not even especially logging-specific: you can pass it, as a handler, any object that has a handle method which takes a single argument, and that method will be passed any non-sentinel object which appears on the queue.

You should be able to paste QueueHandler and QueueListener into your own code, as you may well be using Python versions earlier than 3.2 :-)

The plan is to add QueueListener to logging.handlers, so before 3.2 enters beta, I’d be grateful for any comments or suggestions you have about any of this stuff.

Sunday 19 September 2010

Python Logging Functionality - Facts vs. Myths

tl; dr: Some inaccuracies and misunderstandings about how stdlib logging works have been expressed in the documentation and marketing presentations of a suggested alternative. On closer examination, certain statements which imply a lack of functionality or other shortcomings in stdlib logging have been shown to be inaccurate. You should feel confident that in using the stdlib logging package you are very unlikely to find it wanting, and that, if it seems too hard or not possible to achieve some result that you want to achieve, you can raise the issue on comp.lang.python and be assured of prompt responses and resolutions. Now, you can read on if you want more details :-).

Recently Armin Ronacher, developer of Werkzeug and Jinja among other things, released a library for logging which, in his opinion, is preferable to the logging package provided as part of Python since the 2.3 release. Now, preferences are a very personal thing, and since Armin has made worthy software contributions to the Python community, there will no doubt be many people who trust his judgement and follow where he leads. In general, choice in software is good, and people should be free to make up their own minds about the software they want to use. However, in order to make an informed decision, people need accurate information on which to base that decision. For example, I chose to use argparse over the stdlib's optparse even before argparse landed in the stdlib; in making that choice, I looked at Steven Bethard's rationale as described here and was fortunate to have the time to be able to examine and evaluate each of his points for myself.

In the case of choosing whether to use Python logging or something else, people may or may not have the time to investigate in any depth the relative merits of the alternatives on offer. Armin has seemingly made it easier for busy people by including, in his library's documentation, reasons why you should use his offering in preference to the stdlib's logging package, and what he perceives to be problems with the code in the stdlib. This is what Steven did for argparse, but in the case of logging Armin has made a number of statements which are inaccurate or misleading. No doubt he has not meant to deliberately mislead anyone, but some of the things he has said have led me to conclude that perhaps Armin doesn't really understand some of the aspects of how stdlib logging works, and how you can use it in Web applications, desktop applications, casual utility scripts, and of course libraries which support all of these application development constituencies. And if Armin - a clearly talented developer - is making some egregious mistakes in his thinking about how stdlib logging works and what you can or can't do with it, then it's possible that the lack of understanding which he demonstrates is more widespread; as maintainer of the stdlib logging package, I feel there's a need to do something to rectify this. Hence, this post.

Isn't that what the stdlib documentation is for? Of course, at least partly, and while the stdlib logging documentation undergoes continual additions and revisions (which hopefully can be viewed as continuous improvement), it is a difficult medium in which to address certain points. Of necessity, the stdlib documentation for most modules is presented largely in a reference style, with only limited examples which might serve as tutorials. And even for sections where much effort has been expended in presenting using a tutorial style, there is always room for expositions which do not fit easily in the stdlib documentation: for example, a "cookbook" style presentation which tries to show how to solve common and less common problems which arise in considering how best to use a package. And then, who reads documentation, anyway? ;-)

Of course no software is perfect, nor can any design or implementation hope to please all of its users all of the time. I have always viewed stdlib logging as capable of being improved, not because there was anything especially wrong with it to start with, but rather that through the philosophy of continuous improvement you have the best chance of providing something which remains relevant and useful to as many users as possible. That, it seems to me, is a core Python philosophy, enshrined in the procedures, standards and practices of the PEP process.

You might be thinking that this post is a bit of an over-reaction on my part. Perhaps, but as well as the concern I expressed about there being a lack of understanding out there about how logging works, I have another concern. This is based on a strong view that for certain types of functionality, too much choice is not a good thing. For example, imagine if there were several competing implementations of regular expressions in Python, or implementations of pickling functionality, in common use. Some library developers would use one particular version, others a different one. So an application developer using several such libraries would not only have to pull in additional dependencies but also spend some time and thought on how to get the different libraries to work together. And so it is with logging, in my view; it's an infrastructure function and a lot of libraries out there already use it. Of course if it fell short of meeting developer requirements, then it should not be immune from being supplanted by a better alternative (as with optparse and argparse). However, I believe that stdlib logging still remains fit for purpose, and am willing to stand up and say so when the criticisms made of it are without merit or overstated.

So, let's review some of the statements in Armin's documentation and presentations which merit closer examination. It's a long list, hence the tl;dr at the top of this post, but there are even more points that I could have made if not for the lack of time :-(

Each entry below takes one of Armin's statements and examines it under four headings: the Statement itself, what it implies (the "Myth"), how it really is (the "Fact"), and a Conclusion.
In the opening paragraph in the "Why you should use Logbook" section
Statement: Despite the existence of stdlib logging, you should use Logbook.
Myth: Logbook is better than stdlib logging now, and is ready to be used.
Fact: As stated in the same paragraph, Logbook is in the alpha stage of development, should be treated as a developer preview, and only supports Python 2.6 and 2.7.
Conclusion: Logbook is at present suitable for evaluation use only.
In the "Advantages over Logging" section
Statement: If properly configured, Logbook’s logging calls will be very cheap and provide a great performance improvement over the standard library’s logging module. While we are not there yet, there will be some performance improvements in the upcoming versions when we implement certain critical code paths in C.
Myth: stdlib logging is slow.
Fact: You can evaluate the test scripts yourself. In their output, you need to look at the "log_simple" line on each for a meaningful comparison. It's not a scientific benchmark, but on my machine, with these scripts, the Logbook code runs around 30% slower than the stdlib code.
Conclusion: Logbook is at present a fair bit slower than stdlib logging, so it is somewhat premature to boast about "great performance improvements".
While it's possible for Logbook to improve performance by writing performance-critical code in C, it's equally possible for the stdlib code to improve using the same approach. (Note, however, that building C extensions for third-party libraries from source under Windows can be a pain.)
In any case, according to the numbers shown by these test scripts, logging calls take of the order of tens of microseconds - not a real concern in most scenarios.
If you are having performance problems which you think are due to logging overhead, the recommendation is to profile and see where the bottlenecks really are. If they turn out to be in logging, please post your findings on comp.lang.python.
Statement: Logbook also supports the ability to inject additional information for all logging calls happening in a specific thread. For example, this makes it possible for a web application to add request-specific information to each log record such as remote address, request URL, HTTP method and more.
Myth: stdlib logging can't do this.
Fact: I have already explained to Armin how to do this - see this post.
Conclusion: stdlib logging CAN do this fairly easily.
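For the record, here is one way of doing it with stdlib logging: a Filter which copies values from a threading.local onto each record, so that every logging call in a given thread picks up that thread's request information. The names (the "webapp" logger, the list-collecting handler) are purely illustrative.

```python
import logging
import threading

_request_ctx = threading.local()  # per-thread store, set by the framework

class RequestContextFilter(logging.Filter):
    """Stamp per-thread request info onto every record passing through."""
    def filter(self, record):
        record.remote_addr = getattr(_request_ctx, "remote_addr", "-")
        record.url = getattr(_request_ctx, "url", "-")
        return True

class CapturingHandler(logging.Handler):
    """Keeps formatted output in a list (a StreamHandler would do in real use)."""
    def __init__(self):
        super().__init__()
        self.output = []
    def emit(self, record):
        self.output.append(self.format(record))

logger = logging.getLogger("webapp")
logger.setLevel(logging.INFO)
handler = CapturingHandler()
handler.setFormatter(logging.Formatter("%(remote_addr)s %(url)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(RequestContextFilter())

# In a request-handling thread, the framework would set these per request:
_request_ctx.remote_addr = "127.0.0.1"
_request_ctx.url = "/index"
logger.info("handling request")
```

The formatted output for that call is "127.0.0.1 /index handling request": the request-specific fields appear in every record logged from that thread, with no changes to the logging calls themselves.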
Statement: The Logbook system is (besides the stack) stateless and unit testing it is very simple. If context managers are used, it is impossible to corrupt the stack, so each test can easily hook in custom log handlers.
Myth: There's something inherently good about a shared stack, but having a shared dict is somehow worse. Unit testing with stdlib logging is not simple. Unit tests with stdlib logging can't hook in custom handlers.
Fact: There's no convincing argument that the type of shared state (execution context) offered by Logbook is better than how stdlib logging works; you just have to take it on trust. Unit testing with stdlib logging is not hard, since you can add/remove handlers to/from loggers, close handlers in tearDown which you opened in setUp, and disable loggers you don't need. Unit tests with stdlib logging CAN hook in any handlers you want; in fact, Python's own regression tests use unittest to test the logging package itself.
Conclusion: From the evidence presented, there's no proof of any actual functional advantage of Logbook over logging. You just have to take it on trust.
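The setUp/tearDown pattern just mentioned can be sketched in a few lines. The handler, logger name and test are illustrative, not a prescribed recipe:

```python
import logging
import unittest

class ListHandler(logging.Handler):
    """Stores records so tests can make assertions about them."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record)

class WidgetTest(unittest.TestCase):
    def setUp(self):
        # Hook a custom handler onto the logger the code under test uses.
        self.handler = ListHandler()
        self.logger = logging.getLogger("widget")
        self.logger.setLevel(logging.DEBUG)
        self.logger.addHandler(self.handler)

    def tearDown(self):
        # Undo in tearDown what setUp did.
        self.logger.removeHandler(self.handler)
        self.handler.close()

    def test_logs_a_warning(self):
        self.logger.warning("frobnication failed")
        messages = [r.getMessage() for r in self.handler.records]
        self.assertIn("frobnication failed", messages)
```

Run with `unittest.main()` as usual; each test sees only the records logged while it was running.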
In the "Design Principles" section
Statement: Logbook [...] supports paradigms we think are more suitable for modern applications than the traditional Java inspired logging system that can also be found in the Python standard library and many more programming languages.
Myth: Having a design that was inspired by a Java library is somehow bad, just because of Java. The fact that it appears in many more programming languages damns it even further.
Fact: Although the Java log4j system was an influence on stdlib logging, that is because it had some good abstract ideas, well worth using. Though we have a lot of very clever people in the Python community, not every clever person is a Pythonista; it's very parochial to think that good ideas can only come from the Python community. The ideas that emerged in log4j and other packages are the ideas of "what happened? where did it happen? how important is it? who wants to know?" and, if you think about it, these ideas are hardly Java-specific. In fact, they are pretty central to the problem domain addressed by logging. So - "what happened?" is the details of the logging call, "where did it happen?" is the logger name, "how important is it?" is the level, and "who wants to know?" is the handler. Hardly that complicated, and pretty much a minimum requirement for any logging package that aspires to the name.
Also, anyone who bothers to look at log4j in detail will see that Python logging is not a mindless translation - it's very Pythonic. Beyond the basic abstractions of "What? Where? How Important? Who Needs To Know?", there's no real correspondence between the Java artefacts and the Python ones. Using David A. Wheeler's SLOCCount, log4j 1.2.15 comprises 168 source files and around 16K SLOC; Python 2.6 logging comprises 3 source files and under 1.5K SLOC. Almost every class in stdlib logging maps to a core concept of logging, apart from the handler classes, which are geared towards specific types of audience for logging messages.
Conclusion: There's no indication that a Java influence has been detrimental to stdlib logging. All of the core concepts from stdlib logging are also present in Logbook: loggers ("where?"), levels ("how important?"), handlers ("who wants to know?") and the details passed to logging calls ("what happened?"). So if stdlib logging "suffers" from Java influences, then so, equally, does Logbook.
It's also worth bearing in mind that Python logging works on Python 2.3 through 2.7 and 3.0 through 3.2, without depending on features present only in the more recent versions of Python. And while a case can be made for better stdlib support for certain programming styles, such as the with statement, these are by no means essential to the basic business of logging; in any case, support for new features can always be added as and when it's needed.
Statement: Logbook is unique in that it has the concept of logging channels but that it does not keep a global registry of them. In the standard library’s logging module a logger is attached to a tree of loggers that are stored in the logging module itself as global state.
Myth: The shared state embodied in Logbook's process or thread context is somehow good, but the shared tree of loggers stored as "global" state in stdlib logging is somehow bad.
Fact: One reason for the shared state in stdlib logging is that application developers often need explicit access to loggers used by libraries, in order to control the verbosity of logging from those libraries in the context of the application being developed. This verbosity varies over the life of an application - for example, when a bug is discovered in an application in production, it's common to selectively turn the verbosity of different library loggers up and down while diagnosing the problem. You can't do that when the logger used by a library is hidden completely from a user of that library.
This is a fundamental point which may not bite developers working in limited scenarios, but I've had reason to rely on this functionality time and again. For example, I sometimes need to turn up the verbosity of SQLAlchemy's logging so that I can examine the SQL statements being generated from the ORM. If I couldn't access SQLAlchemy's top-level logger (using the name "sqlalchemy", and the fact that, because of the "global" registry, the SQLAlchemy code and my code both refer to the same object when they call logging.getLogger('sqlalchemy')), I couldn't turn this verbosity up and down as and when needed.
Strictly speaking, the tree of loggers in stdlib logging does not need to be global. As a look at logging's source code will show, the tree of loggers is held as an instance variable of an instance of a Manager class. A binding to this Manager instance is currently stored in the Logger class, and it is through this binding that the tree of loggers can be said to be global. This state of affairs has existed since the first release of logging in the stdlib.
It would be well within the bounds of possibility to either expand the Manager class to be a context manager, or derive a context manager class from it. If this were done, then the tree of loggers need not be global: it would just be an attribute of a Manager instance, and there could be more than one Manager (or derived context manager) instance in a Python process. Such a context manager may well appear in the future, but at present a strong enough case has not been made for it. If such a case is made (for example, by someone producing a PEP or proto-PEP with the justification), then providing the feature can be considered; it would not be a major undertaking (but not a trivial one, either).
Conclusion: There is nothing wrong with logging's design just because it uses a shared registry of loggers. In fact, the shared registry confers some positive, practical benefits in real-life use cases.
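To make the SQLAlchemy scenario concrete: because of the shared registry, the application gets hold of the very same logger object the library created, with no cooperation from the library needed.

```python
import logging

# Somewhere inside the library (SQLAlchemy, in this example):
lib_logger = logging.getLogger("sqlalchemy")

# In the application: the registry hands back the SAME object.
app_logger = logging.getLogger("sqlalchemy")
assert app_logger is lib_logger

app_logger.setLevel(logging.DEBUG)    # turn SQL echoing up while diagnosing...
app_logger.setLevel(logging.WARNING)  # ...and back down when done
```

This is exactly the kind of runtime tuning that becomes impossible if a library's loggers are opaque objects held privately by the library.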
Statement: In logbook a logger is just an opaque object [...] the lifetime and availability of that object is controlled by the person creating that logger. The registry is necessary for the logging library to give the user the ability to configure these loggers.
Myth: It's better for a library developer to keep their loggers completely encapsulated in the library, so that application developers using that library have no access to them.
Fact: The need to access the loggers in libraries comes up again and again in practice, particularly in production environments and long-running service processes, where you can't just fire up a debugger when a problem arises to see what's causing it. With stdlib logging, features exist to turn the verbosity of library logging up and down, for different libraries at different times, without needing to stop the long-running processes.
Conclusion: The stated advantage of Logbook over stdlib logging seems illusory. The logger registry actually solves practical problems in real-world environments where multiple libraries from multiple vendors are integrated into an application. It's not clear from Logbook's documentation whether you can tune the verbosity of logging in a library component which uses Logbook for its logging functionality.
Statement: Logbook has a completely different concept of dispatching from loggers to the actual handlers which removes the requirement and usefulness of such a registry. The advantage of the logbook system is that it’s a cheap operation to create a logger and that a logger can easily be garbage collected to remove all traces of it.
Myth: There's no utility in a shared registry between different components of an application. Creating a logger in stdlib logging is expensive. It's important to be able to remove all traces of a logger that you created.
Fact: The preceding discussions show why a shared registry is useful. There's no example in the Logbook documentation of how you might (as an application developer) tune the logging verbosity of a component you use; it's not clear that this is even possible in Logbook.
Creating loggers in the stdlib is not particularly expensive, and creation is a one-off operation; amortised over many uses, it should actually be cheaper in the stdlib.
It's true that loggers can't be garbage collected to remove all traces of them, but it's not clear why that really matters, as the memory used by loggers is not significant in normal usage.
Conclusion: stdlib logging is designed the way it is for good reasons, not because its designer lacks imagination. The claimed disadvantages of stdlib logging compared to Logbook are not proven, just stated as if they were.
Statement: Logbook moves the burden of delivering a log record from the log channel’s attached log to an independent entity that looks at the context of the execution to figure out where to deliver it.
Myth: The loggers in stdlib logging suffer from some kind of burden when delivering log records.
Fact: In both stdlib logging and Logbook, records are delivered using handlers. Loggers give an application developer fine control over the logging verbosity of their application, so that they are in control rather than library developers, who, of necessity, cannot anticipate every circumstance in which their libraries are used and deployed.
Conclusion: The context of execution is insufficient to determine how records generated in libraries should be delivered. In practice, you need a combination of the execution context and the wishes of the application developer regarding how library logging is to be configured.
In the "Context sensitive handler stack" section
Statement: Let’s take a GUI application [which] might fail or log messages. The typical default behaviour here would be to log into a logfile. Fair enough, that’s how these applications work. But what’s the point in logging if not even a single warning happened? The traditional solution with the logging library from Python is to set the level high (like ERROR or WARNING) and log into a file. When things break, you have a look at the file and hope it contains enough information. When you are in full control [...] with a stack based system like Logbook has, there is a lot more you can do. For example you could immediately after your application boots up instanciate a FingersCrossedHandler. This handler buffers all log records in memory and does not emit them at all. What’s the point? That handler activates when a certain threshold is reached. For example, when the first warning occurs you can write the buffered messages as well as the warning that just happened into a logfile and continue logging from that point. Because there is no point in logging when you will never look at that file anyways.
Myth: This kind of functionality is only possible with a context-sensitive handler stack, which only Logbook has. You can't do this with stdlib logging.
Fact: You can do exactly this with stdlib logging: from the earliest release there's been a MemoryHandler class, which buffers records in memory until a message at or above a specified threshold level is seen, at which point all the buffered messages are forwarded to a target handler for processing.
Conclusion: The supposed advantage of a context-sensitive handler stack in this scenario isn't any particular advantage at all.
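Here's the MemoryHandler approach in miniature. A list-collecting handler stands in for the FileHandler you'd use in a real application, and the logger name is illustrative:

```python
import logging
from logging.handlers import MemoryHandler

flushed = []

class ListHandler(logging.Handler):
    def emit(self, record):
        flushed.append(record.getMessage())

target = ListHandler()  # in a real application, a FileHandler
# Buffer up to 100 records; deliver them all when a WARNING (or worse) arrives.
memory = MemoryHandler(capacity=100, flushLevel=logging.WARNING, target=target)

logger = logging.getLogger("gui")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep this example self-contained
logger.addHandler(memory)

logger.debug("starting up")         # buffered, nothing delivered yet
logger.debug("loading config")      # still buffered
logger.warning("disk almost full")  # triggers delivery of everything so far
```

After the warning, `flushed` contains all three messages, in order - exactly the "nothing in the logfile unless something went wrong" behaviour described above.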
Statement: But that alone is not the killer feature of a stack. In a GUI application there is the point where we are still initializing the windowing system. So a file is the best place to log messages. But once we have the GUI initialized, it would be very helpful to show error messages to a user in a console window or a dialog. So what we can do is to initialize at that point a new handler that logs into a dialog. When then a long running tasks in the GUI starts we can move that into a separate thread and intercept all the log calls for that thread into a separate window until the task succeeded.
Myth: This is somehow a killer feature of a stack, which you can't get any other way.
Fact: You can do this kind of processing perfectly easily with stdlib logging. There are examples of dialog- or window-based handlers (for Qt, say) in answers to questions on Stack Overflow. You can combine the provided stdlib handlers, such as MemoryHandler and FileHandler, with your own window-specific handlers (depending on which windowing system you are using) to achieve the sort of effect described.
Conclusion: Another supposed advantage of a context-sensitive handler stack, which isn't.
In Armin's Logbook presentation
In the "Why not logging?" slide
Statement: stdlib logging is not really suited for Web applications.
Myth: Don't even bother trying to use stdlib logging with Web applications; it's not worth it.
Fact: It's perfectly possible to use stdlib logging with Web applications, as I showed Armin when I explained how to meet his requirements in this post.
Conclusion: This statement is just plain wrong, and illustrates a lack of understanding of how to use the functionality of stdlib logging, rather than any lack of functionality in stdlib logging.
Statement: Logging's central registry of loggers makes unittesting a pain.
Myth: It's too much trouble to use stdlib logging in your unit tests.
Fact: It doesn't need to be a pain. A future post on this blog will describe scenarios showing how to use logging in your unit tests.
It's probably possible to provide additional utility functions in stdlib logging to make some aspects of unit testing easier, but there are no real showstoppers; if Armin describes his problem scenario in as much detail as he described his Web application configuration problem, I will (hopefully) be able to suggest some ways of easing his pain.
Conclusion: Unless specific details are given as to why unit testing with stdlib logging is considered a pain (i.e. what you'd like to do but can't), this statement must be taken with a pinch of salt.
Statement: The same registry also causes issues when libraries want to start logging.
Myth: The registry is central to the issues caused by libraries doing logging.
Fact: There are documented things that library authors need to do when setting up logging for libraries. If library developers do not follow the recommendations then this might indeed cause problems, but that has nothing to do with the existence of a central registry (or at least, the connection has not been demonstrated).
Conclusion: There's no evidence that the existence of a central registry is somehow responsible for how third-party libraries misuse logging.
Statement: You can't delete loggers once created.
Myth: It's important that you be able to delete loggers once created.
Fact: The reason why loggers are not deleted once created is that multiple threads (other than the one that wants to delete the logger) can hold references to it. This is facilitated by having a central registry, but even in a system without a central registry, loggers could be passed between threads; in that scenario, you can't guarantee to delete a logger, since there could be references to it in another thread.
It's not demonstrated that it's important to expunge loggers completely from memory. The number of logger instances is related to the number of logical "areas" in an application - the idea of "where" in a component a particular event occurred. This is not so large that the amount of memory used by loggers becomes significant. And stdlib logging allows loggers to be disabled, which means that they cease to perform any logging (until re-enabled) and are as good as gone.
Conclusion: It's true that you can't delete loggers, but they can easily be disabled; they do not, under normal usage patterns, occupy large amounts of memory.
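Disabling a logger is a one-attribute change. The logger name here is invented for illustration:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

noisy = logging.getLogger("chatty.library")
noisy.setLevel(logging.DEBUG)
noisy.addHandler(ListHandler())

noisy.info("you will see this")
noisy.disabled = True          # as good as deleting it: no output at all
noisy.info("but not this")
noisy.disabled = False         # ...and trivially re-enabled when needed again
noisy.info("and this again")
```

Only the first and third messages are captured; while disabled, the logger does no work beyond the cheap flag check.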
In the "Why does nobody like logging?" slide
Statement: Why does nobody like logging?
Myth: Nobody likes logging.
Fact: This is an opinion, not a fact. Many people have expressed their thanks to me for the existence of the logging package, and have given feedback saying that it has been very useful to them.
Even Armin has told me that he liked logging "a lot", which is hard to reconcile with what he's now saying - since logging's basic design has been the same from the outset, and it seems to meet his functional needs even if it doesn't press his aesthetic hot buttons.
Conclusion: There's no basis in fact for this statement. It's marketing FUD.
Statement: The default configuration of stdlib logging for applications is bad.
Myth: The statement speaks for itself.
Fact: On the contrary, the default configuration for applications is consistent with the principle of least surprise and the Zen of Python's "explicit is better than implicit". In common with the Unix philosophy, software should not be excessively verbose except when it needs to be, or is asked to be. By default, stdlib logging will only output messages of severity WARNING or higher. (It's easy to change this default.)
In fact, in the default configuration of Logbook 0.2, every DEBUG logging call in every library which uses Logbook in your application will print debug messages to stderr when your application runs, even if you specify that you want to see only messages of severity WARNING or greater. Perhaps you won't care if your application is a GUI or Web application, since you won't have a console, and perhaps Logbook is expressly designed only for these types of application; but for a console application, it's very likely you will get Too Much Information. Here is an example.
Conclusion: The default configuration of stdlib logging is quite sane; there's nothing wrong with it.
Statement: Useless default configuration for libraries.
Myth: The statement speaks for itself.
Fact: Library developers are supposed to add a NullHandler to their top-level logger, and to set the level and propagation flag as needed, but not to add any other handlers to their logging configuration, because that configuration is the prerogative of the application developer. However, it's not possible to prevent library authors from breaking these rules.
Conclusion: By default, libraries aren't supposed to handle messages - only to log them - and the application developer is expected to configure any logging (though they don't have to; if they don't, no logging output should be generated). This is consistent with the principle of least surprise and the Zen of Python point mentioned earlier.
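The recommended library-side setup amounts to a couple of lines. The library name "mylib" is invented for illustration; from Python 2.7/3.1, NullHandler ships with logging itself, so the class definition here is only needed on older versions:

```python
import logging

class NullHandler(logging.Handler):
    """A do-nothing handler (bundled with logging from Python 2.7/3.1)."""
    def emit(self, record):
        pass

# The ONLY logging setup a library should perform, in its top-level module:
logging.getLogger("mylib").addHandler(NullHandler())

# Library code elsewhere just logs. If the application configures nothing,
# there is no output and no "no handlers could be found" complaint:
logging.getLogger("mylib.core").debug("internal detail")
```

Everything else - handlers, formats, destinations - is left to the application developer.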
Statement: Who sets up the logging config? Libraries sometimes call basicConfig.
Myth: If library developers don't follow the guidelines set down for logging from libraries, that's somehow the fault of stdlib logging.
Fact: There is documentation about how to configure logging in libraries, and while it may be lacking in some way, it's not immediately clear what that lack is. Perhaps more or better documentation is needed, but that won't completely eliminate the possibility that a library developer will misguidedly add handlers to the logging configuration, say by calling basicConfig().
Statement: logging.warn() and friends are not thread-safe.
Myth: Stay away from stdlib logging if you want to use threads.
Fact: If logging is correctly configured in an application, there will be no untoward behaviour from stdlib logging. In versions of Python earlier than 2.7.1 and 3.2, the code in basicConfig() omitted acquiring and releasing an internal lock, and you could demonstrate incorrect behaviour by calling basicConfig() directly or indirectly from multiple threads; this has been fixed in 2.7.1 and 3.2. Nevertheless, it is wrong to do any logging from multiple threads before configuring handlers (if you want any logging output, that is), because if no handlers have been configured before threads start running, any messages they log could be lost - there are no handlers to deliver them.
So, good practice means this problem shouldn't occur in practice; and even if it does, the worst consequence is that handlers are added multiple times, resulting in multiple messages being output for the same event.
Conclusion: It's disingenuous to say that these functions are not thread-safe. Although the statement is technically correct, the unexpected behaviour only occurs if logging is not configured before multiple threads are started, and more than one of those threads calls basicConfig(), either directly or via logging.warn() and friends.
Even then, the behaviour occurs only rarely, due to thread timing uncertainties.
In any case, the incorrect behaviour will not occur on Python versions >= 2.7.1 or >= 3.2, because the requisite lock acquisition/release code has now been added.