Comments on Plumber Jack: "Using logging with multiprocessing"

Anonymous, 2012-04-12:
It works on Python 2.7 / Windows 7. Thank you!

Steven Klass, 2010-10-25:
Hey Vinay, yup, I figured that out - thanks.

Vinay Sajip, 2010-10-23:
@Steven: The "processName" attribute in the LogRecord doesn't work in Pythons earlier than 2.6.2; if your Apple-provided version is older than this, then you won't have "processName" - but you should still be able to use "process" to get you the pid of the process where the logging call occurred.
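Vinay's suggestion can be sketched concretely. This is a minimal illustration of our own (the names here are not from the thread): a format string using %(process)d, the pid, works on any Python, while %(processName)s only exists from 2.6.2 onward and raises the KeyError Steven reported on older interpreters.

```python
import logging

# Sketch of Vinay's suggestion: %(process)d (the pid) is available on
# all Python versions, while %(processName)s only exists from 2.6.2 on
# and raises KeyError: 'processName' on older interpreters.
fmt = logging.Formatter("%(process)d %(name)s %(levelname)s %(message)s")

record = logging.LogRecord(
    name="worker", level=logging.INFO, pathname="worker.py", lineno=1,
    msg="job %d done", args=(42,), exc_info=None,
)
print(fmt.format(record))   # e.g. "12345 worker INFO job 42 done"
```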
Steven Klass, 2010-10-22:
Hi Vinay - I've started to play around with this and the code you provide throws an exception. FWIW this is Python 2.6 (as provided by Apple) on SL.

Worker started: Process-11
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/handlers.py", line 74, in emit
    if self.shouldRollover(record):
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/handlers.py", line 145, in shouldRollover
    msg = "%s\n" % self.format(record)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 637, in format
    return fmt.format(record)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 428, in format
    s = self._fmt % record.__dict__
KeyError: 'processName'

One for each worker. Thanks.

Steven Klass, 2010-10-22:
[This comment has been removed by the author.]

Jesse, 2010-09-07:
Yes; it can have issues on platforms where shared semaphores aren't included - good reminder. As for the queue-based approach, you're right, it's a good idea; I wasn't disagreeing there :)

Vinay Sajip, 2010-09-07:
@Jesse: I do realise that locks are available, but I thought they were contingent on sem_open() availability (a non-zero HAVE_SEM_OPEN), which I thought was absent from some platforms. I believe there were problems on some FreeBSD versions; is that no longer the case?

Also, is it better to implement this kind of processing using multiprocessing locks, or to use the message-passing approach which I've used here? Since processes by default don't share memory etc., I would assume that message passing would be more performant; I don't want to give people reasons to complain about logging performance ;-)

@Mike: Agreed about my example being more heavyweight, but the approach supports not just RotatingFileHandler but also TimedRotatingFileHandler, FileHandler and any user-defined subclasses thereof, optionally using fileConfig or dictConfig to configure. To my mind that flexibility justifies the slightly heavier approach.

Of course in the demonstration I've used a listener process, but that's just happenstance, as you've surmised. I know it's perfectly valid to use a listener thread as you did in your post. I may expand the demonstration to illustrate this; but I think we're into the realm of things which the application developer, rather than the library developer, is better placed to decide. For example, another alternative would be for one of the existing processes to do the listening, but exactly which one might depend on application specifics. It doesn't seem good policy to make these kinds of decisions in the logging package itself.

Re. record.args, I agree with your analysis, but I can't use that exact implementation in the stdlib, because logging supports arbitrary objects as messages, not just strings. I will address this in the QueueHandler implementation which I check into SVN.

Guys, thanks for the feedback - much appreciated.

mike bayer, 2010-09-07:
Also note a recent fix in my approach: I don't only blow away exc_info from the record, I also blow away record.args, by applying record.msg = record.msg % record.args; record.args = None. This is so that unpickleable things don't get sent to the multiprocessing queue and cause hard-to-track errors; it also reduces the load on pickle as well as the message size sent over the pipe.

mike bayer, 2010-09-07:
Hey Vinay - nice, but it seems like I should integrate QueueHandler with my recipe at http://stackoverflow.com/questions/641420/how-should-i-log-while-using-multiprocessing-in-python/894284#894284 , as it is less heavyweight than the example above. Presented as a fully functional handler of its own, it's configurable via fileConfig() alone, and it also doesn't need to spawn a child process to handle events, instead just using a daemon thread - though that might just be an artifact of how you are demonstrating QueueHandler here.

Jesse, 2010-09-07:
"There's no equivalent cross-platform synchronisation for processes in the stdlib" - hey Vinay, we do have cross-platform locks in the stdlib; they're included as part of the multiprocessing package.
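Mike's record.args fix amounts to only a few lines. Here is a hedged sketch (the helper name flatten_record is ours, not from his recipe); it uses getMessage() rather than a bare "%" so that it also copes with Vinay's caveat that logging messages may be arbitrary objects, not just strings:

```python
import logging
import pickle

def flatten_record(record):
    # Hypothetical helper sketching Mike's fix: merge the arguments
    # into the message and drop args/exc_info, so unpicklable objects
    # never reach the multiprocessing queue. getMessage() rather than
    # a plain "%" also handles non-string message objects.
    record.msg = record.getMessage()
    record.args = None
    record.exc_info = None
    return record

class Unpicklable(object):
    def __reduce__(self):
        raise TypeError("deliberately not picklable")
    def __str__(self):
        return "ok"

rec = logging.LogRecord("worker", logging.INFO, "worker.py", 1,
                        "value=%s", (Unpicklable(),), None)
rec = flatten_record(rec)
data = pickle.dumps(rec)                # succeeds now that args is gone
print(pickle.loads(data).getMessage())  # "value=ok"
```

Pickling the original record would fail on the Unpicklable argument; after flattening, only plain strings cross the pipe, which is also what reduces the pickle load and the message size Mike mentions.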
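The queue-based message-passing approach the thread debates can be sketched in miniature. This is our own simplified illustration, not Vinay's recipe or Mike's: workers attach a handler that flattens records and puts them on a multiprocessing queue, and a single listener (here a daemon thread, as in Mike's recipe) dispatches them to the real handler, so only one place ever touches the log file. The QueueingHandler name is ours; modern Pythons ship logging.handlers.QueueHandler and QueueListener for exactly this.

```python
import io
import logging
import multiprocessing
import threading

def listener(queue, handler):
    # Runs in one listener thread (or process): drain LogRecords from
    # the shared queue and dispatch them to the real handler, so only
    # a single consumer ever writes to the (rotating) log file.
    while True:
        record = queue.get()
        if record is None:          # sentinel: shut down
            break
        handler.handle(record)

class QueueingHandler(logging.Handler):
    # Minimal sketch of a queue-feeding handler: workers attach it,
    # and every record is flattened and enqueued instead of being
    # written directly.
    def __init__(self, queue):
        logging.Handler.__init__(self)
        self.queue = queue
    def emit(self, record):
        record.msg = record.getMessage()   # merge args before pickling
        record.args = None
        record.exc_info = None
        self.queue.put(record)

q = multiprocessing.Queue()
stream = io.StringIO()                     # stands in for the log file
real = logging.StreamHandler(stream)
real.setFormatter(logging.Formatter("%(process)d %(message)s"))
t = threading.Thread(target=listener, args=(q, real), daemon=True)
t.start()

log = logging.getLogger("worker")
log.setLevel(logging.INFO)
log.addHandler(QueueingHandler(q))
log.propagate = False                      # don't also log directly
log.info("hello from a worker")

q.put(None)                                # stop the listener
t.join()
print(stream.getvalue())                   # e.g. "12345 hello from a worker"
```

Note the listener hands records straight to a concrete handler rather than back through the logger; routing them through the same logger that carries the QueueingHandler would re-enqueue them forever.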