Logging madness with multiprocessing

I am trying to use Python's standard logging module in a multiprocessing scenario. I have read:

  1. Python MultiProcess, Logging, Various Classes
  2. Logging using multiprocessing

as well as several other articles about multiprocessing, logging, Python classes and so on. After reading all of these, I came up with the following code, which uses the QueueHandler from Python's logutils package, but it does not run correctly:

import sys
import logging
from logging import INFO
from multiprocessing import Process, Queue as mpQueue
import threading
import time

from logutils.queue import QueueListener, QueueHandler


class Worker(Process):

    def __init__(self, n, q):
        super(Worker, self).__init__()
        self.n = n
        self.queue = q

        self.qh = QueueHandler(self.queue)
        self.root = logging.getLogger()
        self.root.addHandler(self.qh)
        self.root.setLevel(logging.DEBUG)        
        self.logger = logging.getLogger("W%i"%self.n)


    def run(self):
        self.logger.info("Worker %i Starting"%self.n)

        for i in xrange(10):
            self.logger.log(INFO, "testing %i"%i)

        self.logger.log(INFO, "Completed %i"%self.n)


def listener_process(queue):
    while True:
        try:
            record = queue.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            import sys, traceback
            print >> sys.stderr, 'Whoops! Problem:'
            traceback.print_exc(file=sys.stderr)

if __name__ == "__main__":

    mpq = mpQueue(-1)

    root = logging.getLogger()
    h = logging.StreamHandler()    
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s     %(message)s')
    h.setFormatter(f)
    root.addHandler(h)

    l = logging.getLogger("Test")
    l.setLevel(logging.DEBUG)

    listener = Process(target=listener_process,
                       args=(mpq,))
    listener.start()
    workers=[]
    for i in xrange(1):
        worker = Worker(i, mpq)
        worker.daemon = True
        worker.start()
        workers.append(worker)

    for worker in workers:
        worker.join()

    mpq.put_nowait(None)
    listener.join()

    for i in xrange(10):
        l.info("testing %i"%i)

    print "Finish"

When the code is executed, the output somehow repeats lines, like this:

2013-12-02 16:44:46,002 Worker-2   W0 INFO         Worker 0 Starting
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 0
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 1
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 2
2013-12-02 16:44:46,002 Worker-2   W0 INFO         Worker 0 Starting
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 3
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 0
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 1
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 4
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 2
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 3
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 5
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 4
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 6
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 5
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 7
2013-12-02 16:44:46,003 Worker-2   W0 INFO         testing 6
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 8
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 7
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 9
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 8
2013-12-02 16:44:46,004 Worker-2   W0 INFO         Completed 0
2013-12-02 16:44:46,004 Worker-2   W0 INFO         testing 9
2013-12-02 16:44:46,004 Worker-2   W0 INFO         Completed 0
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 0
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 1
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 2
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 3
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 4
2013-12-02 16:44:46,005 MainProcess Test INFO         testing 5
2013-12-02 16:44:46,006 MainProcess Test INFO         testing 6
2013-12-02 16:44:46,006 MainProcess Test INFO         testing 7
2013-12-02 16:44:46,006 MainProcess Test INFO         testing 8
2013-12-02 16:44:46,006 MainProcess Test INFO         testing 9
Finish

In other questions, adding a handler multiple times has been suggested as the cause, but, as you can see, I only add the StreamHandler once, in the main method. I have already tested embedding the main method in a class, with the same results.

Edit: As suggested by @max (or what I believe he meant), I changed the code of the worker class to:

class Worker(Process):

    root = logging.getLogger()
    qh = None

    def __init__(self, n, q):
        super(Worker, self).__init__()
        self.n = n
        self.queue = q

        if not self.qh:
            Worker.qh = QueueHandler(self.queue)            
            Worker.root.addHandler(self.qh)
            Worker.root.setLevel(logging.DEBUG)

        self.logger = logging.getLogger("W%i"%self.n)

        print self.root.handlers

    def run(self):
        self.logger.info("Worker %i Starting"%self.n)

        for i in xrange(10):
            self.logger.log(INFO, "testing %i"%i)

        self.logger.log(INFO, "Completed %i"%self.n)

The result is the same: the queue handler is no longer added again and again, but there are still duplicated log entries, even with a single worker.
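
One way to see where the second copy of each record comes from is to print the root logger's handlers from inside run(), which executes in the child process, rather than from __init__, which executes in the parent before the fork. A minimal diagnostic sketch (DiagnosticWorker is an illustrative name), assuming a platform that forks worker processes, such as Linux:

import logging
from multiprocessing import Process
from logutils.queue import QueueHandler

class DiagnosticWorker(Process):
    # Illustrative variant of Worker that only reports which handlers
    # each process actually sees.

    def __init__(self, n, q):
        super(DiagnosticWorker, self).__init__()
        self.n = n
        self.queue = q
        logging.getLogger().addHandler(QueueHandler(self.queue))
        # Runs in the parent: this list contains the StreamHandler added
        # in the main module plus the QueueHandler just added.
        print "parent sees:", logging.getLogger().handlers

    def run(self):
        # Runs in the forked child: the inherited StreamHandler shows up
        # here too, so each record is written once directly to the stream
        # and once more by the listener process that consumes the queue.
        print "child sees:", logging.getLogger().handlers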

Edit 2: I changed the code a little. I changed the listener process and now use a QueueListener (which was also my intention from the start), and moved the main code into a class.

import sys

import logging
from logging import INFO
from multiprocessing import Process, Queue as mpQueue
import threading
import time

from logutils.queue import QueueListener, QueueHandler

root = logging.getLogger()
added_qh = False

class Worker(Process):

    def __init__(self, logconf, n, qh):
        super(Worker, self).__init__()
        self.n = n
        self.logconf = logconf

#        global root
        global added_qh

        if not added_qh:
            added_qh = True
            root.addHandler(qh)
            root.setLevel(logging.DEBUG)            

        self.logger = logging.getLogger("W%i"%self.n)

        #print root.handlers

    def run(self):
        self.logger.info("Worker %i Starting"%self.n)

        for i in xrange(10):
            self.logger.log(INFO, "testing %i"%i)

        self.logger.log(INFO, "Completed %i"%self.n)


class Main(object):

    def __init__(self):
        pass

    def start(self):

        mpq = mpQueue(-1)
        qh = QueueHandler(mpq)

        h = logging.StreamHandler()

        ql = QueueListener(mpq, h)

        #h.setFormatter(f)
        root.addHandler(qh)

        l = logging.getLogger("Test")
        l.setLevel(logging.DEBUG)

        workers=[]

        for i in xrange(15):
            worker = Worker(None, i, qh)  # placeholder for logconf, which is unused here
            worker.daemon = True
            worker.start()
            workers.append(worker)

        for worker in workers:
            print "joining worker: {}".format(worker)
            worker.join()

        mpq.put_nowait(None)

        ql.start()

        # listener.join()

        for i in xrange(10):
            l.info("testing %i"%i)

if __name__ == "__main__":


    x = Main()
    x.start()

    time.sleep(10)

    print "Finish"

Now it mostly works, until I reach a certain number of workers (~15): for some reason the Main class then blocks in the workers' join, and the remaining workers do nothing.
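
The blocking on join is consistent with a documented multiprocessing pitfall: items put on a Queue are flushed to the underlying pipe by a background feeder thread, and a process that has put items on the queue cannot be joined until those items are consumed, so joining the workers before anything drains the queue can deadlock once enough records pile up. Starting the QueueListener before the workers, and stopping it after the joins, avoids this. A sketch of start() reordered that way, with the rest of the class unchanged:

    def start(self):
        mpq = mpQueue(-1)
        qh = QueueHandler(mpq)

        h = logging.StreamHandler()
        ql = QueueListener(mpq, h)
        ql.start()                    # consume records BEFORE workers write

        root.addHandler(qh)

        workers = []
        for i in xrange(15):
            worker = Worker(None, i, qh)
            worker.daemon = True
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()             # safe now: the listener keeps draining

        ql.stop()                     # flushes remaining records and stops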


3 Answers

I'm getting to the party late, so you probably don't need the answer anymore. The problem comes from the fact that you already set up a handler in the main process, and in your worker you are adding another one. This means that in your worker process there are actually two handlers managing your data: one pushing the log records to the queue, and one writing them to the stream.

Simply adding the line self.root.handlers = [] to your code fixes the problem. Starting from the original code, the worker's __init__ method then looks like this:

def __init__(self, n, q):
    super(Worker, self).__init__()
    self.n = n
    self.queue = q

    self.qh = QueueHandler(self.queue)
    self.root = logging.getLogger()
    self.root.handlers = []
    self.root.addHandler(self.qh)
    self.root.setLevel(logging.DEBUG)
    self.logger = logging.getLogger("W%i"%self.n)

And now the output looks like this:

python workers.py 
2016-05-12 10:07:02,971 Worker-2   W0 INFO         Worker 0 Starting
2016-05-12 10:07:02,972 Worker-2   W0 INFO         testing 0
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 1
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 2
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 3
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 4
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 5
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 6
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 7
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 8
2016-05-12 10:07:02,973 Worker-2   W0 INFO         testing 9
2016-05-12 10:07:02,973 Worker-2   W0 INFO         Completed 0
Finish
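
An alternative that leaves the parent's handler list untouched is to do the whole queue-handler setup inside run(), which executes only in the child process; clearing the inherited handlers there keeps the console handler working in the main process while the child writes only to the queue. A sketch of that variant, keeping the original Worker interface:

import logging
from logging import INFO
from multiprocessing import Process
from logutils.queue import QueueHandler

class Worker(Process):

    def __init__(self, n, q):
        super(Worker, self).__init__()
        self.n = n
        self.queue = q

    def run(self):
        # Runs in the child only: replace the handlers inherited from the
        # parent with a single QueueHandler, so each record reaches the
        # listener exactly once and is never written to the stream twice.
        root = logging.getLogger()
        root.handlers = []
        root.addHandler(QueueHandler(self.queue))
        root.setLevel(logging.DEBUG)
        self.logger = logging.getLogger("W%i" % self.n)

        self.logger.info("Worker %i Starting" % self.n)
        for i in xrange(10):
            self.logger.log(INFO, "testing %i" % i)
        self.logger.log(INFO, "Completed %i" % self.n)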

I came up with a dead-simple workaround using monkeypatching. It is probably not robust, and I am no expert on the logging module, but it seemed like the best solution for my situation. After trying some code changes (to allow passing in an existing logger from multiprocessing.get_logger()), I didn't like how much the code had to change, and came up with a simple, easy-to-understand hack/workaround instead:

(Working example, complete with a multiprocessing Pool)

import logging
import multiprocessing

class FakeLogger(object):
    def __init__(self, q):
        self.q = q
    def info(self, item):
        self.q.put('INFO - {}'.format(item))
    def debug(self, item):
        self.q.put('DEBUG - {}'.format(item))
    def critical(self, item):
        self.q.put('CRITICAL - {}'.format(item))
    def warning(self, item):
        self.q.put('WARNING - {}'.format(item))

def some_other_func_that_gets_logger_and_logs(num):
    # notice the name gets discarded
    # of course you can easily add this to your FakeLogger class
    local_logger = logging.getLogger('local')
    local_logger.info('Hey I am logging this: {} and working on it to make this {}!'.format(num, num*2))
    local_logger.debug('hmm, something may need debugging here')
    return num*2

def func_to_parallelize(data_chunk):
    # unpack our args
    the_num, logger_q = data_chunk
    # since we're now in a new process, let's monkeypatch the logging module
    logging.getLogger = lambda name=None: FakeLogger(logger_q)
    # now do the actual work that happens to log stuff too
    new_num = some_other_func_that_gets_logger_and_logs(the_num)
    return (the_num, new_num)

if __name__ == '__main__':
    multiprocessing.freeze_support()
    m = multiprocessing.Manager()
    logger_q = m.Queue()
    # we have to pass our data to be parallel-processed
    # we also need to pass the Queue object so we can retrieve the logs
    parallelable_data = [(1, logger_q), (2, logger_q)]
    # set up a pool of processes so we can take advantage of multiple CPU cores
    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(processes=pool_size, maxtasksperchild=4)
    worker_output = pool.map(func_to_parallelize, parallelable_data)
    pool.close() # no more tasks
    pool.join()  # wrap up current tasks
    # get the contents of our FakeLogger object
    while not logger_q.empty():
        print logger_q.get()
    print 'worker output contained: {}'.format(worker_output)

Of course this probably doesn't cover the whole range of uses of logging, but I think the concept here is simple enough to get things working quickly and relatively painlessly. It should also be easy to modify (for example, the lambda discards the name that may be passed to getLogger).
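
For instance, keeping the name only takes a small change; a sketch of such a variant (NamedFakeLogger is an illustrative name, not part of the original):

class NamedFakeLogger(object):
    # Illustrative variant of FakeLogger that keeps the name passed to
    # logging.getLogger instead of discarding it.
    def __init__(self, q, name=None):
        self.q = q
        self.name = name or 'root'
    def info(self, item):
        self.q.put('INFO - {} - {}'.format(self.name, item))
    def debug(self, item):
        self.q.put('DEBUG - {} - {}'.format(self.name, item))

# and the monkeypatch forwards the name instead of dropping it:
# logging.getLogger = lambda name=None: NamedFakeLogger(logger_q, name)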

All your Workers share the same root logger object (obtained in Worker.__init__; calls to getLogger always return the same logger). However, every time you create a Worker, you add a handler (a QueueHandler) to that logger.

So if you create 10 Workers, you will have 10 (identical) handlers on your root logger, which means the output gets repeated 10 times.

Instead, you should make the logger a module attribute rather than an instance attribute, and configure it once at module level (not at class level).

(Really, loggers should be configured once per program.)
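
A minimal sketch of that layout, assuming forked worker processes that inherit the module-level setup (the names here are illustrative):

import logging
from multiprocessing import Process, Queue as mpQueue
from logutils.queue import QueueHandler

# Configured exactly once, at import time; every Worker created
# afterwards shares this single QueueHandler instead of adding its own.
log_queue = mpQueue(-1)
root = logging.getLogger()
root.addHandler(QueueHandler(log_queue))
root.setLevel(logging.DEBUG)

class Worker(Process):
    def __init__(self, n):
        super(Worker, self).__init__()
        self.n = n

    def run(self):
        # No handler manipulation anywhere in the class: just fetch a
        # named logger, which delegates to the root logger's one handler.
        logger = logging.getLogger("W%i" % self.n)
        logger.info("Worker %i Starting" % self.n)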
