Python multiprocessing: a synchronized file-like object

7 votes
2 answers
6142 views
Asked 2025-04-16 16:35

I'm trying to make a file-like object that I can assign to sys.stdout/sys.stderr during testing, so that the output is predictable. It doesn't need to be fast, just reliable. What I have so far almost works, but I need some help getting rid of the last few edge-case bugs.

Here is my current implementation:

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

from os import getpid
class MultiProcessFile(object):
    """
    helper for testing multiprocessing

    multiprocessing poses a problem for doctests, since the strategy
    of replacing sys.stdout/stderr with file-like objects then
    inspecting the results won't work: the child processes will
    write to the objects, but the data will not be reflected
    in the parent doctest-ing process.

    The solution is to create file-like objects which will interact with
    multiprocessing in a more desirable way.

    All processes can write to this object, but only the creator can read.
    This allows the testing system to see a unified picture of I/O.
    """
    def __init__(self):
        # per advice at:
        #    http://docs.python.org/library/multiprocessing.html#all-platforms
        from multiprocessing import Queue
        self.__master = getpid()
        self.__queue = Queue()
        self.__buffer = StringIO()
        self.softspace = 0

    def buffer(self):
        if getpid() != self.__master:
            return

        from Queue import Empty
        from collections import defaultdict
        cache = defaultdict(str)
        while True:
            try:
                pid, data = self.__queue.get_nowait()
            except Empty:
                break
            cache[pid] += data
        for pid in sorted(cache):
            self.__buffer.write('%s wrote: %r\n' % (pid, cache[pid]))

    def write(self, data):
        self.__queue.put((getpid(), data))

    def __iter__(self):
        "getattr doesn't work for iter()"
        self.buffer()
        return self.__buffer

    def getvalue(self):
        self.buffer()
        return self.__buffer.getvalue()

    def flush(self):
        "meaningless"
        pass

... and a quick test script:

#!/usr/bin/python2.6

from multiprocessing import Process
from mpfile import MultiProcessFile

def printer(msg):
    print msg

processes = []
for i in range(20):
    processes.append( Process(target=printer, args=(i,), name='printer') )

print 'START'
import sys
buffer = MultiProcessFile()
sys.stdout = buffer

for p in processes:
    p.start()
for p in processes:
    p.join()

for i in range(20):
    print i,
print

sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
print 
print 'DONE'
print
buffer.buffer()
print buffer.getvalue()

This works perfectly 95% of the time, but it has three edge-case problems. I have to run the test script in a fast loop to reproduce them:

  1. 3% of the time, the parent process's output is not fully reflected. I believe the data gets consumed before the queue-flushing thread can catch up, and I haven't thought of a way to wait for that thread that doesn't risk deadlock.
  2. 0.5% of the time, there's a traceback from the multiprocess.Queue implementation.
  3. 0.01% of the time, the PIDs wrap around, so sorting by PID produces the wrong ordering.

In the worst case (odds: roughly one in seventy million), the output looks like this:

START

DONE

302 wrote: '19\n'
32731 wrote: '0 1 2 3 4 5 6 7 8 '
32732 wrote: '0\n'
32734 wrote: '1\n'
32735 wrote: '2\n'
32736 wrote: '3\n'
32737 wrote: '4\n'
32738 wrote: '5\n'
32743 wrote: '6\n'
32744 wrote: '7\n'
32745 wrote: '8\n'
32749 wrote: '9\n'
32751 wrote: '10\n'
32752 wrote: '11\n'
32753 wrote: '12\n'
32754 wrote: '13\n'
32756 wrote: '14\n'
32757 wrote: '15\n'
32759 wrote: '16\n'
32760 wrote: '17\n'
32761 wrote: '18\n'

Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
  File "/usr/lib/python2.6/threading.py", line 484, in run
  File "/usr/lib/python2.6/multiprocessing/queues.py", line 233, in _feed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable

Under python2.7 the exception is slightly different:

Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
  File "/usr/lib/python2.7/threading.py", line 505, in run
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
<type 'exceptions.IOError'>: [Errno 32] Broken pipe

How can I fix these edge cases?

2 Answers

0

I've run into far fewer multiprocessing problems with Python 2.7 than with Python 2.6. That said, the way I got around the "Exception in thread QueueFeederThread" problem was to sleep briefly, perhaps for 0.01 s, in each process that uses a Queue. Admittedly, sleeping isn't ideal and may not even be reliable, but that duration seemed to work well enough in practice. You could also try 0.1 s.

9

The solution comes in two parts. I've since run the test program 200,000 times without any change in the output.

The first part was relatively easy: I use multiprocessing.current_process()._identity to sort the messages. It's not part of the documented API, but it is a unique, deterministic identifier of each process. That fixes problem #3, where PID wrap-around produces the wrong ordering.

The other part of the solution was to use multiprocessing.Manager().Queue() rather than multiprocessing.Queue directly. That fixes problem #2 above, because the manager lives in a separate process and so avoids some of the bad special cases of using a queue from the owning process. It takes care of problem #1 as well: with this queue, the queue is fully consumed, so the feeder thread dies naturally before Python starts shutting down and closes stdin.
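A sketch of the class with both fixes applied, translated to Python 3 syntax for illustration (io.StringIO replaces cStringIO, and queue.Empty replaces the old Queue module; `_identity` is undocumented, as noted above):

```python
import io
import queue
from collections import defaultdict
from multiprocessing import Manager, current_process

class MultiProcessFile(object):
    """File-like object that collects writes from all processes.

    Writes go to a managed queue, which lives in a separate manager
    process; only the creating process reads the data back.
    """
    def __init__(self):
        # _identity is () in the main process, (1,), (2,), ... in children:
        # unique and deterministic, unlike a PID, which can wrap around.
        self.__master = current_process()._identity
        self.__queue = Manager().Queue()
        self.__buffer = io.StringIO()

    def buffer(self):
        if current_process()._identity != self.__master:
            return
        cache = defaultdict(str)
        while True:
            try:
                identity, data = self.__queue.get_nowait()
            except queue.Empty:
                break
            cache[identity] += data
        # Sorting tuples of ints gives a stable, wrap-around-free order.
        for identity in sorted(cache):
            self.__buffer.write('%s wrote: %r\n' % (identity, cache[identity]))

    def write(self, data):
        self.__queue.put((current_process()._identity, data))

    def getvalue(self):
        self.buffer()
        return self.__buffer.getvalue()

    def flush(self):
        pass
```

Note that, like the original, the instance must reach the child processes by fork-time inheritance (io.StringIO is not picklable, so it cannot be sent to a spawned child).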
