Python counter increment producing unexpected results

Posted 2024-05-23 13:19:08


In the code below, I expected print('q.count' , q.count) to print 2, because count is a variable that is initialized once with q = QueueFun() and then incremented in the read_queue method. Instead, print('q.count' , q.count) prints 0. What is the correct way to share a counter between multiple processes?

Full code:

from multiprocessing import Process, Queue, Pool, Lock

class QueueFun():

    def __init__(self):
        self.count = 0
        self.lock = Lock()

    def write_queue(self, work_tasks, max_size):
        for i in range(0, max_size):
            print("Writing to queue")
            work_tasks.put(1)

    def read_queue(self, work_tasks, max_size):
        while self.count != max_size:
            self.lock.acquire()
            self.count += 1
            self.lock.release()
            print('self.count' , self.count)
            print('')
            print('Reading from queue')
            work_tasks.get()

if __name__ == '__main__':
    q = QueueFun()
    max_size = 1
    work_tasks = Queue()

    write_processes = []
    for i in range(0,2):
        write_processes.append(Process(target=q.write_queue,
                                 args=(work_tasks,max_size)))
    for p in write_processes:
        p.start()

    read_processes = []
    for i in range(0, 2):
        read_processes.append(Process(target=q.read_queue,
                                 args=(work_tasks,max_size)))
    for p in read_processes:
        p.start()

    for p in read_processes:
        p.join()
    for p in write_processes:
        p.join()

    print('q.count' , q.count)

1 Answer
#1 · Posted 2024-05-23 13:19:08

Unlike threads, different processes have separate address spaces: they do not share memory with each other. Writing to a variable in one process does not change the (unshared) variable of the same name in another process.

In the original example, the count at the end is 0 because the main process never changed it (regardless of what the spawned processes did to their own copies).

It is usually better to communicate between processes with a Queue. If shared state is really needed, Value or Array can be used:

17.2.1.5. Sharing state between processes

As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.

However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so.

Shared memory Data can be stored in a shared memory map using Value or Array.
...
These shared objects will be process and thread-safe.

multiprocessing.Value

Operations like += which involve a read and write are not atomic.
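A small sketch of what that warning means in practice (function names are illustrative): unguarded `+=` on a shared Value can lose updates under contention, while wrapping the increment in the Value's built-in lock via get_lock() makes it reliable.

```python
from multiprocessing import Process, Value

def unsafe_add(v, n):
    for _ in range(n):
        v.value += 1  # read-modify-write: not atomic, updates can be lost

def safe_add(v, n):
    for _ in range(n):
        with v.get_lock():  # every synchronized Value carries a lock
            v.value += 1

def run(worker, n_procs=4, n_iters=10000):
    v = Value('i', 0)
    procs = [Process(target=worker, args=(v, n_iters)) for _ in range(n_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return v.value

if __name__ == '__main__':
    print(run(unsafe_add))  # often less than 40000 due to lost increments
    print(run(safe_add))    # always 40000
```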

A slightly modified version of the code from the question:

from multiprocessing import Process, Queue, Value

class QueueFun():
    def __init__(self):
        self.readCount = Value('i', 0)
        self.writeCount = Value('i', 0)
    
    def write_queue(self, work_tasks, MAX_SIZE):
        with self.writeCount.get_lock():
            if self.writeCount.value != MAX_SIZE:
                self.writeCount.value += 1
                work_tasks.put(1)
    
    def read_queue(self, work_tasks, MAX_SIZE):
        with self.readCount.get_lock():
            if self.readCount.value != MAX_SIZE:
                self.readCount.value += 1
                work_tasks.get()

if __name__ == '__main__':
    q = QueueFun()
    MAX_SIZE = 2
    work_tasks = Queue()
    
    write_processes = []
    for i in range(MAX_SIZE):
        write_processes.append(Process(target=q.write_queue,
                                args=(work_tasks,MAX_SIZE)))
    for p in write_processes: p.start()
    
    read_processes = []
    for i in range(MAX_SIZE):
        read_processes.append(Process(target=q.read_queue,
                                args=(work_tasks,MAX_SIZE)))
    for p in read_processes: p.start()
    
    for p in read_processes: p.join()
    for p in write_processes: p.join()
    
    print('q.writeCount.value' , q.writeCount.value)
    print('q.readCount.value' , q.readCount.value)

Note: printing to standard output from multiple processes
can produce interleaved (unsynchronized) output.
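One way to keep each line of output intact is to share a Lock and hold it while printing (a sketch, with illustrative names):

```python
from multiprocessing import Process, Lock

def report(lock, name):
    # Holding the lock while printing keeps each process's line intact;
    # without it, output from concurrent processes may interleave.
    with lock:
        print(f'process {name}: done')

if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=report, args=(lock, i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```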
