I would like to be able to create a new multiprocessing.Value or multiprocessing.Array after the process has already started. For example:
# coding: utf-8
import multiprocessing

shared = {
    'foo': multiprocessing.Value('i', 42),
}

def job(pipe):
    while True:
        shared_key = pipe.recv()
        print(shared[shared_key].value)

process_read_pipe, process_write_pipe = multiprocessing.Pipe(duplex=False)
process = multiprocessing.Process(
    target=job,
    args=(process_read_pipe, )
)
process.start()
process_write_pipe.send('foo')

shared['bar'] = multiprocessing.Value('i', 24)
process_write_pipe.send('bar')
Output:
42
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/bux/Projets/synergine2/p.py", line 12, in job
print(shared[shared_key].value)
KeyError: 'bar'
Process finished with exit code 0
The problem here is that the shared dict is copied into the process at start time. If I then add a key to the shared dict, the process does not see it. How can the already-started process be told that a new multiprocessing.Value('i', 24) exists? I cannot send it through the pipe, because:

Synchronized objects should only be shared between processes through inheritance
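The constraint quoted above can be observed directly: sending a Value through a Pipe would pickle it, and pickling a synchronized object outside of process spawning raises a RuntimeError with exactly that message. A minimal sketch demonstrating this (my own illustration, not from the question):

```python
import multiprocessing
import pickle

value = multiprocessing.Value('i', 24)

try:
    # Connection.send() pickles its argument, so this is what a
    # pipe.send(value) would attempt under the hood.
    pickle.dumps(value)
except RuntimeError as exc:
    print(exc)
```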
It looks like you are assuming that the shared variable itself is accessible from both processes. It is not: only shared['foo'] is shared, because it existed before the fork. What you need is a shared dict. Here is an example: Python multiprocessing: How do I share a dict among multiple processes?