Python multiprocessing error when using more than one process

Posted 2024-05-15 08:28:25


I am trying (and failing) to use multiprocessing to parallelize a loop. Here is my Python code:

from MMTK import *
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator
from MMTK.Proteins import Protein, PeptideChain
import numpy as np

filename = 'traj_prot_nojump.nc'

trajectory = Trajectory(None, filename)
universe = trajectory.universe
proteins = universe.objectList(Protein)
chain = proteins[0][0]

def calpha_2dmap_mult(t = range(0,len(trajectory))):
    dist = []
    global trajectory
    universe = trajectory.universe
    proteins = universe.objectList(Protein)
    chain = proteins[0][0]
    traj = trajectory[t]
    dt = 1000 # calculate distance every 1000 steps
    for n, step in enumerate(traj):
        if n % dt == 0:
            universe.setConfiguration(step['configuration'])
            for i in np.arange(len(chain)-1):
                for j in np.arange(len(chain)-1):
                    dist.append(universe.distance(chain[i].peptide.C_alpha,
                                                  chain[j].peptide.C_alpha))
    return(dist)

dist1 = calpha_2dmap_mult(range(1000,2000))
dist2 = calpha_2dmap_mult(range(2000,3000))

# Multiprocessing
from multiprocessing import Pool, cpu_count

pool = Pool(processes=2)
dist_pool = [pool.apply(calpha_2dmap_mult, args=(t,)) for t in [range(1000,2000), range(2000,3000)]]

print(dist_pool[0]==dist1)
print(dist_pool[1]==dist2)

With Pool(processes=1) the code works as expected, but as soon as I request more than one process it crashes with the following error:

[error traceback not preserved in this copy of the question]

Any suggestions would be greatly appreciated ;-)


3 Answers

Here is a new script that allows multiple processes (but shows no performance improvement):

from MMTK import *
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator
from MMTK.Proteins import Protein, PeptideChain
import numpy as np
import time

filename = 'traj_prot_nojump.nc'


trajectory = Trajectory(None, filename)
universe = trajectory.universe
proteins = universe.objectList(Protein)
chain = proteins[0][0]

def calpha_2dmap_mult(trajectory = trajectory, t = range(0,len(trajectory))):
    dist = []
    universe = trajectory.universe
    proteins = universe.objectList(Protein)
    chain = proteins[0][0]
    traj = trajectory[t]
    dt = 1000 # calculate distance every 1000 steps
    for n, step in enumerate(traj):
        if n % dt == 0:
            universe.setConfiguration(step['configuration'])
            for i in np.arange(len(chain)-1):
                for j in np.arange(len(chain)-1):
                    dist.append(universe.distance(chain[i].peptide.C_alpha,
                                                  chain[j].peptide.C_alpha))
    return(dist)

c0 = time.time()
dist1 = calpha_2dmap_mult(trajectory, range(0,11001))
#dist1 = calpha_2dmap_mult(trajectory, range(0,11001))
c1 = time.time() - c0
print(c1) 


# Multiprocessing
from multiprocessing import Pool, cpu_count

pool = Pool(processes=4)
c0 = time.time()
dist_pool = [pool.apply(calpha_2dmap_mult, args=(trajectory, t,)) for t in
             [range(0,2001), range(3000,5001), range(6000,8001),
              range(9000,11001)]]
c1 = time.time() - c0
print(c1)


dist1 = np.array(dist1)
dist_pool = np.array(dist_pool)
dist_pool = dist_pool.flatten()
print(np.all((dist_pool == dist1)))

The time spent calculating the distances is "the same" without multiprocessing (70.1 s) as with it (70.2 s)! I was perhaps not expecting a factor-of-4 improvement, but I was at least expecting some speedup!

I suspect it is because of this:

trajectory = Trajectory(None, filename)

You open the file only once, at the very beginning. You probably just need to pass the filename into the multiprocessing target function and open the file there instead.

If you run this code on OS X or any other Unix-like system, multiprocessing uses forking to create the child processes.

When forking, file descriptors are shared with the parent process. As far as I know, the trajectory object holds a reference to a file descriptor.

To fix this, you should call

trajectory = Trajectory(None, filename)

inside each worker, to make sure every child process opens the file separately.
