My main goal is to run an external python script (the client script) via subprocess from another python script (the caller script). The caller script's console shows all output from the client script except the tqdm output, so this is not a general problem of displaying output through subprocess, but a specific problem of how subprocess interacts with tqdm.
My second goal is to understand what is going on :). So a well-thought-out explanation would be much appreciated.
The client script (train.py) contains several tqdm calls. So far I have not seen much difference in output between the various tqdm parameter configurations, so let's use the simplest one. In train.py:
...
import sys
from time import sleep
from tqdm import tqdm

with tqdm(total=10, ncols=80,
          file=sys.stdout, position=0, leave=True,
          desc='f5b: pbar.set_postfix') as pbar:
    for i in range(10):
        pbar.update(1)
        postfix = {'loss': '{0:.4f}'.format(1 + i)}
        pbar.set_postfix(**postfix)
        sleep(0.1)
The caller script experiment.py executes the function execute_experiment, which calls train.py via the argument command_list:
import subprocess
import sys
import time

def execute_experiment(command_list):
    tic = time.time()
    try:
        process = subprocess.Popen(
            command_list, shell=False,
            encoding='utf-8',
            bufsize=0,
            stdin=subprocess.DEVNULL,
            universal_newlines=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        # Poll process for new output until finished
        # Source: https://stackoverflow.com/q/37401654/7769076
        while process.poll() is None:
            nextline = process.stdout.readline()
            sys.stdout.write(nextline)
            sys.stdout.flush()
    except subprocess.CalledProcessError as err:
        print("CalledProcessError: {0}".format(err))
        sys.exit(1)
    except OSError as err:
        print("OS error: {0}".format(err))
        sys.exit(1)
    except:
        print("Unexpected error:", sys.exc_info()[0])
        raise
    if process.returncode == 0:
        toc = time.time()
        time1 = str(round(toc - tic))
        return time1
    else:
        return 1
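One thing worth noting about the loop above: it pipes stderr but never reads it, and tqdm writes to stderr by default unless a bar is given file=sys.stdout. A minimal sketch of merging stderr into stdout so a single reading loop sees the bar as well (the child command below is a stand-in for the real train.py call, not the actual script):

```python
import subprocess
import sys

# Sketch, assuming the relevant tqdm bars write to stderr (tqdm's
# default stream): merging stderr into stdout lets one reading loop
# see the progress bar too. The child command is a stand-in emitting
# a tqdm-style carriage-return line on stderr and a print on stdout.
child = [sys.executable, '-c',
         "import sys; sys.stderr.write('progress\\r'); print('done')"]
process = subprocess.Popen(
    child,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,   # stderr merged into the stdout pipe
    universal_newlines=True,
)
captured = process.stdout.read()  # output of both streams arrives here
process.wait()
print(repr(captured))
```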
The script call of the above train.py code returns output, but the tqdm output stops after 0 seconds, like this:
f5b: pbar.set_postfix: 0%| | 0/10 [00:00<?, ?it/s]
f5b: pbar.set_postfix: 10%|█▊ | 1/10 [00:00<00:00, 22310.13it/s]
The script call of the original train.py code returns all output except the tqdm output:
Training default configuration
train.py data --use-cuda ...
device: cuda
...
Notes on the subprocess arguments:
- shell=False: since a python script calls a python script. With shell=True the client script is not called at all.
- bufsize=0: prevents buffering.
- The train.py call is prefixed with sys.executable to ensure that the python interpreter of the respective conda environment on the local machine is invoked.

Does tqdm.set_postfix prevent the progress bar output from being passed upstream? I know this happens when tqdm.set_description is called, e.g.:
pbar.set_description('processed: %d' % (1 + i))
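To illustrate the sys.executable note above, here is a hypothetical sketch of how command_list might be assembled (the script name and flag are placeholders, not the real arguments):

```python
import sys

# Hypothetical command list: sys.executable resolves to the python
# interpreter of the currently active (e.g. conda) environment, so the
# child runs under the same interpreter as the caller.
command_list = [sys.executable, 'train.py', '--use-cuda']
print(command_list)
```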
This code contains the following:
def train(self, dataloader, max_batches=500, verbose=True, **kwargs):
    with tqdm(total=max_batches, disable=not verbose, **kwargs) as pbar:
        for results in self.train_iter(dataloader, max_batches=max_batches):
            pbar.update(1)
            postfix = {'loss': '{0:.4f}'.format(results['mean_outer_loss'])}
            if 'accuracies_after' in results:
                postfix['accuracy'] = '{0:.4f}'.format(
                    np.mean(results['accuracies_after']))
            pbar.set_postfix(**postfix)
    # for logging
    return results
The call sequence is experiment.py > train.py > nested.py. train.py calls the train function in nested.py via:
for epoch in range(args.num_epochs):
    results_metatraining = metalearner.train(meta_train_dataloader,
                                             max_batches=args.num_batches,
                                             verbose=args.verbose,
                                             desc='Training',
                                             # leave=False
                                             leave=True
                                             )
Alternatives tried without success:
### try2
process = subprocess.Popen(command_list, shell=False, encoding='utf-8',
                           stdin=subprocess.DEVNULL, stdout=subprocess.PIPE)
while True:
    output = process.stdout.readline().strip()
    print('output: ' + output)
    if output == '' and process.poll() is not None:  # end of output
        break
    if output:  # print output in realtime
        print(output)
    else:
        output = process.communicate()
process.wait()
### try6
process = subprocess.Popen(command_list, shell=False,
                           stdout=subprocess.PIPE, universal_newlines=True)
for stdout_line in iter(process.stdout.readline, ""):
    yield stdout_line
process.stdout.close()
return_code = process.wait()
print('return_code: ' + str(return_code))
if return_code:
    raise subprocess.CalledProcessError(return_code, command_list)
### try7
with subprocess.Popen(command_list, stdout=subprocess.PIPE,
                      bufsize=1, universal_newlines=True) as p:
    while True:
        line = p.stdout.readline()
        if not line:
            break
        print(line)
    exit_code = p.poll()
I think readline() is waiting for '\n', and tqdm does not create new lines; maybe this is the reason (I have not tried it):
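An untested sketch along that line of thought: read the child's stdout one byte at a time and treat the carriage return ('\r', which tqdm uses to redraw the bar in place) as a line terminator, so the loop is not stalled by readline() waiting for '\n'. The child command here is a stand-in emitting tqdm-style output, not the real train.py:

```python
import subprocess
import sys

# Read the pipe in binary mode so '\r' arrives untranslated, one byte
# at a time, splitting on both '\r' and '\n'. Each in-place bar redraw
# then becomes its own entry instead of blocking readline().
child = [sys.executable, '-c',
         "import sys; sys.stdout.write('10%\\r20%\\rdone\\n')"]
process = subprocess.Popen(child, stdout=subprocess.PIPE)

buf = b''
lines = []
while True:
    ch = process.stdout.read(1)
    if ch == b'':              # EOF: the child closed its stdout
        break
    if ch in (b'\r', b'\n'):   # carriage return counts as end of line
        lines.append(buf.decode())
        buf = b''
    else:
        buf += ch
process.wait()
print(lines)
```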