<p>If you have a Mac or Linux machine (or the Windows Subsystem for Linux), you can add about 10 lines of code to do this in parallel with <code>ray</code>. If you install ray via the <a href="http://ray.readthedocs.io/en/latest/installation.html#trying-the-latest-version-of-ray" rel="nofollow noreferrer">latest wheels here</a>, then you can run your script with minimal modifications, shown below, to do a parallel/distributed grid search with HyperOpt. At a high level, it runs <code>fmin</code> with <code>tpe.suggest</code>, creating a Trials object internally and evaluating trials in parallel.</p>
<pre><code>from numpy import random
from pandas import DataFrame
from hyperopt import fmin, tpe, hp, Trials
def calc_result(x, reporter):  # add a reporter param here
    huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E'])
    total = 0
    # Assume that I MUST iterate
    for idx_and_row in huge_df.iterrows():
        idx = idx_and_row[0]
        row = idx_and_row[1]
        # Assume there is no way to optimize here
        curr_sum = row['A'] * x['adjustment_1'] + \
                   row['B'] * x['adjustment_2'] + \
                   row['C'] * x['adjustment_3'] + \
                   row['D'] * x['adjustment_4'] + \
                   row['E'] * x['adjustment_5']
        total += curr_sum
    # In real life I want the total as high as possible; for the minimizer
    # it would have to be negated, but Ray negates the reported reward
    # itself before feeding it into HyperOpt.
    reporter(timesteps_total=1, episode_reward_mean=total)
    return total
space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001),
         'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001),
         'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001),
         'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001),
         'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)}
import ray
import ray.tune as tune
from ray.tune.hpo_scheduler import HyperOptScheduler
ray.init()
tune.register_trainable("calc_result", calc_result)
tune.run_experiments({"experiment": {
    "run": "calc_result",
    "repeat": 20000,
    "config": {"space": space}}}, scheduler=HyperOptScheduler())
</code></pre>
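<p>For comparison, here is roughly what Ray is driving under the hood: a plain serial HyperOpt search with <code>fmin</code>, <code>tpe.suggest</code>, and a <code>Trials</code> object. This is a minimal sketch with a hypothetical stand-in objective (the real one iterates over a large DataFrame); since HyperOpt minimizes, the objective returns the negated total itself:</p>
<pre><code>from hyperopt import fmin, tpe, hp, Trials

# Hypothetical stand-in for calc_result: just sum the adjustments.
# HyperOpt minimizes, so return the negated total to maximize it.
def objective(x):
    total = sum(x[k] for k in x)
    return -total

space = {'adjustment_%d' % i: hp.quniform('adjustment_%d' % i, 0, 1, 0.001)
         for i in range(1, 6)}

trials = Trials()  # records every evaluated point and its loss
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)  # best adjustment values found in 50 evaluations
</code></pre>
<p>Ray parallelizes exactly this loop across workers; the serial version is handy for sanity-checking the search space before scaling out.</p>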