Running an external optimizer on the GPU

Published 2024-04-25 08:56:21


I am considering using the SciPy optimizer interface ScipyOptimizerInterface in my program. An example use case is:

import tensorflow as tf
from tensorflow.contrib.opt import ScipyOptimizerInterface

vector = tf.Variable([7., 7.], name='vector')

# Make the vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    optimizer.minimize(session)

# The value of vector should now be [0., 0.].

Since ScipyOptimizerInterface is a subclass of ExternalOptimizerInterface, I would like to know where the data processing actually happens: on the GPU or on the CPU? Since the objective has to be built as a TensorFlow graph, I assume that at least the function evaluations and the gradients are computed on the GPU (if one is available), but what about the computation needed for the updates? How should I use these kinds of optimizers efficiently? Thanks in advance for your help!


1 Answer

Answered 2024-04-25 08:56:21

Based on the code on github: no, this is just a wrapper that ultimately calls into scipy, so the update steps run on the CPU and that cannot be changed.
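The wrapper's division of labor can be sketched outside TensorFlow: the graph supplies loss and gradient values (possibly computed on the GPU), and SciPy's CPU-side minimize consumes them to drive the update steps. Below is a minimal sketch of that pattern for the same vector-norm problem, with numpy standing in for the TensorFlow graph (the numpy stand-in is an illustrative assumption, not the wrapper's actual code):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the TensorFlow graph: loss = sum(x**2) and its gradient.
# In the real wrapper, these values would come from session.run(),
# possibly evaluated on the GPU; the optimizer step stays on the CPU.
def loss_and_grad(x):
    return np.sum(x ** 2), 2.0 * x

result = minimize(loss_and_grad, x0=np.array([7.0, 7.0]),
                  jac=True, method='L-BFGS-B',
                  options={'maxiter': 100})
print(result.x)  # close to [0., 0.]
```

Every iteration thus pays a round trip between the (possibly GPU-resident) graph evaluation and the CPU-side scipy update, which is the efficiency cost the question asks about.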

However, you can find a native implementation in tensorflow/probability; this is their example:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

minimum = np.array([1.0, 1.0])  # The center of the quadratic bowl.
scales = np.array([2.0, 3.0])  # The scales along the two axes.

# The objective function and the gradient.
def quadratic(x):
    value = tf.reduce_sum(scales * (x - minimum) ** 2)
    return value, tf.gradients(value, x)[0]

start = tf.constant([0.6, 0.8])  # Starting point for the search.
optim_results = tfp.optimizer.bfgs_minimize(
    quadratic, initial_position=start, tolerance=1e-8)

with tf.Session() as session:
    results = session.run(optim_results)
    # Check that the search converged.
    assert results.converged
    # Check that the argmin is close to the actual value.
    np.testing.assert_allclose(results.position, minimum)
    # Print the total number of function evaluations; should be 6.
    print("Function evaluations: %d" % results.num_objective_evaluations)
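For comparison with the CPU path, the same quadratic bowl can also be handed directly to SciPy's own BFGS, with numpy supplying the value and gradient. This sketch mirrors the variable names of the tfp example above but is not part of it:

```python
import numpy as np
from scipy.optimize import minimize

minimum = np.array([1.0, 1.0])  # The center of the quadratic bowl.
scales = np.array([2.0, 3.0])   # The scales along the two axes.

# Value and gradient of the same quadratic, computed in numpy on the CPU.
def quadratic(x):
    value = np.sum(scales * (x - minimum) ** 2)
    grad = 2.0 * scales * (x - minimum)
    return value, grad

res = minimize(quadratic, x0=np.array([0.6, 0.8]),
               jac=True, method='BFGS')
print(res.x)  # close to [1., 1.]
```

The tfp version expresses the whole solver as graph operations, so the update steps can also be placed on the GPU, which is exactly what the scipy wrapper cannot do.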
