Tensors in homology?

Posted 2024-06-17 15:17:41


I'd like to use sympy to build a cython function that can be used to evaluate a loss, its Jacobian, and its Hessian. The loss function (defined as a python function, since SO lacks LaTeX) looks like this:

import numpy as np

def loss(omega, tau, x, w):
    # omega: [j, 3, 3] a stack of rotations
    # tau  : [j, 3]    a stack of translations
    # x    : [i, j, 3] a matrix of 3d image points
    # w    : [i, 3]    a stack of corresponding 3d world points
    total_loss = 0.
    for a in range(x.shape[0]):
        for b in range(x.shape[1]):
            z = np.matmul(omega[b], w[a]) + tau[b]
            z /= np.linalg.norm(z)
            total_loss += np.dot(x[a, b], z)
    return total_loss
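As a reference point for later experiments, the same loss can be written without Python loops. This vectorized NumPy sketch (the name `loss_vectorized` is mine) computes exactly what the loops above compute:

```python
import numpy as np

def loss_vectorized(omega, tau, x, w):
    # z[b, a, :] = omega[b] @ w[a] + tau[b], for all cameras b and points a
    z = np.einsum('jab,ib->jia', omega, w) + tau[:, None, :]
    z /= np.linalg.norm(z, axis=2, keepdims=True)
    # sum over all (point, camera) pairs of dot(x[a, b], z[b, a])
    return np.einsum('ijc,jic->', x, z)
```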

This is equation (7) from this structure-from-motion paper.

When considering only a single point and transformation (i = j = 1), I can build the appropriate function in sympy as follows:

^{pr2}$
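The snippet itself did not survive extraction (only the `^{pr2}$` placeholder remains), but a minimal sympy sketch of the i = j = 1 case might look like the following. All names here are my own, not the asker's:

```python
import sympy as sp

# Scalar symbols for one rotation (row-major), translation, world and image point
omega = sp.Matrix(3, 3, sp.symbols('o0:9'))
tau = sp.Matrix(sp.symbols('t0:3'))
w = sp.Matrix(sp.symbols('w0:3'))
x = sp.Matrix(sp.symbols('x0:3'))

z = omega * w + tau
z = z / sp.sqrt(z.dot(z))          # normalize z
loss = x.dot(z)                    # single-term version of the loss

params = list(omega) + list(tau) + list(w)
grad = [sp.diff(loss, p) for p in params]   # gradient entries
hess = sp.hessian(loss, params)             # 15x15 Hessian
```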

But I'd like to be able to build a function that handles variable numbers of points i and transformations j. With sympy's IndexedBase objects I can't seem to get anywhere beyond the most basic examples. How can I use IndexedBase objects to build the appropriate expression?
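For what it's worth, the basic mechanics of IndexedBase with a symbolic summation look roughly like this sketch (my own, not a full solution — the awkward part, which the question is really about, is the normalization over one index while the others stay free):

```python
import sympy as sp
import numpy as np

omega = sp.IndexedBase('omega')   # [j, 3, 3]
w = sp.IndexedBase('w')           # [i, 3]
tau = sp.IndexedBase('tau')       # [j, 3]
a, b, c, d = sp.symbols('a b c d', cls=sp.Idx)

# z[b, a, c] = sum_d omega[b, c, d] * w[a, d] + tau[b, c]
z = sp.Sum(omega[b, c, d] * w[a, d], (d, 0, 2)) + tau[b, c]

# Expand the Sum, then compile to a function of concrete arrays and indices
f = sp.lambdify((omega, w, tau, a, b, c), z.doit())
```

`f` takes concrete numpy arrays plus integer indices; the normalization step does not fit this pattern cleanly, which is roughly where IndexedBase stops being convenient.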


1 Answer

I ended up giving up on sympy, since it was more trouble than it was worth. I'd suggest using tensorflow for the differentiation instead. Below is a script that builds the Hessian of equation (7):

# Note: this is TF1 graph-mode code; under TF2 it needs tensorflow.compat.v1
import tensorflow as tf
from tensorflow.python.ops.parallel_for.gradients import jacobian


def flatten_and_order_variables(omega, tau, w):
    flattened_variables = tf.concat(
        [tf.reshape(w, [-1]),
         tf.reshape(omega, [-1]),
         tf.reshape(tau, [-1])],
        axis=0)

    flattened_variables = tf.Variable(flattened_variables)
    return flattened_variables


def build_graph(cameras, points):
    graph = tf.Graph()
    with graph.as_default():

        # inputs
        x = tf.placeholder(tf.float32, [cameras, points, 3], 'x')
        omega = tf.placeholder(tf.float32, [cameras, 3, 3], 'omega')
        tau = tf.placeholder(tf.float32, [cameras, 3], 'tau')
        w = tf.placeholder(tf.float32, [points, 3], 'w')
        camera_params = tf.concat([omega, tau[:, :, None]], axis=2)
        flat_inputs = tf.concat([tf.reshape(camera_params, [-1]), tf.reshape(w, [-1])], axis=0)

        flat_variables = tf.Variable(tf.zeros([cameras*12 + 3*points], tf.float32))
        assign = flat_variables.assign(flat_inputs)

        with tf.control_dependencies([assign]):
            camera_params = tf.reshape(flat_variables[:cameras*12], [cameras, 3, 4])
            omega_var = camera_params[:, :, :3]
            tau_var = camera_params[:, :, -1]
            w_var = tf.reshape(flat_variables[cameras*12:], [points, 3])

            # define loss
            z = tf.einsum('jab,ib->jia', omega_var, w_var) + tau_var[:, None]
            z = z / tf.linalg.norm(z, axis=2, keepdims=True)
            loss = tf.reduce_sum(x*z)

            # get jacobi
            jac = jacobian(loss, flat_variables)
            hess = jacobian(jac, flat_variables)
            tf.identity(loss, 'loss')
            tf.identity(hess, 'hessian')
            tf.identity(jac, 'jacobian')
    return graph


def build_feed_dict(graph, omega, tau, w, x):
    omega_p = graph.get_tensor_by_name('omega:0')
    tau_p = graph.get_tensor_by_name('tau:0')
    w_p = graph.get_tensor_by_name('w:0')
    x_p = graph.get_tensor_by_name('x:0')
    return {
        omega_p: omega,
        tau_p: tau,
        w_p: w,
        x_p: x
    }


def get_outputs(graph):
    loss = graph.get_tensor_by_name('loss:0')
    jac = graph.get_tensor_by_name('jacobian:0')
    hessian = graph.get_tensor_by_name('hessian:0')
    return loss, hessian, jac


def get_hessian(graph, omega, tau, w, x):
    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        feed_dict = build_feed_dict(graph, omega, tau, w, x)
        loss, hess, jac = get_outputs(graph)
        net_out = sess.run([loss, hess, jac], feed_dict)
    return net_out
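Whichever route you take (sympy, tensorflow, or hand-derived), a cheap sanity check is central finite differences on the flat parameter vector. This helper (my own addition, not part of the answer's code) approximates the gradient of any scalar function; applying it to each component of an analytic gradient gives a Hessian check:

```python
import numpy as np

def finite_diff_grad(f, v, eps=1e-6):
    # Central-difference gradient of a scalar function f at flat vector v
    g = np.zeros_like(v, dtype=float)
    for k in range(v.size):
        e = np.zeros_like(g)
        e[k] = eps
        g[k] = (f(v + e) - f(v - e)) / (2 * eps)
    return g
```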
