Projecting a 2D point into 3D space using a depth value. Maya Python API

2 votes
1 answer
1272 views
Asked 2025-04-18 06:28

I'm trying to project a 3D point out from a 2D point, and I'd like to supply a depth value to project it to. Can anyone show me an example of how to do this in Maya?

Thanks!

This is the best I've managed so far:

import maya.OpenMaya as OpenMaya

def screenToWorld(point2D=None,
                  depth=None,
                  viewMatrix=None,
                  projectionMatrix=None,
                  width=None,
                  height=None):
    '''
    @param point2D - 2D screen-space point (pixels).
    @param depth - Float depth value to project to (currently unused).
    @param viewMatrix - MMatrix of modelViewMatrix (world inverse of the camera).
    @param projectionMatrix - MMatrix of the camera's projection matrix.
    @param width - Resolution width of the camera.
    @param height - Resolution height of the camera.
    Returns a worldspace MPoint.
    '''
    # Map the pixel coordinates into normalized device coordinates ([-1, 1]).
    point3D = OpenMaya.MPoint()
    point3D.x = (2.0 * (point2D[0] / width)) - 1.0
    point3D.y = (2.0 * (point2D[1] / height)) - 1.0

    viewProjectionMatrix = (viewMatrix * projectionMatrix)

    # Take z and w from the last row of the view-projection matrix,
    # scale x and y by w, then multiply by the inverse to get back to world space.
    point3D.z = viewProjectionMatrix(3, 2)
    point3D.w = viewProjectionMatrix(3, 3)
    point3D.x = point3D.x * point3D.w
    point3D.y = point3D.y * point3D.w
    point3D = point3D * viewProjectionMatrix.inverse()

    return point3D

As you can see, it doesn't actually use the depth value. I'm not sure how to work it in through the projection and view matrices.
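
For context, here is a minimal sketch of how such a function could be driven, pulling the matrices and port size from the active viewport via the API 1.0 M3dView class. The pixel coordinates and depth value below are placeholder assumptions, not values from the original post:

import maya.OpenMaya as OpenMaya
import maya.OpenMayaUI as OpenMayaUI

# Grab the viewport that currently has focus and read its matrices.
view = OpenMayaUI.M3dView.active3dView()

viewMatrix = OpenMaya.MMatrix()
view.modelViewMatrix(viewMatrix)          # world inverse of the view's camera

projectionMatrix = OpenMaya.MMatrix()
view.projectionMatrix(projectionMatrix)   # camera projection matrix

# Placeholder pixel coordinates and depth.
point = screenToWorld(point2D=(320.0, 240.0),
                      depth=10.0,         # currently ignored by the function
                      viewMatrix=viewMatrix,
                      projectionMatrix=projectionMatrix,
                      width=float(view.portWidth()),
                      height=float(view.portHeight()))
print(point.x, point.y, point.z)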

Any help is greatly appreciated!
-Chris

1 Answer

0

I think I found a solution:

import maya.OpenMaya as OpenMaya

def projectPoint(worldPnt, camPnt, depth):
    '''
    @param worldPnt - MPoint of point to project. (WorldSpace)
    @param camPnt - MPoint of camera position. (WorldSpace)
    @param depth - Float value of distance.
    Returns list of 3 floats.
    '''
    # Get the vector from the camera to the point and normalize it.
    mVec_pointVec = worldPnt - camPnt
    mVec_pointVec.normalize()

    # Scale it by the depth, then offset it back by the camera position.
    mVec_pointVec *= depth
    mVec_pointVec += OpenMaya.MVector(camPnt.x, camPnt.y, camPnt.z)

    return [mVec_pointVec.x, mVec_pointVec.y, mVec_pointVec.z]

It turns out I didn't need to convert the point to 2D and back to 3D at all. I just needed to extend the vector out from the camera by the depth.
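
As a quick usage sketch, the camera's world-space position can be pulled with MFnCamera and fed straight into projectPoint. The camera shape name 'perspShape' and the sample point and depth below are assumptions for illustration:

import maya.OpenMaya as OpenMaya

# Look up the camera shape and attach a camera function set to it.
sel = OpenMaya.MSelectionList()
sel.add('perspShape')                      # assumed camera shape name
camDag = OpenMaya.MDagPath()
sel.getDagPath(0, camDag)
fnCam = OpenMaya.MFnCamera(camDag)

# World-space eye point of the camera.
camPnt = fnCam.eyePoint(OpenMaya.MSpace.kWorld)

# Push an arbitrary world-space point out to 10 units from the camera.
worldPnt = OpenMaya.MPoint(1.0, 2.0, 3.0)
print(projectPoint(worldPnt, camPnt, 10.0))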
