You can obtain a depth image in OpenGL or Vulkan. This article assumes the depth buffer uses Vulkan's VK_FORMAT_D32_SFLOAT format.
In this case, each pixel of the depth image stores the z-value in NDC space, so you can reconstruct its position in world space using the inverse of the full transformation matrix, including the projection (perspective) matrix.
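As a minimal sketch of this unprojection (assuming numpy, a Vulkan-style depth range of [0, 1], and that you already have the combined view-projection matrix — none of which the original post fixes concretely):

```python
import numpy as np

def reconstruct_world_pos(u, v, z_ndc, inv_view_proj):
    """Unproject NDC coordinates back to a world-space position.

    u, v          : NDC x/y in [-1, 1]
    z_ndc         : depth in [0, 1] (Vulkan-style depth range)
    inv_view_proj : inverse of (projection @ view), as a 4x4 matrix
    """
    clip = np.array([u, v, z_ndc, 1.0])
    world = inv_view_proj @ clip
    return world[:3] / world[3]  # perspective divide undoes the projection
```

The key point is the final division by the w component, which reverses the perspective divide that produced the NDC coordinates in the first place.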
Then, how can you get this depth image from Blender?
First of all, to get depth values from Blender, add a File Output node in the compositor, set its format to OpenEXR, and connect it to the Depth (or Z) output of the Render Layers node.
However, the value stored in this image is not the depth (z) value in NDC space; it is the depth in view space, i.e. a value between near-Z and far-Z. So it is good practice to clamp it to that range.
To get the actual z-value in NDC space, apply your projection matrix (usually a perspective matrix).
You don't need to know the (x, y) values corresponding to this depth value; a quick look at the perspective matrix multiplication shows that the output z depends only on the view-space z.
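To make that last point concrete: the third and fourth rows of a standard perspective matrix involve only z and w, so the view-to-NDC depth mapping collapses to a one-variable formula. A sketch, assuming a Vulkan-style [0, 1] depth range and depth stored as positive distance along the camera axis (both assumptions, not stated in the original post):

```python
def view_z_to_ndc_depth(z_view, near, far):
    """Convert a positive view-space distance to Vulkan NDC depth in [0, 1].

    Derived from the z/w rows of a standard Vulkan perspective matrix.
    Note the result depends only on z_view, never on (x, y).
    """
    return (far / (far - near)) * (1.0 - near / z_view)
```

For example, `view_z_to_ndc_depth(near, near, far)` gives 0 and `view_z_to_ndc_depth(far, near, far)` gives 1, matching the ends of the depth range.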
In summary, follow the steps below.
- Export an OpenEXR depth image of view-space depth from Blender.
- Convert it to NDC-space depth using the projection matrix.

    : You can do this in Python with cv2, struct.(un)pack, torchvision, PIL, and so on.

- Use the (u, v) image coordinates together with the depth value to reconstruct the position in view space.
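The last step above can be sketched as follows, for a whole depth map at once. This assumes numpy, a symmetric camera frustum, and that the EXR stores positive distance along the camera z-axis (as Blender's Z pass approximately does) — the function name and signature are illustrative, not from any library:

```python
import numpy as np

def depth_to_view_positions(depth, fov_y_rad, aspect):
    """Back-project a view-space depth map of shape (H, W) into
    per-pixel view-space points of shape (H, W, 3).

    Assumes `depth` stores positive distance along the camera z-axis
    and a symmetric frustum with the given vertical FOV and aspect ratio.
    """
    h, w = depth.shape
    ty = np.tan(fov_y_rad / 2.0)  # half-height of the frustum at z = 1
    tx = ty * aspect              # half-width at z = 1
    # Pixel centers mapped to NDC x/y in [-1, 1]
    u = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    v = 1.0 - (np.arange(h) + 0.5) / h * 2.0  # flip rows so +y is up
    xx, yy = np.meshgrid(u, v)
    x = xx * tx * depth
    y = yy * ty * depth
    return np.stack([x, y, depth], axis=-1)
```

Loading the EXR itself is a separate concern; `cv2.imread(path, cv2.IMREAD_UNCHANGED)` is one option the post's own suggestions cover, though OpenCV's EXR support may need to be enabled depending on the build.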
