Experimenting with HTC Vive camera tracking using an Unreal scene
Note that this documentation only presents a helper compound that lets you perform your own experiments with HTC Vive. Aximmetry does not officially support using HTC Vive as a camera tracking system. We strive to provide the means for such experiments in Aximmetry, but we cannot offer any support for them, and we cannot guarantee any results.
Also note that this environment only provides a basic image composition method, where the camera image is simply overlaid onto the virtual background. Talent covering, reflections and shadows cannot be achieved; these features will only become available in future releases of the Aximmetry DE editions.
This documentation assumes that you are already familiar with how to add an Unreal scene to your Aximmetry compound and work with it. If not, please see: Getting started with the Unreal Engine based DE edition and How to install and work with the Unreal Engine based DE edition
Tracking-specific control board
You can find a compound supporting camera tracking with Unreal at [Common_Studio]:Compounds\TrackedCam_Unreal\TrackedCam_Unreal_Prev_3-Cam.xcomp
Drag and drop this compound into your scene (instead of the VirtualCam one presented in the previous documentation; you won’t need that).
Connect the Camera pins:
Wire the Unreal output back to Rendered:
Expose Preview and Out.
You’ll see the Unreal scene with the camera at the origin.
Setting up camera input and keying
Use the INPUT and KEYER panels for this. They work exactly the same way as with VirtualCam; please see the relevant documentation.
As you may notice, the CROP panel is missing. The reason is that with a moving camera you cannot use static cropping. Instead, a 3D studio mask is provided; see below.
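For context, the composition in this environment is the simple overlay mentioned in the introduction: the keyed camera image is placed over the Unreal render. Below is a minimal sketch of that idea, assuming float RGB images in the 0..1 range and a matte produced by the keyer; it is only an illustration, not Aximmetry's actual keyer.

    # Simple "over" composite: keyed camera image on top of the Unreal render.
    import numpy as np

    def overlay_composite(unreal_rgb, camera_rgb, camera_matte):
        # camera_matte: 1.0 = talent/foreground, 0.0 = keyed-out green screen
        a = camera_matte[..., None]                     # broadcast matte over the RGB channels
        return camera_rgb * a + unreal_rgb * (1.0 - a)  # plain overlay, no shadows or reflections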
Adding HTC Vive tracking
Switch to Studio mode in order to easily see any camera movements.
On the INPUT panel find the Tracking Device property and select the tracker mounted on your camera:
At this point you already have basic tracking. If you move your Vive tracker, the virtual camera will move accordingly.
Adjusting the tracking delay
Place a talent or any object in the green area.
On the INPUT panel, adjust Tracking Delay while you move your camera back and forth. It’s expressed in frames. The value is correct when the motion of the real object and of the virtual pattern starts and stops at the same time on the screen.
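A delay expressed in frames simply means that each video frame is paired with a tracking sample taken that many frames earlier, because the video pipeline typically runs behind the tracker. The sketch below is a hypothetical illustration of such a delay buffer, not Aximmetry's implementation.

    # Hypothetical per-frame delay buffer for tracking data.
    from collections import deque

    class TrackingDelay:
        def __init__(self, delay_frames):
            self.samples = deque(maxlen=delay_frames + 1)

        def apply(self, latest_pose):
            self.samples.append(latest_pose)  # one tracking sample per video frame
            return self.samples[0]            # pose from `delay_frames` frames ago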
Calibrating the tracker position
Please make sure that your Vive system is properly calibrated so that the origin is on the floor.
By this point you should have mounted your Vive tracker on your camera. Since the position, and most likely the orientation, of the tracker differs from that of the actual camera sensor, you need to make some adjustments. Select the ORIGIN panel.
Adjust Delta Head Transf according to the position and rotation of the tracker relative to the camera sensor. The rotation is the most sensitive parameter; try to be as precise as you can. This is the hardest part of the process.
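Conceptually, Delta Head Transf is a fixed offset composed with the tracker pose to obtain the virtual camera pose. The following is a rough sketch under that assumption (not Aximmetry internals), using 4x4 homogeneous transforms; it also hints at why the rotation part is so sensitive: a small angular error is amplified by the distance to the subject.

    # Conceptual composition of the tracker pose with the tracker-to-sensor offset.
    import numpy as np

    def virtual_camera_pose(tracker_pose, delta_head_transf):
        # Both arguments are 4x4 homogeneous transforms in the same convention.
        return tracker_pose @ delta_head_transf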
Calibrating the camera zoom
Since in this environment you cannot provide zoom encoder data, you’ll have to use a fixed camera zoom of your choice. Once you have set it, you have to synchronize the virtual camera zoom to it.
Switch to Studio mode, because watching the virtual patterns and/or markers is the easiest way to do this.
You can use either a talent or any object placed in the green area.
Select the INPUT panel, and turn on Manual Lens.
Adjust Manual Zoom while you rotate your camera left and right, back and forth.
The right value is the one at which you see the least sliding between the real object and the virtual pattern.
Note that this is a rudimentary method and does not take lens distortion into account, but in this experimental environment it might be sufficient.
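If it helps to reason about the adjustment: with a simple pinhole model, focal length (zoom) and field of view are related as shown below. The names and numbers are only illustrative, not Aximmetry parameters; the point of the adjustment is that the virtual field of view ends up matching the fixed optical zoom of the real lens.

    # Illustrative pinhole-camera relation between focal length and horizontal FOV.
    import math

    def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
        return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

    # Example: a 24 mm lens on a 36 mm wide sensor gives roughly 73.7 degrees of horizontal FOV.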
Studio setup
If you only have a limited-size green screen, you need a 3D mask that moves and rotates with the camera to ensure that the out-of-green parts of your studio are always cropped.
Switch to Studio mode.
In this mode you’ll see a box-shaped schematic studio model overlaid on the camera input image. You can use the properties of the STUDIO panel to mark the green and non-green areas of your physical studio on this schematic model.
Via Base Cam Transf you can align the orientation of your physical room to the walls of the schematic model.
By adjusting Front Wall, Left Wall etc. you can set a size for the model to approximate the walls of your studio.
With the Green * properties you can approximate the green surface on the front and side walls and on the floor. The numbers represent the distances of the edges of the green area from the corners.
You do not have to be precise to the millimeter. The point is that wherever you move or rotate the camera, the green surface of the model always remains within the boundaries of the real green screen.
The result should look something like this:
As you can see, you can also use two virtual markers to mark certain features of the physical room, which lets you check the quality of the tracking more precisely. You can also set the opacity of the schematic model.
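To illustrate the idea behind the Green * properties, the sketch below treats each wall (or the floor) as a rectangle whose green region is inset from the corners by the configured distances; the 3D mask keeps only what falls inside. All names and parameters here are hypothetical, not the STUDIO panel's internals.

    # Hypothetical check: is a point on a wall plane inside the wall's green region?
    def is_green(u, v, wall_width, wall_height,
                 inset_left, inset_right, inset_bottom, inset_top):
        # (u, v) is a point on the wall plane, measured from its bottom-left corner.
        return (inset_left <= u <= wall_width - inset_right and
                inset_bottom <= v <= wall_height - inset_top)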
Aligning the virtual set
Having set up the studio mask, you can switch to Final mode to adjust the final composite image.
On the SCENE panel you can align the orientation of the virtual set and the talent using the Base Cam Transf property.
IMPORTANT: do not confuse this property with the one on the STUDIO panel with the same name. The latter aligns the schematic studio model with the studio room, while this one places the talent within the virtual set.