Note that this documentation assumes that you are already familiar with the standard setup of a virtual studio system with virtual cameras. If not, please watch the documentation: Setting Up Virtual Sets with VR Cameras
In this documentation, we describe how to transfer and calibrate a person's image into a 3D virtual space, using real-world tracked cameras.
Inputs and Outputs
Setting up the camera and other SDI inputs and specifying the outputs is done in exactly the same way as described in Setting Up Virtual Sets with VR Cameras.
Defining camera tracking devices
By default, no camera tracking devices are listed in the system. First, you have to select the brand you want to use and specify its network configuration.
Go to Device Mapper and click Manage Devices at the bottom.
Scroll down to the Camera Tracking section and select the device brand you plan to use.
First, specify the Data rate: the rate at which the tracking system sends position information. For example, if you work with a 50i-based studio system, the rate will most likely be 50. Some systems send data at a multiplied rate though, e.g. 200.
Then you have to add the device or devices that are connected to your machine by clicking Add.
The different brands require different network setups.
For example, if you use Mo-sys, you can select between serial-port and UDP-based solutions, and in both cases, you have to specify an input port and a Camera ID. The latter is needed because data from multiple cameras can be received through a single port simultaneously.
In the case of Stype, only a UDP port has to be specified:
Ncam is a TCP-based system, therefore you have to specify both a server address and a port:
If you use multiple tracked cameras you have to add several devices with different ports / Camera IDs.
Thoma is a special case: you only have to specify a UDP port, but it will appear as 4 different devices in the system, since a Thoma system sends packed data for 4 cameras simultaneously.
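As a sanity check before adding the devices, you can verify outside of Aximmetry that the tracking system is actually sending packets to the port you configured. Below is a minimal, hypothetical Python sketch; the port number is just an example, and tracking protocols are binary, so this only confirms that data is arriving at all:

```python
import socket

# Hypothetical example port: use the port you entered in the Device Mapper.
TRACKING_PORT = 6301

def receive_one_packet(port, timeout=5.0):
    """Wait for a single UDP packet on the given port; return (size, sender)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("0.0.0.0", port))
        data, sender = sock.recvfrom(4096)
        return len(data), sender

# Usage (blocks until a packet arrives or the timeout expires):
# size, sender = receive_one_packet(TRACKING_PORT)
# print(f"received {size} bytes from {sender}")
```

If packets arrive at the expected data rate here but not in Aximmetry, the problem is likely in the Device Mapper configuration rather than in the network.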
After adding all necessary tracking devices click OK. The Composer will restart.
Mapping camera tracking devices
Go to Device Manager / Camera Tracking. Now you’ll see all the tracking devices you’ve added previously in the dropdown list.
Suppose we’ve added three Stype devices.
The tracking device mounted on the camera that is defined as video input #1 in Aximmetry has to be defined as tracking device #1. The same goes for #2, #3, etc.
Some tracking systems, such as Stype, Mo-sys, Trackmen, and Ncam, provide their own lens calibration system, and always send all the FOV and lens distortion data along with the camera position, according to the current zoom state of the lens. In this case, you do not have to select any further options.
Other tracking systems only provide camera position data. In this case, Aximmetry's own Camera Calibrator application has to be used to produce the lens data for each camera. This is discussed in separate documentation. When all the data has been produced, you can select an appropriate lens data file for each tracking device using the Mode column in the Device Mapper.
When you have finished mapping the devices, click Start.
In your compound, you have to use one of the TrackedCam_xxxx.xcomp modules to access the features related to tracked cameras.
You can find an example in the News Room - TrackedCam_3-Cam.xcomp stock scene.
Its control boards are somewhat similar to the ones used with virtual cameras. However, you need some additional setup, and the billboards work very differently. Let’s see.
If you’ve followed the Device Mapper setup we described above, then for each INPUT you leave the default Device selections for both Camera and Tracking. It’s Mapped #1, #2, etc. for each input.
What needs your attention is the Delay values. The camera video frames and the corresponding tracking data packets arrive at different times to the rendering system, depending on the hardware you’re using. You have to compensate for the difference manually, by watching the output and adjusting the Tracking Delay value while you (or your colleague) make sudden movements with the camera.
The delay value is measured in frames, i.e. in units of the system’s current video frame time, e.g. 1/25 second if you use a 50i system.
IMPORTANT: the delay value can be fractional (e.g. 2.5), since there is no guarantee that the difference can be expressed in whole frames.
Ideally, both the cameras and the tracking devices are genlocked which ensures that the delay you’ve set remains constant - but the delay value itself still can be fractional.
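The frames-to-seconds relationship above is simple arithmetic, sketched here as a hypothetical helper (not part of Aximmetry):

```python
def delay_in_seconds(delay_frames, frame_rate):
    """Convert a delay given in frames to seconds.
    frame_rate is the system's video frame rate in frames per second
    (25 for a 50i system, since a full frame takes 1/25 s)."""
    return delay_frames / frame_rate

# A fractional Tracking Delay of 2.5 frames in a 50i (25 fps) system:
print(delay_in_seconds(2.5, 25))  # 0.1 seconds, i.e. 100 ms
```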
The example above is suitable if the tracking data arrives earlier than the corresponding video frame. In most cases, this applies.
But in the case of some tracking systems, the opposite is true. In this case, you have the following options.
a) You can specify a negative value for Tracking Delay to a certain degree.
The threshold is determined by the global In-to-out latency of the system you’ve set in Preferences. If you cross the negative threshold, you’ll get a Cannot keep latency error message in the log and your tracking will stutter.
b) You can increase the global In-to-out latency until the desired negative value for Tracking Delay becomes possible.
c) You can also increase the Camera Delay of the individual input.
NOTE that in this case, the particular camera image will be late compared to the other cameras, so this is not an ideal solution.
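The constraint behind option a) can be illustrated with a small sketch. The exact internal formula is not documented here; this only models the stated idea that the magnitude of a negative Tracking Delay must stay within the global In-to-out latency budget:

```python
def tracking_delay_is_safe(tracking_delay_frames, in_to_out_latency_frames):
    """Return True if the (possibly negative) Tracking Delay fits into the
    global In-to-out latency budget. Beyond that threshold, the system logs
    a 'Cannot keep latency' error and the tracking stutters."""
    if tracking_delay_frames >= 0:
        return True
    return -tracking_delay_frames < in_to_out_latency_frames

print(tracking_delay_is_safe(-1.5, 4))  # True: fits within the budget
print(tracking_delay_is_safe(-5.0, 4))  # False: too negative for this latency
```

This also shows why option b) helps: raising the In-to-out latency enlarges the budget, so a more negative Tracking Delay becomes acceptable.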
If you use standalone zoom/focus encoders that are independent of the camera tracking system, specify the device representing them in the Zoom Device property. You can also define an independent delay for this device.
Another use of the Zoom Device property is when you want to use a PTZ camera but also want to be able to move it around. In this case, you can use an independent tracking system for the positional data and the PTZ's own reported data for rotation and zoom+focus. Specify them in the Tracking Device and Zoom Device properties, respectively. The lens calibration data still has to be selected in the Calibration Profile property.
NOTE: Some PTZ cameras send the zoom/focus data with a different delay than the rotation data. In this case, the same device can be selected for both Tracking Device and Zoom Device, and different delays can be specified in Tracking Delay and Zoom Delay.
On the MONITOR panels, you can find the same Input and Keyed modes you’re already familiar with from the VirtualCam boards. Keying is done in exactly the same manner.
The only difference is that there’s no Cropped mode. The reason for this is that for a tracked system, simple 2D cropping is not suitable. You need 3D masks that move and rotate with the camera to ensure that the out-of-green parts of your studio are always cropped.
This is exactly the purpose of the Studio mode.
In this mode, you’ll see a box-shaped schematic studio model overlaid on the camera input image. You can use the properties of the STUDIO panel to mark the green and non-green areas of your physical studio on this schematic model.
Via Base Cam Transform you can align the orientation of your physical room to the walls of the schematic model.
By adjusting Front Wall, Left Wall, etc. you can set a size for the model to approximate the walls of your studio.
With the Green xxxx properties you can approximate the green surface on the front and side walls and on the floor. The numbers represent the distance of the edges of the green from the corners.
You do not have to be precise to the millimeter. The point is that wherever you move or rotate the camera, the green surface of the model always remains within the boundaries of the real green screen.
The result should look something like this:
As you can see, you can also use two virtual markers to mark certain features of the physical room, so you can check the quality of the tracking more precisely. For example, when rotating your camera equipment, the virtual markers should stay in place relative to the real-world camera input.
Also, you can set the opacity of the schematic model.
You can also specify more abstract non-green areas in the flow graph by importing and connecting 3D models to the Additional Mask pin of the TrackedCam_xxxx.xcomp module.
These models for example can mask out a chair in front of a green screen or specify the boundaries of a complex green wall.
Aligning the virtual set
Having set up the studio mask, you can switch to Final mode to adjust the final composite image.
On the SCENE panel, you can align the orientation of the virtual set and the talent using the Base Cam Transf property.
IMPORTANT: do not confuse this property with the one on the STUDIO panel with the same name. The latter aligns the schematic studio model with the studio room, while this one places the talent within the virtual set.
IMPORTANT: make sure that the talent is scaled proportionately relative to the virtual studio when you place it. In the case of a virtual set with a tracked camera, this is the time when you can "scale" the talent via its position. Once you're done with this, you can turn on Use Billboards (see later).
As you can see the image above has ugly black bands on the edges. The reason for this is that the lens distortion data coming from the tracking system is applied to the virtual background to match it with the distortion of the camera image. In this case, the distortion pulls the image towards the center.
To eliminate the black edges we have to render the image in a larger resolution and apply the distortion on that larger image. The property that controls this is the Edge Expand. Increase it gradually by hundredths until the borders are gone. Do not use an unnecessarily large value, because it can have a serious performance impact.
Note that the different zoom states have different distortions, so you have to check Edge Expand for all zoom states.
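To get a feel for the performance cost of Edge Expand, here is an illustrative sketch. It assumes, purely for illustration, that the value scales both render dimensions linearly; the exact scaling is internal to Aximmetry:

```python
def expanded_resolution(width, height, edge_expand):
    """Assumed model: Edge Expand enlarges both render dimensions
    by a factor of (1 + edge_expand)."""
    return round(width * (1 + edge_expand)), round(height * (1 + edge_expand))

w, h = expanded_resolution(1920, 1080, 0.05)
print(w, h)                     # the enlarged render target
print((w * h) / (1920 * 1080))  # relative pixel count: roughly 10% more work
```

Even a modest value like 0.05 adds about 10% more pixels to render, which is why you should stop increasing Edge Expand as soon as the black borders disappear.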
To blend the talent into the background smoothly use the LIGHT WRAP panels.
By default, the talent image is simply overlaid on the virtual background without any modification. This is the most basic usage. It’s perfect if you do not need any coverage of the talent by the virtual objects.
Talent coverage: using the billboards
What if we want to place the talent behind a table for example? In this case, we somehow have to produce a mask from the table to cut out the talent image.
If you use one or more billboards this effect is provided automatically.
Billboards here behave very differently from the ones used in VirtualCam. They only serve as 3D masks that cut out a part from the talent image.
To use them turn on Use Billboards on the SCENE panel.
Each camera input can have a maximum of three billboards. The reason for this is that the camera can show multiple people at once. If, for example, we want them to stand around a table, we need a more complex mask, which can be achieved by the smart use of multiple billboards.
But now let’s start with a single-billboard case. Switch on one billboard and switch off all the others for INPUT 1.
By also switching on the lightbulb icon, you can see the actual extent and position of the billboard, which makes placing it easier.
Note that initially you might not see the talent at all. This is because the billboard is located somewhere off-screen in the scene.
Therefore we suggest you use the Lock To Camera option first. It always keeps the billboard in front of the camera at a fixed distance specified by the Locked Distance property. Adjust the distance, the size, and optionally the shift and rotation of the billboard via the Locked * properties.
When you have finished placing the billboard with this method, turn off Lock To Camera. Now you can move the camera; the billboard will stay at its last position. If you need further refinement, use the Transformation property or the arrow handles in the Scene Editor as usual.
When placing the billboard leave some room in front of the feet so that the talent can walk around.
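The geometry behind Lock To Camera can be sketched as follows. This is an illustration only, using a simplified yaw-only camera, not Aximmetry's internal code:

```python
import math

def locked_billboard_position(cam_pos, cam_yaw_deg, locked_distance):
    """cam_pos is (x, y, z); cam_yaw_deg is the camera's rotation around the
    vertical (Y) axis. Returns a point locked_distance in front of the camera."""
    yaw = math.radians(cam_yaw_deg)
    forward = (math.sin(yaw), 0.0, math.cos(yaw))  # forward in the XZ plane
    return tuple(p + locked_distance * f for p, f in zip(cam_pos, forward))

# A camera at head height looking straight ahead, Locked Distance = 3 m:
print(locked_billboard_position((0.0, 1.7, 0.0), 0.0, 3.0))  # (0.0, 1.7, 3.0)
```

This is why the billboard follows every camera move while Lock To Camera is on: its position is continuously re-derived from the camera transform, and turning the option off simply freezes the last computed position.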
Now if we want to put the talent behind a table we have to move the billboard there. Turn your camera towards the table (or rotate the SCENE Base Cam Transf depending on how you want to use the scene). Use the Lock To Camera method described above to place the billboard right behind the table. Then turn off the Lock To Camera.
You might want to align the billboard precisely to the edge of the table. For this, you can use the usual scene editing methods, like arrow handles in the Scene Editor or the Transformation property. For a better view, you might want to switch to Free Camera mode when doing this.
For this scenario make sure Look At Camera is turned off.
After making the alignment, switch back to the tracked view.
Now if you switch off the lightbulb icon on the BILLBOARD panel, you can see the final result.
Let’s put the talent and the billboard back in their original positions.
As you can see, the billboard produces a mirror image on the floor. But it’s not quite right, since the bottom edge of the billboard is well in front of the talent’s feet. To compensate for this, adjust the Mirror Offset Z property.
When setting up the offset you have to take into consideration that the talent will walk forward and backward a bit, so you have to find a good middle value.
Lighting and shadow
The compositing modes described so far only perform masking of the talent without any other modification of the talent image.
If you need virtual lighting and/or shadow of the talent you have to switch on Allow Virtuals on the SCENE panel.
NOTE that Use Billboards has to be turned on as well, since the lighting effects are based on the billboards.
Now you can point a shadow casting light source on the talent. You also need to properly set up the Luminosity, Ambient, Diffuse properties of the ADJUSTER. Please watch this tutorial section if you’re not familiar with the talent lighting basics.
You can also adjust the shadow position relative to the billboard in order to align it to the feet.
But it is always better not to light the feet directly, to hide the alignment problems, especially if the talent walks around.
Please only use the Allow Virtuals mode if you absolutely need the virtual lighting, because it has a drawback: the back-and-forth lens distortion applied during rendering can make the talent image a bit blurry.
Automatic positioning of the billboard
If you want the talent to be able to walk around virtual objects, or to walk greater distances while the reflection is displayed in the correct position, the methods above are insufficient. You need a way to have the billboard follow the talent's actual position in 3D.
Using a tracker
The most reliable way to do this is to attach a tracker device to the talent, e.g. a Stype Follower, an Antilatency Tag, or a Vive Tracker. Ideally, the tracker is an integral part of the same system you use for camera tracking (e.g. Stype Red Spy and Stype Follower); otherwise, you have to align their coordinate systems yourself.
In order to use this functionality select Tracking for Auto Position mode and specify the tracking device.
The system will only adjust the X and Z coordinates of the billboard according to the tracker position. The Y coordinate and the rotation will be kept intact, so you still have to specify them manually. Alternatively, you can use the Look At Camera option if it suits your scenario. To avoid cutting off the talent's feet, you can specify an Auto Pos Offset to bring the billboard a bit forward relative to the actual tracked position. The Mirror and Shadow Offset parameters work as usual, but of course, they are also relative to the tracked position. This way the reflection and the shadow can follow the talent automatically.
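The described behavior can be sketched as follows. This is illustrative only; in particular, treating Auto Pos Offset as a simple X/Z offset is an assumption:

```python
def auto_positioned_billboard(manual_pos, tracked_pos, auto_pos_offset=(0.0, 0.0)):
    """manual_pos and tracked_pos are (x, y, z); auto_pos_offset is (x, z).
    X and Z follow the tracker (plus the offset); Y keeps the manual placement."""
    x = tracked_pos[0] + auto_pos_offset[0]
    z = tracked_pos[2] + auto_pos_offset[1]
    return (x, manual_pos[1], z)

# Talent tracked at (2.0, 1.1, 4.0); the billboard keeps its manual height 0.9
# and is brought 0.3 m forward along Z by the offset:
print(auto_positioned_billboard((0.0, 0.9, 0.0), (2.0, 1.1, 4.0), (0.0, -0.3)))
```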
Using an optical algorithm
If you cannot use any tracking solution for the task you can try the Optical mode.
It determines the talent's position by finding the bottom of their feet within the camera image. Proper keying is vital for this algorithm.
Please note that this option does not work perfectly in all cases. For example, if the talent switches between standing on the front or back foot regularly, this might not be a satisfactory solution. Please always test the functionality.
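The core idea of such an optical algorithm can be sketched as a bottom-up scan of the key mask for the lowest foreground pixel. Aximmetry's actual implementation is not public; this only illustrates the principle, and it also shows why keying quality matters (any noise in the mask shifts the detected feet):

```python
def find_feet(mask):
    """mask is a list of rows (top to bottom) of 0/1 keying values.
    Scan upward from the bottom; return (row, col) of the lowest
    foreground row's horizontal center, or None if the mask is empty."""
    for row in range(len(mask) - 1, -1, -1):
        cols = [c for c, v in enumerate(mask[row]) if v]
        if cols:
            return row, sum(cols) // len(cols)
    return None

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],  # the lowest foreground row: the "feet"
    [0, 0, 0, 0],
]
print(find_feet(mask))  # (2, 1)
```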
Additional virtual camera movements
You can add virtual camera movements on top of the real camera position.
The motion paths are applied relative to the actual position and rotation of the tracked real camera. Therefore you can combine the two kinds of movement. For example, you can rotate the physical camera while applying a closing virtual motion, thus achieving an interesting flying curve that you can control manually.
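The composition of the two movements can be sketched in a simplified 2D form (position in the horizontal plane plus yaw); the axis conventions here are illustrative, not Aximmetry's:

```python
import math

def combined_camera(tracked_pos, tracked_yaw_deg, path_offset, path_yaw_deg):
    """tracked_pos and path_offset are (x, z) in the horizontal plane.
    The virtual path offset, given in the tracked camera's local frame,
    is rotated into world space by the tracked yaw; the yaws simply add up."""
    yaw = math.radians(tracked_yaw_deg)
    ox, oz = path_offset
    wx = tracked_pos[0] + ox * math.cos(yaw) - oz * math.sin(yaw)
    wz = tracked_pos[1] + ox * math.sin(yaw) + oz * math.cos(yaw)
    return (wx, wz), tracked_yaw_deg + path_yaw_deg

# A real camera at (1, 2) looking straight ahead, plus a virtual path step
# that shifts 0.5 m sideways, pulls back 1 m, and adds a 10-degree pan:
print(combined_camera((1.0, 2.0), 0.0, (0.5, -1.0), 10.0))  # ((1.5, 1.0), 10.0)
```

Because the path offset is expressed in the tracked camera's local frame, panning the physical camera bends the whole virtual path along with it, which is what produces the combined flying curve.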
To add virtual camera movements use the Camera VR Path and Edit Camera Path panels.
The use of these panels is exactly the same as with the VirtualCam control boards. They are detailed in the VR Camera documentation.
You have to be careful: do not use paths that move the camera sideways too much, because the talent image will become noticeably distorted.