
How to Record Camera Tracking Data



We differentiate between two kinds of tracking data recording, each serving a different purpose. They can be used independently of each other in any combination.

Input recording

Its purpose is to enable offline rendering of the final show using pre-recorded camera images and tracking data. 

Aximmetry supports offline rendering. In some cases your workstation cannot render a live show with a high-quality video input. In that case, you can make a low-resolution recording in Aximmetry that includes the lens distortion and tracking data, while your camera records the video in high resolution. Later you can render the high-resolution video and your scene offline.

Video recorded: the raw, unmodified camera input image.

Tracking data recorded: the raw, unmodified data coming from the camera tracking system, plus the full lens distortion data, either coming from the tracking system or calculated from Aximmetry's own calibration file. The data is stored in an Aximmetry-specific file format (.xdata).

All the inputs or a selection of them can be recorded individually.

Final composite recording

Its purpose is to enable further 3D post-production work with third-party rendering software using the final composite output of Aximmetry, both image and tracking data.

Video recorded: the final composite of the virtual background and the keyed camera images.

Tracking data recorded: the final camera position and focal length data, already modulated by the Base Cam Transform and ORIGIN settings and the VR path motions. The data is stored in an FBX file so that it can be used in any third-party system. Please note that currently the lens distortion data cannot be recorded with this method. Note that this feature can also be used with VirtualCam* compounds to record virtual camera motions.

IMPORTANT: Aximmetry can export FBX files but cannot import or otherwise work with them. We recommend the FBX format if you want to use third-party software for post-processing.

Input recording

This functionality is added to the INPUTS control board of each TrackedCam*.xcomp.

RecordingTrackingData Image1.png


The pairing of the camera video frames with the corresponding tracking data frames is based on timecode. Therefore it is mandatory to provide the system with a running timecode that is compatible with the current system frame rate.
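The pairing can be pictured with a short sketch (plain Python, not Aximmetry's actual implementation; the frame rate and timecode values are made-up examples):

```python
# Illustrative sketch only: both streams are keyed by the absolute frame index
# derived from their timecodes, which is why the timecode must be compatible
# with the system frame rate.
FPS = 25  # assumed system frame rate for this example

def timecode_to_frame(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame index."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Two independently recorded streams, keyed by timecode:
video    = {timecode_to_frame(tc): "video frame"    for tc in ("10:00:00:00", "10:00:00:01")}
tracking = {timecode_to_frame(tc): "tracking frame" for tc in ("10:00:00:00", "10:00:00:01")}

# A video frame is paired with the tracking frame carrying the same timecode:
pairs = {i: (video[i], tracking[i]) for i in video.keys() & tracking.keys()}
```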

Using individual timecode

By default, each recording has its own timecode that starts from zero when the recording begins. This is the simplest way to provide synchronization.

Using input timecode

However, you may want to use a timecode the camera provides via SDI. It can be either the camera's running internal timecode or one taken from a central timecode generator attached to the camera.

This way you even have the freedom to record the image on the camera itself for the best possible image quality; you'll still be able to pair it with the tracking data recorded in Aximmetry.

To do this, turn on the Use Input TC option on the RECORD panel:

RecordingTrackingData Image2.png    RecordingTrackingData Image3.png


Select the RECORD panel.

Select which inputs you want to record. Please take into consideration that recording too many inputs simultaneously may impact system performance.

RecordingTrackingData Image5.png

You can specify the usual parameters of the recording format. The same settings will be used for all cameras, but the file names will be suffixed individually based on the tags you provide (Output Suffix X). Furthermore, each take gets an individual index number. Since these recordings will be reused later, we recommend a higher bitrate (depending also on your storage capacity).

RecordingTrackingData Image6.png

Press the red circle button to start the recording.

RecordingTrackingData Image7.png

On the preview screen, you'll see a red counter displaying the recording time. Also, the indices of the currently recording inputs are indicated.

RecordingTrackingData Image8.png

Use the white stop button to stop the recording. When you stop, the last recorded file(s) are automatically entered into the Input X properties of the PLAYBACK panel, enabling an immediate check of the recordings.

RecordingTrackingData Image9.png

Playback is covered later in this documentation.

Please note that besides the video file, a .xdata file with the same name, containing the tracking data, is also created.

Recording high-resolution video on the camera

As discussed above, you might want to record the image on the camera itself for the highest possible quality and only record the tracking data with Aximmetry. For that, choose the Tracking Only option:

RecordingTrackingData Image10.png

Make sure your camera sends a running timecode through SDI and also uses the same timecode for its own recording.
Turn on Timecode Master in the INPUT box to use this timecode for the recording. In this picture, Timecode Master is turned on in the INPUT 1 panel, but you may want to use a different input's timecode.

Turn on Use Input TC on the RECORD panel to use this external timecode.

RecordingTrackingData Image11.png

Please note that in Tracking Only mode you still get a recorded video, but it is created with highly degraded quality for minimal performance impact. It only serves as a placeholder. Later, when you want to play back the material, replace this placeholder by copying the recorded file from your camera's SD card over it. Make sure that the file names of the replacement video and its corresponding tracking data file (.xdata) match (see later: Playback).


All inputs are automatically recorded along with their audio. By default, the audio is taken from the same SDI/HDMI input the video is coming from. To use a separate sound card or any USB device for receiving audio, turn on the Use Audio Device option for the given input and select the corresponding system audio device:

RecordingTrackingData Image12.png    

To hear the incoming audio, turn on the speaker button on the AUDIO panel and, in the Monitor Device property, specify which Windows audio output device you want to hear the audio on (usually the default Primary Driver will be fine):

 RecordingTrackingData Image15.png

To synchronize the incoming video and audio signals, use the Audio Dev Delay property. It is expressed in seconds.

RecordingTrackingData Image16.png  RecordingTrackingData Image17.png

Here you can also adjust the Audio Level. It's relative and expressed in decibels; 0 means no change.
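As a quick reference for how these two units behave, here is a small sketch (plain Python; the sample rate is an arbitrary example, not an Aximmetry setting):

```python
def db_to_gain(level_db: float) -> float:
    """Relative audio level in decibels to a linear gain factor;
    0 dB leaves the signal unchanged."""
    return 10.0 ** (level_db / 20.0)

def delay_in_samples(delay_seconds: float, sample_rate: int = 48000) -> int:
    """A delay expressed in seconds corresponds to a whole number of
    audio samples at a given sample rate."""
    return round(delay_seconds * sample_rate)
```

For example, `db_to_gain(0.0)` returns 1.0 (no change), and a setting of -6 dB roughly halves the signal amplitude.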


Playback

To play back a recording, select the video file(s) on the PLAYBACK panel. You can specify a file for each input separately. If one of the Input X properties is left blank, the corresponding input will continue to receive the live signal even in playback mode.

RecordingTrackingData Image18.png    RecordingTrackingData Image19.png

As we mentioned earlier, after each recording session the last recorded file(s) are automatically entered into these properties.

Please note that here you select a video file, but a corresponding .xdata file with the same name must exist beside it as well.

If you wish to replace the low-resolution video recorded with Aximmetry with the high-resolution video recorded on the camera, take the following steps:

  • Copy the previously recorded video from the camera to the computer.
  • Copy the previously recorded .xdata to the location where you have pasted the high-resolution video.
  • Ensure that both the video and the .xdata file have the same name (you can rename the .xdata to match the video's name or vice versa).
  • You should see something like this:
  • Select the high-res video on the PLAYBACK panel.

Turn on the Play button on the panel to put the system into playback mode:

The preview screen will indicate the playback mode and also the indices of the inputs that are actually involved in the playback (the other inputs continue to receive their live signal).

RecordingTrackingData Image21.png

By default, you will only see the first frame of the recording, frozen. The reason is that the system expects you to use a Sequencer to play back the video and tracking, because this makes it very easy to add synchronized animation to your show via additional sequence tracks. So please add a Sequencer and connect it this way:

RecordingTrackingData Image22.png

If you use recordings for multiple inputs connect the Sequencer for all of them:

RecordingTrackingData Image23.png

If you open the Sequence Editor by double-clicking the Sequencer you will see a track for each video file you specified on the PLAYBACK panel.

RecordingTrackingData Image24.png

Now you can play, stop, or seek the Sequencer as usual to see the recorded material along with the tracking.

RecordingTrackingData Image25.png

As we mentioned you can add any additional animation tracks that will play in sync with the recorded material.

RecordingTrackingData Image26.png

RecordingTrackingData Image27.png

On using the Sequencer in general, please see this documentation.

Combining playback mode with recording

You can leave the playback mode on even while you are recording.

Firstly, this way you can instantly see the results when you stop recording, since the last recorded file is automatically selected for playback.

Secondly, the files selected for playback on the inputs you did not select for recording remain in playback. To make this clearer, let's suppose you have a recording for all the inputs:

RecordingTrackingData Image28.png

Now you start a new recording with the following settings:

RecordingTrackingData Image29.png

In this case, the current live feed on Input 3 will be recorded, while the previously recorded material for Inputs 1 and 2 will be played back simultaneously and can be seen, e.g., in MATRIX mode along with the live feed.
Please note that, by the construction of the compound, recording the playback is impossible.

RecordingTrackingData Image30.png

You can also instruct the system to restart the Sequencer automatically each time you start or stop a recording by making the following connection. This helps in both of the above usage scenarios.

RecordingTrackingData Image31.png

Playback time display

If you make the following connection you will also get a display of the current playback position.

RecordingTrackingData Image32.png

RecordingTrackingData Image33.png

Please note that the displayed time is the position of the Sequencer, not of the video file itself. The two can differ because you may shift the video track within the Sequencer to better suit your animation timeline.

Tracking data

When you record, a .xdata file is also created. It contains the full tracking data, including the camera lens data. If you used an Aximmetry lens file created with Camera Calibrator, then the .xdata will contain the final calculated FOV, lens distortion, etc. for each frame. This means that when you play it back, you are guaranteed to see exactly the same result you got when you did the recording.
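The .xdata format itself is Aximmetry-internal and not documented here. Purely as an illustration, a per-frame tracking record conceptually carries data like this (all field names below are hypothetical, not the real file layout):

```python
from dataclasses import dataclass

@dataclass
class TrackingFrame:              # hypothetical structure, for illustration only
    timecode: str                 # pairs the record with the matching video frame
    position: tuple               # camera position from the tracking system
    rotation: tuple               # camera orientation
    fov_deg: float                # final calculated field of view for this frame
    zoom_encoder: int             # raw zoom encoder value, kept so the lens
    focus_encoder: int            # calibration can be redone after recording

frame = TrackingFrame("10:00:00:00", (0.0, 1.6, -3.0), (0.0, 0.0, 0.0), 54.0, 2048, 1024)
```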

However, the raw zoom/focus encoder positions are recorded as well. This means you have the freedom to improve or adjust the lens calibration even after you have recorded the material. To use an improved or different lens file during playback, simply turn off External Lens Data and specify the modified lens file on the Calibration Profile pin.

RecordingTrackingData Image34.png

You can also use the Manual Lens mode to override the recorded lens data. It can also be found among the INPUT# pin values. After turning it On, you can manually adjust the Zoom and Focus values.

Offline rendering

As you read earlier, you can do offline rendering in Aximmetry. This is needed when your scene is too complex for your workstation to render in real time without fps drops. Offline rendering usually takes more time, because it is designed to render the scene in its most detailed form and at the desired frame rate.

To do this, first prepare your playback based on the above.

Next, place a Video Recorder module in your Flow Editor and connect the camera compound's output to its Video pin. Then connect the Sequencer's Rendering output pin to the Video Recorder's Recording pin. This ensures your recorder starts recording when you click the Render button in the Sequencer.

This is the Render button in the Sequencer:

We recommend using the Render feature of the Sequencer. It lets you keep separate control of the Sequencer and the Video Player.
When you click the Render button, the Sequencer plays the whole sequence and stops at the end.

Simultaneously, the Render output jumps to logical '1', switching the Recording input of the Video Recorder to '1', and the recording starts as well. At the end of the sequence, the Render output pin falls back to '0', switching the Recording input back to '0', and the recording stops.
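The behavior of that wiring can be sketched as follows (plain Python modeling the pin levels frame by frame, not actual module code):

```python
def recording_events(render_pin_levels):
    """Derive start/stop events from the Render output's logical level:
    a rise to 1 starts the Video Recorder, a fall back to 0 stops it."""
    events, previous = [], 0
    for frame, level in enumerate(render_pin_levels):
        if level == 1 and previous == 0:
            events.append((frame, "start recording"))
        elif level == 0 and previous == 1:
            events.append((frame, "stop recording"))
        previous = level
    return events

# Render output over a short sequence: rises at the start, falls at the end.
events = recording_events([0, 1, 1, 1, 0])
```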

The next step is to set the rendering frame rate in the Video Recorder module.

In the Video Recorder's Pin values, find the Frame Rate pin and set the desired video frame rate:

NOTE: The default Frame Rate pin value is Realtime, in which the recorder simply follows the live rendering frame rate.
For offline rendering, a specific frame rate must be selected instead of the Realtime value. A specific frame rate lets Aximmetry generate the frames for the video as rapidly as possible.

Click on the Render button in the Sequencer.

Aximmetry will render your scene at the desired frame rate. It waits until the current frame is fully calculated, then jumps to the next one. That's why offline rendering might take more or less time than live rendering.
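Conceptually, the offline render loop works like this (a sketch, not Aximmetry code; `render_frame` stands in for however long a frame takes to compute):

```python
def offline_render(num_frames, render_frame):
    """Advance strictly one frame at a time, moving on only after the current
    frame is fully computed -- unlike live rendering, no frame is ever dropped."""
    frames = []
    for index in range(num_frames):
        frames.append(render_frame(index))  # blocks until the frame is ready
    return frames

clip = offline_render(3, lambda index: f"frame {index}")
```

Because each step waits for the frame rather than for the clock, total render time depends only on scene complexity, not on the playback frame rate.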

Double sampling

Let's say you need a 1080p video output and want to give it the maximum quality. In this case, you can use double sampling. (This is just an example; you can use any resolution you want.)

This means you set the Aximmetry render to double the resolution you need. In this example, that is UHD.

To set the render's resolution, go to the Edit menu, then Preferences.


In Preferences, select Rendering, then choose UHD from the Frame Size dropdown menu.

Now Aximmetry will render your scene in UHD.

Place a Video Recorder in your compound. Pull a wire to connect the Sequencer's Rendering pin to the Video Recorder's Recording pin.

In the Video Recorder's pin values, turn off Keep in size, then set the frame size you want for the recording. In this example, it is 1920 and 1080.

Now if you click the Render button in the Sequencer, the Video Recorder starts recording the downscaled version of the UHD-rendered scene. Because of the higher-quality source, the sampling the recorder performs results in a better-quality 1080p video.
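The quality gain comes from the downscale averaging several rendered pixels into each output pixel. A minimal sketch with a 2x2 box filter illustrates the idea (Aximmetry's actual scaling filter is not specified here):

```python
def downscale_2x(image):
    """Average each 2x2 block of the double-resolution render into one
    output pixel (a simple box filter, for illustration only)."""
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4
             for x in range(len(image[0]) // 2)]
            for y in range(len(image) // 2)]

# A 4x2 "UHD-like" luminance grid becomes a 2x1 "HD-like" one:
small = downscale_2x([[1, 3, 5, 7],
                      [1, 3, 5, 7]])
```

Each output pixel thus carries information from four rendered pixels instead of one, which is why the downscaled 1080p looks cleaner than a native 1080p render.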

Final composite recording

For this scenario, the tracking data recording functionality is added to the Record_* compound you normally use to record the final image.

The path for this compound is: [Common_Studio]:Compounds\Record\Record_3-Audio.xcomp

If you receive audio from multiple inputs simultaneously then connect the Cam Audio 2, 3 pins as well.

On using the Record compound in general please see this tutorial.

Do not forget to connect the Record Data pin. It carries the tracking and other information to be recorded.

By default, the compound only records the video and audio; this is the most common usage. To record the tracking data as well, turn on the TRACKING panel:

On the panel, you can specify whether you want the FBX file generated in ASCII or binary format. The default is the latter, as it's more compact.

RecordingTrackingData Image37.png    RecordingTrackingData Image38.png

IMPORTANT: Aximmetry can only generate FBX files; it cannot work with them directly.


The pairing of the final composite video frames with the corresponding tracking data frames is based on timecode. Therefore it is mandatory to provide the system with a running timecode that is compatible with the current system frame rate.

The tracking data is recorded into the FBX file in the form of several camera animation curves. The curve key timestamps correspond to each frame's timecode.
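In other words, each composite frame contributes one animation key, and consecutive keys are one frame duration apart (a sketch; the fps value is an example):

```python
def key_times(start_frame, num_frames, fps):
    """Key timestamps in seconds for consecutive frames at a fixed frame
    rate, as derived from each frame's timecode."""
    return [(start_frame + i) / fps for i in range(num_frames)]

# At 25 fps, consecutive keys land 1/25 s (0.04 s) apart:
keys = key_times(start_frame=0, num_frames=3, fps=25)
```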

Using individual timecode

By default, each recording has its own timecode that starts from zero when the recording begins. This is the simplest way to provide synchronization.

Using the system Master Timecode

Aximmetry has its own internal timecode that starts from zero when you start Composer. You can use this timecode to differentiate between different takes of a recording. To use it, turn on Use Master TC on the RECORDER control panel:

RecordingTrackingData Image39.png    RecordingTrackingData Image40.png

Using input timecode

You may want to use a timecode provided by one of the cameras via SDI. It can be either the camera's running internal timecode or one taken from a central timecode generator attached to the camera.

The final composite may contain cuts between multiple camera inputs, but the source of the timecode has to be constant, so you have to decide which camera input's timecode will be used. Then turn on Timecode Master for that single input:

RecordingTrackingData Image41.png    RecordingTrackingData Image42.png

This option means that Aximmetry's internal Master Timecode will be overwritten by the incoming timecode of the camera input; in other words, the camera timecode becomes the Master Timecode in Aximmetry. Combining this with the Use Master TC option discussed previously effectively passes the camera timecode to the final recording.


Use the RECORDER panel's properties to set up the output video format as usual. Then use the two buttons of the panel to start and stop the recording.

RecordingTrackingData Image43.png

The time indicator is displayed on the right side of the preview screen, to differentiate it from the input recording indicator.

RecordingTrackingData Image44.png

Camera FOV and lens distortion

The resulting FBX file will contain a camera object with aperture properties and an animated Focal Length that gives the same aspect ratio and field of view as the Aximmetry output image.
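For reference, this is the standard pinhole relation between field of view and focal length that such a camera object encodes (generic math, not Aximmetry-specific code; the 36 mm aperture width is an example value):

```python
import math

def focal_length_mm(horizontal_fov_deg, aperture_width_mm):
    """Focal length that reproduces the given horizontal field of view on a
    film back (aperture) of the given width."""
    return aperture_width_mm / (2.0 * math.tan(math.radians(horizontal_fov_deg) / 2.0))

# Roughly 53.13 degrees of horizontal FOV on a 36 mm back gives ~36 mm:
f = focal_length_mm(53.13, 36.0)
```

Any third-party package reading the FBX recomputes the FOV from the animated focal length and the stored aperture, which is why the two properties together reproduce Aximmetry's framing.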

But please note that currently the lens distortion data cannot be recorded this way.

Using with VirtualCam

It is worth noting that this recording functionality works together with any VirtualCam compound as well. In this case, the FBX will contain the animated virtual camera positions.

Combined usage

The input recording and the final composite recording functionality can be used independently of each other in any combination.

For example, you can start a recording for both of them simultaneously.

But what is more useful is that you can play back a pre-recorded input while recording the final output, along with the tracking data, for further post-processing.
