Using LED walls for virtual production

Introduction

How the Term “LED wall” is Used in Aximmetry

On the Aximmetry control boards, the term “LED Wall” means a contiguous segment of your LED wall structure that can be described as either

  • a flat rectangle
  • or a curved rectangle (where the curve is an arc of a regular circle)

Usually, an LED wall structure can be divided into 3-4 segments that fall into the categories above. Typical setups include:

  • a corner of 3 LED walls: 2 for walls at a right angle and 1 for the floor
  • 1 wall (flat or curved) for the main front display, 2 for the sides at right angles, and 1 for the ceiling (the latter three usually only serve as ambient light/reflection sources).

These are only examples; you can use any configuration.

If a curved rectangle is too long (e.g. a 270° circular display), you have to split it into 3-4 segments so that it can be rendered correctly.

You might also have to further split an LED wall (typically the front one) if you need a very high-resolution rendering for it (e.g. 2 x 4K). In this case, you might decide to split the GPU load among two or more PCs.

Please note that these segments do not necessarily have to be separated in terms of physical connection (HDMI or DP). Depending on the setup of your LED wall processor, you can even transmit the entire image via a single HDMI/DP connection. The imaginary separation is only needed by Aximmetry in order to render all parts of the LED wall correctly. The only cases in which you need multiple physical connections are when you use multiple PCs for rendering, or when the bandwidth of a single HDMI/DP connection is not enough.

Finally, an “LED wall” can actually be a flat TV or a projector if that suits your scenario better.

In the rest of this documentation, we use the term “LED wall” in the sense described above.

Render Passes, Single- vs. Multi-Machine Configuration

In order to provide the correct image content for all LED walls, Aximmetry performs the following render passes:

  • "Frustum" rendering: the image portion that the camera currently sees on the LED wall. This should be done in the highest necessary resolution and quality.
  • "Fill" rendering: one pass for each LED wall; these provide the content for the parts that the camera does not currently see. Since the main purpose of these image parts is to provide an ambient lighting and reflection environment, they can be rendered in reduced quality.
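As a rough illustration of how these passes divide the GPU load, the per-frame pixel counts can be sketched like this. All numbers are hypothetical, not values prescribed by Aximmetry; the wall resolutions are borrowed from the example configuration used later in this document:

```python
# Rough per-frame pixel-load estimate for the two kinds of render passes.
# The Frustum resolution and the Fill scale factor are illustrative only.

frustum = (1920, 1232)                          # high-quality Frustum pass (hypothetical)
walls = [(2816, 1232), (520, 624), (520, 624)]  # one Fill pass per LED wall
fill_scale = 0.5                                # Fill passes rendered at reduced resolution

frustum_pixels = frustum[0] * frustum[1]
fill_pixels = sum(int(w * fill_scale) * int(h * fill_scale) for w, h in walls)

print(f"Frustum pass: {frustum_pixels:,} px")
print(f"Fill passes:  {fill_pixels:,} px")
```

Even with three Fill passes, rendering them at half resolution keeps their combined cost below that of the single Frustum pass, which is why a single PC is often sufficient.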

Depending on the number of LED walls and the resolution reduction of the Fill passes you might find a single PC enough to render all the content.

However, if you need very high-resolution Fill rendering, you might need multiple PCs to perform it. In this case, you have to specify which LED wall is rendered by which PC. Please note that the "Frustum" pass always has to be performed by all the PCs.

A Freeze option is also available for the Fill rendering, which allows using a single machine even if you need high-resolution Fill content.

We'll discuss the setup of these cases later.

Camera Inputs

You can shoot the scene using single or multiple physical cameras.

Note that when using LED walls, the signal coming out of the camera is already the final composition of the real foreground and the virtual background. Therefore you can record the picture on the camera itself or feed the signal directly into your studio equipment for broadcasting/recording.

However, there are a few reasons why it is still recommended to wire the signal back to the Aximmetry PC:

  • It is much easier to set up the position of the LED walls while watching the camera picture. This is the preferred way, see below.
  • You might want to record the final content with Aximmetry.
  • You might want to make adjustments to or post-process the image.
  • You might want to composite further elements on the image like overlays, PIP, text, channel logo, etc.
  • When you use the Green Frustum mode (see below) you might still want to see a preview of what the final image will look like.
  • You use real-time Digital Extension (see below)
  • You use multiple cameras. Please consider that even if you record all camera signals separately, you won’t be able to switch between them arbitrarily during a later editing session, because LED walls can only display the virtual background from the perspective of one selected camera at a time. The Aximmetry UI provides the means to switch the camera input and the background projection in sync.

NOTE: When shooting LED walls, it is recommended to use very different settings in your physical camera than in any other scenario.
For example, turn off all automatic functions, such as auto exposure and autofocus, as these effects should be created by your virtual scene and not by your physical camera.

Camera Tracking

Each camera must have spatial information. It is normally provided by a camera tracking system, but it can also be specified manually if you want a fixed camera.

On the Aximmetry control boards, you can switch between the cameras. The virtual background is always projected onto the LED walls from the tracked position of the currently selected camera. 

Note that the final image will only be correct when it is seen from the perspective of the selected camera. When it is seen from the angle of other cameras or by the studio personnel, it might look odd, but this is normal.

The Example LED Wall Configuration

Throughout this documentation, we will use the following arrangement as an example. On every screenshot, the visible settings will be in accordance with this arrangement. It is very likely, however, that you will use a different arrangement/setup.

LED Wall Properties

Front wall: 8m x 3.5m, curved, 2816 x 1232 pixels

2 rear walls: 2.5m x 3m, flat, 520 x 624 pixels each

UsingLedWallsForVirtualProduction Image1.png

Single-Machine Output Frame Arrangement

The LED wall processor receives the image of all three walls via a single DP cable within a 3856 x 1232 pixel frame in the following arrangement:

UsingLedWallsForVirtualProduction Image2.png
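A quick arithmetic check of this arrangement, using the example wall resolutions from above (the three images placed side by side in one frame):

```python
# Verify that the three wall images side by side produce the stated 3856 x 1232 frame.
walls = {"front": (2816, 1232), "rear_left": (520, 624), "rear_right": (520, 624)}

frame_width = sum(w for w, h in walls.values())   # widths add up horizontally
frame_height = max(h for w, h in walls.values())  # frame is as tall as the tallest image

print(frame_width, frame_height)  # 3856 1232
```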

Multi-Machine Output Frame Arrangement

In order to demonstrate a multi-machine configuration we split the front wall into two halves, 1408 x 1232 pixels each. The right half will be rendered by the central machine, the left half by a second machine, and the 2 rear walls by a third machine.

UsingLedWallsForVirtualProduction Image3.png

The central machine sends its part into the LED wall processor via a DP cable in a 1408 x 1232 pixels frame:

UsingLedWallsForVirtualProduction Image4.png

The second machine sends the left part via another DP cable also in a 1408 x 1232 pixels frame:

UsingLedWallsForVirtualProduction Image5.png

The third machine sends the two rear images through a single DP cable. They are arranged in a 1040 x 624 pixels frame:

UsingLedWallsForVirtualProduction Image6.png

Startup Configuration

Single-Machine Configuration

Preview Output

It is recommended to have a secondary monitor attached to your PC for preview purposes. If you have one, select it as output #1.

UsingLedWallsForVirtualProduction Image7.png

If you do not have one, omit this step; you’ll be able to preview on one of the preview panels of the Composer as well.

Final Program Output

If you need to forward the final output into your studio system for broadcasting/recording, or you simply want to watch it on a dedicated monitor, assign #2 to one of your SDI outputs. Make sure to set the correct frame rate, matching both the camera input and the LED wall output (see below).

UsingLedWallsForVirtualProduction Image8.png

LED Wall Output(s)

Normally you can send all the LED wall images merged into a large frame through a single HDMI or DP cable, and the LED wall processor will send everything to the right place.

The LED wall processor will report the total size of the frame and also the expected frame rate to your PC. This will appear as a monitor of the corresponding size both in Windows and in Aximmetry.

In this case, select this output as #3 and set Sync on it.

UsingLedWallsForVirtualProduction Image9.png

If instead you send the pictures through multiple HDMI/DP cables, assign #3, #4, #5, etc. to these outputs, and set Sync on the first one.

UsingLedWallsForVirtualProduction Image10.png

Multi-Machine Configuration

For what a multi-machine configuration is and how to assign remote engines, please consult this documentation. Here we only discuss the LED wall output-related setup.

It is up to you how many PCs you use and which LED wall(s) these PCs render. E.g. you can use a separate PC for each LED wall, or you can use 2 PCs, each rendering 2 LED walls, etc.

Central Machine Setup

The central machine will handle all the camera and tracking inputs, the final output, and the output(s) of its intended subset of the LED walls. The setup is basically identical to the single-machine case.

UsingLedWallsForVirtualProduction Image11.png

The next step is defining the two other machines as remote renderer engines. Let's suppose they are located at IPs 192.168.0.2 and 192.168.0.3 on the LAN.

UsingLedWallsForVirtualProduction Image12.png

Remote Engines Setup

The following setup has to be done on the second and third machines themselves, by starting the Aximmetry Renderer Config app.

Each remote engine will handle its intended subset of the LED walls only. If they're driven via a single HDMI/DP cable then assign #1 to the corresponding output. (In the case of multiple cables use the #1, #2, #3, etc indices.)

Second machine

UsingLedWallsForVirtualProduction Image13.png

Third machine

UsingLedWallsForVirtualProduction Image14.png

Channel Matrix

After starting the Composer on the central machine go to the Edit menu / Preferences, then Channel Matrix. Turn off the Unified multi-machine setup. You have to set it up so that channels 1 and 2 go to the preview and final outputs of the central machine, while channels 3, 4, and 5 go to the LED wall outputs of the corresponding machine. It should look like this:

UsingLedWallsForVirtualProduction Image15.png

This is a typical scenario for two remote machines and is interpreted as:

  • Channel 1, 2, and 3 go to outputs #1, #2, and #3 of the local (central) machine
  • Channel 4 goes to output #1 of the remote machine at 192.168.0.2
  • Channel 5 goes to output #1 of the remote machine at 192.168.0.3

Rendering Frame Rate

The system rendering frame rate must match the refresh frequency of the LED wall system (and normally should match the camera frame rate as well). Start Composer and go to Edit menu / Preferences, then the Rendering section, and set the Frame rate.

UsingLedWallsForVirtualProduction Image16.png

Please note that the Frame size setting has no effect here since the Frustum and Fill rendering frame size will be specified individually on the control boards, see below.

Setting Up an Unreal Engine Scene

Using Unreal Engine for Aximmetry 5.x

Load your project into the Unreal Editor for Aximmetry.

Once the project is loaded, insert an LED wall camera from Aximmetry->Add Camera.

Once the Aximmetry camera has been added, click Cook Content for Aximmetry DE to cook the scene.

Using Unreal Engine for Aximmetry 4.x

In your Aximmetry projects folder find the

Common_Studio\Unreal_Assets\Aximmetry_LEDWallCam

folder and copy it into the Content subfolder of your Unreal project.

UsingLedWallsForVirtualProduction Image17.png

Load your project into Unreal Editor for Aximmetry.

Find the following blueprint in your Content and drag and drop it into the scene. The position does not matter.

UsingLedWallsForVirtualProduction Image18.png

UsingLedWallsForVirtualProduction Image19.png

Cook the scene for Windows.

Setting Up Unreal Engine Scene in Aximmetry DE

Start Aximmetry DE if you haven’t already.

Create a new compound.

Drag and drop your uproject file into the Flow Editor. You will get:

UsingLedWallsForVirtualProduction Image20.png

Go to File Browser and find
[Common_Studio]:Camera\LEDWallCam\LEDWallCam_3-Cam_4-Wall.xcomp

Drag and drop it into the Flow Editor.

Connect everything in this way:

Setting Up an Aximmetry Native Engine Scene

For rendering the scene you have to provide one main camera for the Frustum rendering and one camera per used LED wall for the Fill rendering.

It's recommended to start by using the
[Common_Studio]:Compounds\Render\Render_LEDWall_4-Wall.xcomp

compound and adapt its internals to your needs.

You will also need the
[Common_Studio]:Camera\LEDWallCam\LEDWallCam_3-Cam_4-Wall.xcomp

compound.

All these have to be connected like this:


In order to edit your scene, you can use the Free camera mode.

UsingLedWallsForVirtualProduction Image23.png  UsingLedWallsForVirtualProduction Image24.png

In this mode, you'll only see your 3D scene on the preview monitor and you can edit it normally. When you have finished editing and want to continue setting up the LED walls, simply switch back to normal mode.

UsingLedWallsForVirtualProduction Image25.png

Setting Up the Inputs

Camera Inputs

As we discussed earlier it's highly recommended to wire back your camera signal into Aximmetry. For that, you have to go to the INPUTS control board and specify an input device and its video mode:

UsingLedWallsForVirtualProduction Image26.png    UsingLedWallsForVirtualProduction Image27.png

Of course, you can use a Mapped device as well as demonstrated in other tutorials.

On the preview and the output monitors, you'll immediately see the camera image.

If you use multiple cameras, repeat this step for INPUT 2 and INPUT 3 as well. In order to select which camera's image is seen on the preview and final output, go to the CAMERAS control board and use the CAM buttons:

UsingLedWallsForVirtualProduction Image28.png

UsingLedWallsForVirtualProduction Image29.png

Tracking Input

For each camera, you also have to specify a Tracking Device that reports the position of the given camera.

UsingLedWallsForVirtualProduction Image30.png  

If you use a Tracking Device that needs lens data created with Aximmetry's own Camera Calibration tool you also have to select the lens file in the Calibration Profile property:

If you use an independent device for zoom and focus encoding you have to specify it as well:

UsingLedWallsForVirtualProduction Image33.png

For further info on using camera tracking devices and lens calibration please consult this and this documentation.

Tracking Delay

Tracking information is used for four different purposes:

  1. to render the LED wall's picture from the perspective of the camera.
  2. to help with specifying the LED wall's position in the 3D space using the STUDIO view mode.
  3. to render the Digital Extension around the LED wall's position.
  4. to place AR graphics in front of the camera picture as discussed here: Combine Different Productions in Separate Machines

For the first one, no delay is used at all, since we have to minimize the time that elapses before the graphics on the LED walls are updated according to the current camera position.

For the rest, it has to be ensured that the real camera image and the AR graphics move together. For that, you usually have to specify a delay for the tracking data:

UsingLedWallsForVirtualProduction Image34.png

It's measured in frames and can be a fractional value if necessary.
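To get a feel for what a fractional frame delay means in time, it can be converted to milliseconds given the rendering frame rate. The function name and the numbers below are ours, for illustration only:

```python
# Convert a tracking delay given in frames to milliseconds at a known frame rate.
def delay_ms(delay_frames: float, fps: float) -> float:
    return delay_frames / fps * 1000.0

print(delay_ms(3.5, 50.0))  # 70.0 ms at 50 fps
print(delay_ms(2.0, 25.0))  # 80.0 ms at 25 fps
```

This is why a fractional value can matter: at 50 fps, half a frame is already 10 ms of offset between the camera image and the AR graphics.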

The right value has to be found by trial. Go to the LEDWALLS control board and select the STUDIO view mode:

UsingLedWallsForVirtualProduction Image35.png  UsingLedWallsForVirtualProduction Image36.png

You'll see a virtual checker floor pattern appearing over the real image. You can use a chair or the LED walls themselves or any other object to check the synchronization of the movement.

UsingLedWallsForVirtualProduction Image37.png

Move the camera while adjusting the delay value to find the correct one.

The Zoom Delay setting is only needed if you use an independent Zoom Device.

Setting Up the LED Walls

Go to the LEDWALLS control board.

This particular compound allows the use of a maximum of 4 LED walls. Each of them has a row on the control board.

Selecting How Many LED Walls Are Used

With the On/Off button at the beginning of each row, you specify which of them is used to describe your LED wall structure. According to our example, we'll use only three of them.

UsingLedWallsForVirtualProduction Image38.png

Specifying the Sizes

The first step is specifying the size and pixel resolution of each LED wall. Select the LED Wall X panels one by one and set the properties.

IMPORTANT:  Size is meant in meters and should represent the exact dimensions of your LED screen.

LED Wall 1

UsingLedWallsForVirtualProduction Image39.png

LED Wall 2

UsingLedWallsForVirtualProduction Image40.png

LED Wall 3

UsingLedWallsForVirtualProduction Image41.png
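As a sanity check, the physical size and the pixel resolution together imply the panel's pixel pitch; if the computed pitch doesn't match your LED panel's specification, one of the two entered values is wrong. A small sketch using the example walls:

```python
# Pixel pitch implied by the example walls' size (meters) and resolution (pixels).
walls = {
    "front": ((8.0, 3.5), (2816, 1232)),
    "rear":  ((2.5, 3.0), (520, 624)),
}

for name, ((w_m, h_m), (w_px, h_px)) in walls.items():
    pitch_mm = w_m * 1000 / w_px  # horizontal pixel pitch in millimeters
    print(f"{name}: {pitch_mm:.2f} mm pitch")
```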

Building the Arrangement of the Output Frame

The images that should be sent out to the LED walls are produced on the LED Wall X output pins of the LEDWallCam compound.

In our example, we're using a single output (a single DP cable) to send all 3 images, in the arrangement described earlier. This simple case is easy to achieve using a Sticker module in the Uneven mode with Columns and Rows set to 3 x 1.

This simply places the three images side by side, keeping their sizes intact and determining the total size automatically.

In most cases this simple method is applicable. For a more complex arrangement, you can use a series of Placer Precise modules, placing each image into an arbitrary rectangle within the large frame. In our case, it looks like this:

UsingLedWallsForVirtualProduction Image43.png  UsingLedWallsForVirtualProduction Image44.png

UsingLedWallsForVirtualProduction Image45.png  UsingLedWallsForVirtualProduction Image46.png

UsingLedWallsForVirtualProduction Image47.png  UsingLedWallsForVirtualProduction Image48.png
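For reference, the placements in our example can also be written out numerically. The coordinate convention below (top-left origin, x/y/width/height in pixels) is an assumption for illustration; Placer Precise's actual parameters may use a different convention:

```python
# Example placement rectangles within the single 3856 x 1232 output frame.
frame = (3856, 1232)
placements = {
    "LED Wall 1 (front)": (0,    0, 2816, 1232),
    "LED Wall 2 (rear)":  (2816, 0,  520,  624),
    "LED Wall 3 (rear)":  (3336, 0,  520,  624),
}

# Check that the rectangles tile horizontally and exactly fill the frame width.
right_edges = [x + w for x, y, w, h in placements.values()]
assert max(right_edges) == frame[0]
print("all placements fit within", frame)
```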

The Multi-Machine Case

In our multi-machine example, you also have to activate LED Wall 4 and set the sizes like this:

LED Walls 1 and 4

UsingLedWallsForVirtualProduction Image49.png

The frame arrangement can be done like this. LED Walls 1 and 4 can go directly to the corresponding outputs, only 2 and 3 need to be merged into a single frame.

Checking the Output Arrangement

In order to see if all the size, resolution, and frame arrangement settings are correct select the STUDIO view mode.

UsingLedWallsForVirtualProduction Image55.png

This will send a checker pattern to all LED walls.

UsingLedWallsForVirtualProduction Image56.png

UsingLedWallsForVirtualProduction Image57.png

Your settings are correct if:

  • The big index numbers at the centers are the correct ones.
  • You can see the red border on all four edges of all LED walls. The border must be 5 cm thick everywhere.
  • Each numbered square of the checker pattern is 50 cm x 50 cm. Use a measuring tape to check this.

NOTE: The numbering of the checker pattern also reflects the dimensions of the LED wall in meters.

Specifying the LED Wall Positions

Remain in the STUDIO view mode.

UsingLedWallsForVirtualProduction Image58.png

Placing a Flat LED Wall

In our example, we start with LED Wall 2, since it is a flat one.

Rotate the camera so that you can see the bottom center of the LED wall.

The goal here is to put the virtual image of the LED wall in Aximmetry into a good match with the image of the real LED wall using the helper graphics displayed on the preview monitor.

Initially, the virtual image will be at the origin of the virtual space, so it might not even appear on the screen, or at best it appears in a very wrong position.

UsingLedWallsForVirtualProduction Image59.png

In order to help with the first step of the placing, you can use the Put In Front feature of the LED Wall X panel.

UsingLedWallsForVirtualProduction Image60.png

UsingLedWallsForVirtualProduction Image61.png

By pressing the Trigger, the virtual image of the LED wall is placed exactly in front of the camera at the specified Put Distance.

UsingLedWallsForVirtualProduction Image62.png

This makes any further positioning much easier.

The LED wall usually has a gap between its bottom and the floor. We have to compensate for it in Aximmetry, so measure the gap.

UsingLedWallsForVirtualProduction Image63.png

In our case it is 5 cm; let's enter it into the Y position field of the LED wall.
NOTE: The virtual LED wall's position is again meant in meters, so it should correspond to the real-world position of the LED wall.

UsingLedWallsForVirtualProduction Image64.png

Now you can use the usual 3D editing tool of Aximmetry to move the virtual image into a matching position. Select your preview monitor for editing.

UsingLedWallsForVirtualProduction Image65.png
NOTE: If you are not using a fullscreen monitor, but a preview panel of Composer, please select Preview 1, 2, etc. accordingly.

Then select the LED Wall X panel.

UsingLedWallsForVirtualProduction Image66.png

A move handle will appear at the bottom of the virtual image. Use it to move the wall until it roughly matches the center of the real LED wall. Take care not to move the wall along the Y-axis at all.

UsingLedWallsForVirtualProduction Image67.png

Switch to the Rotate mode either by clicking the Rot button or pressing E.

UsingLedWallsForVirtualProduction Image68.png

Rotate the wall until its bottom is parallel to the bottom of the real LED wall. Please pay attention to only rotating around the Y-axis.

UsingLedWallsForVirtualProduction Image69.png

Now switch back to the Position mode either by clicking the Pos button or pressing W.

UsingLedWallsForVirtualProduction Image70.png

Move the wall until you reach a good match.

UsingLedWallsForVirtualProduction Image71.png
NOTE: Depending on the quality of the tracking and the lens calibration you may not reach a perfect match. If you get a slight ghost image it will not affect the quality of your final composition.

However, if you cannot reach a reasonably close match, you might either have a lens calibration problem or your tracking is not set up properly. A typical error is that the virtual LED wall appears to be bigger or smaller than the real one. This usually means that the virtual floor does not match the real one (the tracking system's zero plane is not at the real floor).

In our example, the same procedure has to be done for LED Wall 3, since it's also a flat one.

Placing a Curved LED Wall

Our example LED Wall 1 is a curved one, let's set up that.

Firstly, do the Put In Front step, then measure and enter the Y position as described above.

The first goal is to align a flat virtual wall to the tangent of the curved LED wall that goes through the wall's center. A number of tricks can be used to achieve this. One example is stretching a rope or a cable between the endpoints of the curve, creating a straight line.

UsingLedWallsForVirtualProduction Image72.png

Then use moving and rotating to align the flat wall to the rope.

UsingLedWallsForVirtualProduction Image73.png

Then, without any further rotation, move the wall to align its center with the real LED wall's center. You can use the checkerboard pattern to determine the centers of the walls.

UsingLedWallsForVirtualProduction Image74.png

Now rotate the camera so that you can see one of the side edges of the LED wall.

UsingLedWallsForVirtualProduction Image75.png

Adjust the Radius parameter of the LED Wall until you get a match.

UsingLedWallsForVirtualProduction Image76.png

UsingLedWallsForVirtualProduction Image77.png

Please note that Radius has a minimum value: the one that describes a half circle. As long as the property is below that value, you will see a flat LED wall; only after crossing the minimum will you see the curve change with the parameter.
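The half-circle minimum follows from simple arc geometry: if the wall's width is taken as the arc length, a half circle corresponds to radius = width / π. A small sketch using the example front wall; the chord_length helper is ours, for illustration only:

```python
import math

# Minimum radius: the wall's 8 m width bent into a half circle.
width_m = 8.0
min_radius = width_m / math.pi
print(f"minimum radius: {min_radius:.3f} m")  # 2.546 m

def chord_length(arc_m: float, radius_m: float) -> float:
    # Straight-line distance between the two side edges of the curved wall.
    return 2 * radius_m * math.sin(arc_m / (2 * radius_m))

print(f"chord at r=5 m: {chord_length(8.0, 5.0):.3f} m")  # 7.174 m
```

The chord shrinks as the radius approaches the minimum, which is why adjusting Radius while watching a side edge of the wall (as described above) is an effective way to find the match.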

Studio Free View

By turning on the Mouse button of the STUDIO panel, you'll get a free view of your virtual LED wall setup. You can move around freely with the mouse. You can also see the current position of your tracked camera.

UsingLedWallsForVirtualProduction Image78.png

UsingLedWallsForVirtualProduction Image79.png

Precise Aligning of Touching LED Walls

If two LED walls are touching you might need a more precise placement to avoid tearing in the rendered image.

E.g. in our multi-machine example, we split the central curved LED wall into halves.

UsingLedWallsForVirtualProduction Image80.png

In this scenario, you have to define the two halves as two separate curved LED walls and it takes extra attention to align them precisely to each other.

Firstly, you can use the Studio free view described above. From a top view, you can see the small misalignments and adjust the rotation and position parameters of either LED wall to resolve them.

UsingLedWallsForVirtualProduction Image81.png

Secondly, in the final view any significant misalignment will be readily visible. You can make some final adjustments using this view as well. For that, turn off the SCENE viewing mode.

UsingLedWallsForVirtualProduction Image82.png

UsingLedWallsForVirtualProduction Image83.png

To see the misalignment more readily you can also use the checker pattern of the LUT panel. 

UsingLedWallsForVirtualProduction Image84.png

UsingLedWallsForVirtualProduction Image85.png

This pattern only appears within the camera frustum, therefore you have to rotate your camera so that it points to the problematic area.

Having finished with the settings, simply turn off the LUT mode to see the final result.

Rendering Parameters

If you turn off the SCENE viewing mode, you'll see the final virtual background rendered on each LED wall from the right perspective.

UsingLedWallsForVirtualProduction Image86.png

UsingLedWallsForVirtualProduction Image87.png

UsingLedWallsForVirtualProduction Image88.png

Positioning the Virtual Scene

Go to the INPUTS control board.

Using the SCENE panel’s Base Cam Transf property you can set the relative position of the virtual and real space thus determining which part of the virtual scene is seen on the LED walls and from what angle.


UsingLedWallsForVirtualProduction Image89.png

UsingLedWallsForVirtualProduction Image90.png  UsingLedWallsForVirtualProduction Image91.png

NOTE: You can learn more about Camera and Head Transformations here.

Frustum Rendering

The system performs a separate render pass to produce the image portion that the camera currently sees on the LED wall, in order to get the highest possible quality for that portion.

UsingLedWallsForVirtualProduction Image92.png

On the LEDWALLS control board, the FRUSTUM panel provides options for this render pass.

UsingLedWallsForVirtualProduction Image93.png

UsingLedWallsForVirtualProduction Image94.png

Resolution

The most important parameter is the pixel resolution. It's up to you to select a resolution that is good enough for your production while not putting too much load on the GPU. In general, it is recommended to use the vertical resolution of your main LED wall. The horizontal resolution is calculated automatically from the provided Aspect Ratio.

UsingLedWallsForVirtualProduction Image95.png

Edge Expand

There's always a certain amount of delay before any change in the position/orientation of the camera is reflected on the LED wall. Therefore it's not enough to render exactly the frustum that the camera currently sees, because any movement of the camera would cause the rendered image to slip out of the camera's field of view.

In order to compensate for this, you can specify an Edge Expand value that enlarges the rendered area. The faster the camera movements you want to compensate for, the larger the Edge Expand value you have to specify.

UsingLedWallsForVirtualProduction Image96.png

By default, the system renders this enlarged view using the same pixel resolution you specified. This can lead to a drop in the quality of the central part the camera actually sees. If you find this drop too noticeable, you can select the Preserve Resolution option, which increases the pixel resolution proportionally with the amount of expansion.

UsingLedWallsForVirtualProduction Image97.png

Use this feature with care, because it can cause a significant increase in the GPU load when used with a larger Edge Expand value.

It's also worth noting that Edge Expand is not only needed because of the delay. The camera lens will always have some degree of distortion, so the frustum area projected on the LED wall will never be seen as a perfect rectangle through the camera. Its edges can shrink inwards, which has to be compensated for by expanding the area.
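The effect of Preserve Resolution described above can be sketched numerically. The exact formula Aximmetry uses is not stated here, so this assumes Edge Expand is a fraction added on each edge; the base resolution is also hypothetical:

```python
# Sketch of how Preserve Resolution could scale the Frustum render size
# with Edge Expand (assumed formula, illustrative numbers).
def expanded_resolution(base_w: int, base_h: int, edge_expand: float,
                        preserve: bool) -> tuple[int, int]:
    if not preserve:
        return base_w, base_h           # same pixel count spread over a larger area
    scale = 1.0 + 2.0 * edge_expand     # expansion applied on both edges
    return round(base_w * scale), round(base_h * scale)

print(expanded_resolution(1920, 1232, 0.1, preserve=True))   # (2304, 1478)
print(expanded_resolution(1920, 1232, 0.1, preserve=False))  # (1920, 1232)
```

Note how a modest 10% expansion already adds roughly 44% more pixels to render, which is why the option should be used with care.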

Virtual Lens Parameters

The following parameters can be set arbitrarily, independently of the actual settings of the real camera.

UsingLedWallsForVirtualProduction Image98.png

Per-LED-Wall Options

You can make color adjustments of the frustum image for each LED wall independently using the ADJUSTER X panels.

UsingLedWallsForVirtualProduction Image99.png

Normally you want to use LED walls with the same characteristics, so you won't need independent settings. In this case, simply select all the ADJUSTER panels and set their properties together.

If you select an LED Wall X panel, you can turn the display of the frustum image on/off individually. However, this option is only meaningful in a multi-machine configuration; see more info later.

UsingLedWallsForVirtualProduction Image100.png

UsingLedWallsForVirtualProduction Image101.png

Fill Rendering

For each LED wall, a separate rendering is performed that provides the content for the parts the camera does not see currently. Since the main purpose of these image parts is to provide an ambient lighting and reflection environment they can be rendered in reduced quality.

Fixed Position

By default, the Fill rendering is performed from the perspective of the current camera position. In many cases this is unwanted. Instead, you want the Fill rendered from the perspective of the talent or a reflective object. This produces more accurate lighting on the talent and reflections that are independent of the camera's motion.

For that select the FILL panel and set the Use Fixed Position option.

UsingLedWallsForVirtualProduction Image102.png  UsingLedWallsForVirtualProduction Image103.png

Now the rendering is performed from the perspective of a constant "camera" position. The position itself can either be specified manually or you can use the Capture Fixed Position trigger to store the current position of the real camera.

UsingLedWallsForVirtualProduction Image104.png

Freeze

Another way of creating a fixed Fill image is to literally freeze the last rendered frames. Its huge advantage is that it frees the GPU from the continuous rendering of all the Fill content, thus allowing the use of a single machine for any number of LED walls.

Screenshot 2021-03-24 171927.png

Of course, this method is only suitable if you do not need any changes in the ambient/reflection content (e.g. the flickering light of a fire or any moving object).

Per-LED-Wall Options

Select an LED Wall X panel.

UsingLedWallsForVirtualProduction Image105.png

As we discussed, the Fill rendering does not require the same quality as the Frustum rendering. To save GPU processing power, you can specify a reduced resolution for the Fill:

UsingLedWallsForVirtualProduction Image106.png

If this parameter is 1 then the rendering is performed in the pixel resolution of the LED wall itself (specified in the Resolution property). Any other value will work as a multiplier for this base resolution. This way you can either increase or decrease the rendering quality.
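In other words, the multiplier simply scales the wall's native resolution. A minimal sketch (the function name is ours, for illustration):

```python
# Fill render resolution as a multiple of the wall's native resolution.
def fill_resolution(wall_w: int, wall_h: int, multiplier: float) -> tuple[int, int]:
    return round(wall_w * multiplier), round(wall_h * multiplier)

# Example front wall (2816 x 1232) rendered at half resolution:
print(fill_resolution(2816, 1232, 0.5))  # (1408, 616)
```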

You can apply a Blur on the Fill image in order to make the ambient lighting more diffuse.

UsingLedWallsForVirtualProduction Image107.png

UsingLedWallsForVirtualProduction Image108.png

This function is also useful when you choose to render in a low resolution and want to smoothen out the jagged edges of the picture.

You can make color adjustments of the fill image for each LED wall independently using the FILL ADJUST X panels.

UsingLedWallsForVirtualProduction Image109.png

Note that the Fill adjustments can be specified not only independently for each LED wall, but independently from the Frustum adjustments as well. This way you can boost or alter the color of the ambient lighting effect or the reflection while the part the camera actually sees remains unaltered.

UsingLedWallsForVirtualProduction Image110.png

Green Frustum

In some cases, you might choose to render the final background in post-production and only use the LED walls for ambient lighting/reflection. In this case, you need a green background to be able to key out the actors in post-production.

For that activate the GREEN panel.

UsingLedWallsForVirtualProduction Image111.png

This will replace the Frustum rendering with a solid green color:

UsingLedWallsForVirtualProduction Image112.png

The Color itself can be any of your choice:

UsingLedWallsForVirtualProduction Image113.png

You can also choose to only work with a part around the actor in post-production. For that use the Crop properties to specify a smaller green area within the Frustum image.

UsingLedWallsForVirtualProduction Image114.png

UsingLedWallsForVirtualProduction Image115.png

When GREEN is activated you'll see green on your preview monitor as well, since it shows the incoming camera signal.

UsingLedWallsForVirtualProduction Image116.png

This is useful while you are setting up the green area, but during recording you will naturally want to see a preview of the final production instead. For that, activate the PREVIZ KEYER panel.

UsingLedWallsForVirtualProduction Image117.png

It will key out the green part and replace it with the image rendered for the Frustum.

UsingLedWallsForVirtualProduction Image118.png

The panel controls an Advanced B keyer that you can operate as usual.

UsingLedWallsForVirtualProduction Image119.png

Tracking Markers

For post-production work, you'll need the camera tracking information as well.

One way to obtain it is to record the tracking data in FBX. For details, please check the "Final composite recording" sections of this documentation.

But you can also display tracking markers in the green region of the LED wall and use them for traditional post-production tracking.

For that turn on Tracking Markers on the GREEN panel and set up the marker properties according to your needs.

UsingLedWallsForVirtualProduction Image120.png    UsingLedWallsForVirtualProduction Image121.png

UsingLedWallsForVirtualProduction Image122.png

Multi-Machine Settings

In the case of a multi-machine configuration, you have to specify which LED wall is rendered by which machine. This can be done by selecting the LED Wall X panels and setting their Engine property.

In our example, it looks like this:

LED Wall 1

UsingLedWallsForVirtualProduction Image123.png

LED Walls 2 and 3

UsingLedWallsForVirtualProduction Image124.png

LED Wall 4

UsingLedWallsForVirtualProduction Image125.png

This will result in each machine rendering its own LED walls only, thus dividing the total GPU load among the machines.

However, it is important to note that the Frustum pass has to be rendered for all LED walls that can appear in the field of view of the camera, which adds a significant extra GPU load on all the machines. Usually, though, the side, rear, and ceiling LED walls only serve as ambient lighting/reflection sources and never appear in the camera frustum.

For these LED walls you can turn off Frustum, in our example:

LED Walls 2 and 3

UsingLedWallsForVirtualProduction Image126.png

If you turn off Frustum for all the LED walls handled by a specific machine then the Frustum rendering will be omitted on that machine. In our example, the Remote #2 machine renders LED Walls 2 and 3, and we turned off Frustum for both of these, therefore Remote #2 won't render the Frustum pass.

This way you need less GPU power (fewer machines or weaker GPU) for the side LED walls.

Digital Extension

Basics

Aximmetry is capable of extending the virtual graphics beyond the boundaries of the physical LED walls. The system generates a mask based on the camera position and the location and size information of the LED walls. Within the masked area, the real camera image is seen (with the LED wall in the background) while outside of it the extended virtual space is.

Please note that in order to have a usable digital extension you need absolutely precise camera tracking and lens distortion data. Therefore this feature is usually only applied as a preview during the recording, and the final digital extension is done in post-production.

Also, note that this method is only usable if the actors are always seen in front of the LED walls. If they can step outside the area covered by the walls, you need a green screen behind the LED walls.

To use the feature simply activate the DIGITAL EXT panel.

UsingLedWallsForVirtualProduction Image133.png

UsingLedWallsForVirtualProduction Image134.png

By adjusting the Edge Softness property you can get a harder or softer transition between the real and the extended image.

UsingLedWallsForVirtualProduction Image135.png

UsingLedWallsForVirtualProduction Image136.png
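Conceptually, Edge Softness widens the band in which the real camera image and the extended virtual image are mixed. A minimal per-pixel sketch (all names are illustrative, not Aximmetry's internals; `softness` is assumed to be greater than zero):

```python
# Conceptual sketch of blending at the mask edge (illustrative only).
# distance_from_edge > 0 means the pixel is inside the LED wall area;
# softness is the width of the transition band (must be > 0 here).

def smoothstep(edge0, edge1, x):
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def blend(camera_px, virtual_px, distance_from_edge, softness):
    """Mix the real camera pixel with the extended virtual pixel."""
    w = smoothstep(-softness / 2, softness / 2, distance_from_edge)
    return tuple(w * c + (1.0 - w) * v
                 for c, v in zip(camera_px, virtual_px))

# Well inside the wall area the camera pixel wins; well outside, the
# virtual pixel wins; near the edge the two are mixed.
print(blend((200, 180, 160), (90, 120, 150), 5.0, 1.0))
```

A larger `softness` value simply stretches the transition band, which is what a higher Edge Softness setting does visually.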

Color Matching Using a Vignette Correction and LUT

There will always be a color difference between the real and the extended image: the real image is first displayed on the LED wall, then captured through a camera, and both steps introduce some degree of color transformation.

As a first measure (especially if you only use the digital extension option for previz) you can use the DIGIT EXT ADJ panel's usual Brightness / Contrast etc. controls to get the virtual part's colors closer to the LED walls' colors.

However, for a better match, you will need a Vignette mask and a LUT.

Vignette Correction

Applying Vignette Correction to your camera input is a great way to improve the quality of the LUT, if you are using a single focal length setup.
If you wish to learn more about the effect called Vignette, and/or how to use Vignette Correction, please refer to this article: How Vignette Correction could be useful for you.

LUT

In the panel's LUT property, you can specify any standard Cube LUT files.

UsingLedWallsForVirtualProduction Image137.png  UsingLedWallsForVirtualProduction Image138.png

Aximmetry also provides the means to create such a LUT file. For that activate the LUT MEASURE panel.

UsingLedWallsForVirtualProduction Image139.png

At first, you will see a checker pattern projected on the LED wall in the frustum area.

UsingLedWallsForVirtualProduction Image140.png

On your preview monitor, you will see the 8 x 8 pattern coming through the camera and also 8 x 8 rectangular markers that designate the areas where Aximmetry will sample (and average) the colors.

UsingLedWallsForVirtualProduction Image141.png

As you can see, due to the camera's lens distortion the sampling areas might be a bit off. For correct color sampling, it is important that they are placed inside the pattern cells. To compensate for the distortion, adjust the following properties until all the sampling rectangles fall inside their corresponding pattern cells.

UsingLedWallsForVirtualProduction Image142.png  UsingLedWallsForVirtualProduction Image143.png

UsingLedWallsForVirtualProduction Image144.png

Specify the target folder and filename parameters.

UsingLedWallsForVirtualProduction Image145.png

Press the play button.

UsingLedWallsForVirtualProduction Image146.png

The system will go through 32 x 32 x 32 different colors, displaying and sampling an 8 x 8 portion at a time. The whole process takes several minutes.

UsingLedWallsForVirtualProduction Image147.png

In the end, a .cube file is saved containing the 32-sized cube LUT. This is the file you need to specify manually in DIGIT EXT ADJ to see its effect.
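For reference, a Cube LUT file is plain text. A 32-sized 3D LUT starts roughly like this (the title and data values are illustrative):

```text
TITLE "Measured LED wall LUT"
LUT_3D_SIZE 32

# 32 x 32 x 32 = 32768 lines of "R G B" output values follow,
# with the red coordinate varying fastest:
0.0000 0.0000 0.0000
0.0312 0.0000 0.0000
0.0651 0.0000 0.0000
...
```

Because the format is a standard one, the file produced by Aximmetry can also be loaded into other grading or compositing tools.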

The Shot Period property specifies the interval between two 8 x 8 measurements. Because of the total delay of the LED walls plus the camera input, the system has to wait between changing the colors and sampling the result. If your system has low delay, you can try lowering the Shot Period to shorten the total process time.

UsingLedWallsForVirtualProduction Image148.png
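The "several minutes" figure follows directly from the measurement scheme: 32³ colors sampled 64 at a time. A quick back-of-the-envelope check (the 0.5-second Shot Period is an assumed example value, not a recommendation):

```python
# Rough estimate of the LUT measurement time (the Shot Period value
# here is an assumed example; check your own setting).

colors = 32 ** 3          # 32 x 32 x 32 LUT entries to measure
per_shot = 8 * 8          # colors displayed and sampled per shot
shot_period = 0.5         # seconds between shots (example value)

shots = colors // per_shot
total_seconds = shots * shot_period
print(shots, total_seconds / 60)   # 512 shots, about 4.3 minutes
```

Halving the Shot Period halves the total time, which is why lowering it on a low-delay system pays off.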

It's important to note that the displayed colors go through the frustum ADJUSTER X of the given LED wall. Therefore a given LUT file is only usable with the exact same physical settings of the LED wall and the camera, and the exact same ADJUSTER settings.

Switching Between Cameras

Switching between different studio cameras introduces some issues, especially with tracked cameras, digital extension, and animations in your scene. You have two options:

  • single-machine setup
  • multi-machine setup

For both cases, the use of a genlock device is an absolute must.

Picture Delay

No matter which option you go for, you will have to set up the Picture Delay value.

Go to the CAMERAS control board.

Here you can select between studio cameras using the CAM 1, CAM 2, and CAM 3 buttons.

UsingLedWallsForVirtualProduction Image127.png

When switching cameras there is always a certain amount of delay before the change of perspective reaches the LED walls, the studio camera records the image, and the camera image reaches Aximmetry. To ensure that the switch of the LED wall picture and the camera change happen at the same time in Aximmetry's final output, you have to enter a Picture Delay value. This will delay the camera picture you switched to. Adjust the value until you get a satisfying result.

For each camera, you will have to define a Picture delay value.

Picture Delay CAM 1 is the delay in frames that will be applied to the camera's image when changing TO this camera.
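Conceptually, Picture Delay is just a frame buffer on the incoming camera signal: each frame comes out a fixed number of frames after it went in. A minimal sketch (all names are hypothetical; Aximmetry's implementation is internal):

```python
from collections import deque

# Minimal sketch of a per-camera frame delay (hypothetical names;
# Aximmetry's implementation is internal). Frames pushed in come out
# `delay_frames` frames later.

class PictureDelay:
    def __init__(self, delay_frames):
        # Pre-fill with None so the first outputs mean "no frame yet".
        self.buffer = deque([None] * delay_frames, maxlen=delay_frames + 1)

    def process(self, frame):
        self.buffer.append(frame)
        return self.buffer.popleft()

cam = PictureDelay(delay_frames=3)
outputs = [cam.process(f) for f in ["f0", "f1", "f2", "f3", "f4"]]
print(outputs)   # [None, None, None, 'f0', 'f1']
```

A delay of 0 makes the buffer a pass-through, which matches the case where the LED wall round trip needs no compensation.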

Single Machine

Limitation

With this setup, you can use more than one tracked studio camera and you can synchronize their movement with the Digital Extension, but you won't be able to synchronize the switching between these cameras.

Another, more limited option is using stationary (non-tracked) studio cameras. In this case, you just have to set the Picture Delay for each camera to synchronize both the switching between the cameras and the Digital Extension. You can also adjust the Animation Delay parameter to synchronize the animations. See later: Animation Delay.

The current limitations of a single-machine setup are:

  • you cannot simultaneously synchronize the switching between cameras and the syncing of their video and tracking data,
  • you cannot combine the syncing of the video input and tracking data with the use of the Animation Delay.

Multi-Machine

This is the best practice if you would like to switch between tracked studio cameras without any limitations: the switching between cameras, the Digital Extension, the animations, and the camera tracking with the video input will all be in sync.

In this case, the Digital Extension will be rendered by the Control Machine. All the LED wall images will be rendered by the remote machine(s). The Digital Extension will be rendered without the picture delay, so when the camera switch happens, the digital extension and the image on the LED wall will be synchronized.

Settings

Navigate to the LEDWALLS control board and select the LED-wall panel.

In the Pin Values window select the remote machine you would like to use to render the LED wall's image.

NOTE: In the above image, all other LED walls are turned OFF. We recommend that if you turn an LED wall ON, also select the Remote machine that renders the image.

If Digital Extension is turned ON, in the Info panel, you can read a message about the setting.

If the Local option is selected:

If the Remote option is selected:

Animation Delay

If you would like to trigger animations in your scene and keep them synced to the Digital Extension and to camera changes, you have to build additional logic in the FLOW editor.
For example, logic like this:


The above Flow Editor logic delays the Play of a Sequence on the control machine. While on the remote machines, no delay is applied.
The System Params module's Engine pin returns the ID of the machine, where the control machine's ID equals zero. So the If module is true on the control machine and returns the Animation Delay, while on other machines it returns zero. This Animation Delay value or zero value is then used by the Delay module to delay the Trigger pin data, making the sequencer start later on the control machine. Note that the Delay module's Frames pin was turned On, as animation delay is counted in frames.
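The conditional delay described above, written out in plain code (Engine ID 0 is the control machine, as in the Flow; the function name is illustrative):

```python
# Plain-code equivalent of the Flow Editor logic above (illustrative).
# Engine ID 0 is the control machine; only it delays the Play trigger.

def animation_trigger_delay(engine_id, animation_delay_frames):
    """Return the delay (in frames) applied to the sequencer's Play trigger."""
    if engine_id == 0:            # control machine
        return animation_delay_frames
    return 0                      # remote machines start immediately

print(animation_trigger_delay(0, 12))  # control machine -> 12
print(animation_trigger_delay(2, 12))  # remote machine  -> 0
```

Delaying only the control machine's sequencer lets the remote-rendered LED wall content lead by exactly the picture delay, so animations line up in the final composite.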

When you have this logic ready, you can adjust the Animation Delay in the CAMERAS control board on the SELECT CAMERA panel.

In the Pin Values window, you will find the Animation Delay Adjust feature.

Additions to the Final Image

As we discussed before, the incoming camera image already contains the final composition, therefore it is simply passed through to the program output.

However, you may want to make some final adjustments to the image. For that go to the INPUTS control board and use the ADJUSTER X panels.

UsingLedWallsForVirtualProduction Image130.png

UsingLedWallsForVirtualProduction Image131.png

Also, similar to other camera handling compounds, LEDWallCam also has input pins that allow the use of the standard Overlays functionality in order to add lower thirds, channel logo, etc.

Excluding LED Walls from the View

You might want certain LED walls to not be visible through the Digital Extension. A typical example is that of the side/ceiling walls that are only used for ambient lighting/reflection, and usually display a low-resolution image. For these, you can turn off the Digital Ext Thru option.
