
Setting Up Virtual Sets with VR Cameras



In this video, I will demonstrate the tasks that are possible and necessary when running an existing virtual studio before and during a broadcast session, particularly when we are using Aximmetry exclusively with virtual camera movements, without any camera tracking device.

Requirements vary from studio to studio, and they can be multifaceted: how many cameras they want to use, how many outputs they want to handle, what kind of virtual camera movement they want, and how they want to control it.


The Composer system offers an environment where you can design and set up control boards freely, with hardly any restrictions. Because of this, the final setup can vary from studio to studio. Nevertheless, the logic is always the same as in the general studio case I am going to demonstrate here.


In this scene, the system is set up with 3 camera inputs and 1 composite output. Each of the 3 camera inputs can be assigned many different virtual camera paths. You can also set which camera's rendered composite image goes to our single output.


First of all, we have to set up the video inputs and outputs. After starting Composer, we can do this in the Startup Configuration view. The first thing we see is a list of the possible video outputs. This list includes the monitors currently connected to the PC and the available SDI video card outputs. We currently have 2 monitors connected. Working with at least 2 monitors is highly advisable because one monitor will display the system's control panels. The second monitor is then important mainly for two reasons:

First, certain pre-broadcast editing operations can be performed much more easily on it.

Second, when we have only one SDI output, the only way to get preview images from all three cameras simultaneously is to use the second monitor as a preview monitor.


Every output monitor has an ID tag in this corner. In the configuration view, every video output device has a button. If we click on one of these buttons, we can easily identify the device: we see the corresponding tag flashing. This is our main monitor... this is our supplementary monitor... and on each SDI output, we see a full-screen ID tag like this.


In our current configuration, we have to select two outputs altogether. One of them is the so-called Preview monitor and the second is the Main output, which will use the SDI output.


We recommend always assigning the number "1" to the Preview monitor, since the control boards - which we are going to explain here - will always use #1 by default. Setting the number "1" works like this: we open the drop-down list belonging to the required monitor and select the #1 item to tag the monitor with that number. Now we can see that #1 is displayed on the ID tag of our number 1 monitor.

The number "2" has to be assigned to one of our SDI outputs. In this hardware configuration, we have an AJA Corvid card, which has 4 ports. Each port can be defined as either an input or an output port. Let us now assume that we are using Channel 1 of this card to connect the output cable, and therefore select #2 in the drop-down list for this output, like this.


In the case of SDI cards, we can also select different video modes. By clicking on this button with three dots on it, we open the Video Modes menu. We can select between different standard video resolutions and frame rates. The default value is always the European HD standard, 1080i50. There is one more important thing to mention. Here, in the last column, Sync, we always tick the box of our output SDI channel. This means that the rendering frequency of the Aximmetry system will be defined by the frame rate of the main output. In this case, the system will render 50 images per second.


As we can see, the following information is displayed on the ID tag of our SDI output channel: the #2 numbering, the video mode set to 1080i50, and the synchronization enabled on this output, marked in red.


The next step is the definition of the video inputs. Before we see how it is done, it is worth mentioning that it is possible to select input devices directly by their name later in Composer. In practice, however, it is more appropriate to assign the numbers "1", "2", "3" and so on to these inputs already here, at this point. By doing this, we avoid dealing with input definitions later on when using Composer's control boards, since these boards will use the pre-defined inputs by default.


This numbering task can be done here, in the Device Mapper section, more precisely in its Video sub-section, like this. We simply assign video input ports to the #1, #2, #3, etc. numbers in their own drop-down lists. These will, of course, be SDI inputs. Having already used the Channel 1 port of this AJA Corvid card as an output, we simply use the Channel 2, 3, and 4 ports for connecting camera inputs or any other input sources. Let us select these now: Channel 2 as input #1, Channel 3 as input #2, and so on.


For every input port, we can freely select different Video modes. Naturally, in most standard cases it must match the Video mode of our Output. Therefore we use the 1080i50 default video mode here too.


With this last setting, we have completed the Startup configuration of the system. We should mention here that our settings can be saved and reloaded later, either when we want to switch between configurations or as a backup measure against system "accidents". We can do this by clicking on the Manage Settings button. We find many different groups of settings here; among these, we have only dealt with the one called Startup. Nevertheless, we leave all these groups on by leaving their boxes ticked. Now we click on the Backup button and save the file on the desktop under the name Test1, like this. We now find this new settings file on the desktop. Then we can reload this setting. First, we reset the current configuration by clicking on the Reset button. This action performs a quick restart of Composer, and we see that all our previous configuration work is lost. Now we select Restore in Manage Settings and double-click on our backup file on the desktop, like this. We have now regained all of our saved input and output settings.


Now we can start the application. As long as no content is sent to the outputs, the system displays a monoscope (test pattern) on each of them. On each monoscope, we can identify the specific number of that output. Here we can see that this is our number "1" monitor, which is our Preview monitor. We can also see the resolution of the monitor, the refresh frequency of the output (60), and finally the rendering frequency of the Aximmetry system (50). Obviously, in the case of SDI outputs, these two figures must be identical, so there we must see 50/50. In the case of Preview monitors, however, this frequency difference is not that significant; the output image will merely be a bit "bumpy". We also have to mention that the system runs in interlaced mode (50i), and therefore, due to performance considerations, preview monitors only get half of that refresh rate, a 25p output, which means an inherently uneven flow of images. We should not worry when we see this on the preview monitors; it is a quite normal occurrence.


Now I am loading an example set that already contains our 3-camera configuration.


I have also enlarged the Preview panel at the bottom in order to follow our actions easily. Obviously, in a real studio environment, this is not necessary at all. (We can freely increase or decrease the size of this panel, or close it altogether.)


The first thing we see here is the FLOW Editor, which is mainly significant when assembling the studio itself. We will come back to it later on in the process, since some pre-broadcast settings can also be done here. What we are going to focus on now are the so-called Control Boards. These can be selected and changed here, in the toolbar at the top. In this configuration, we have access to three control boards. There are several ways to switch between them: clicking on the tabs at the top of the toolbar, pressing "Alt 1", "Alt 2" or "Alt 3", or repeatedly pressing the F2 key to step to the next control board in the sequence.

The first Control Board we are dealing with here is BILLBOARDS. What is a Billboard, then?


In a studio system without real camera tracking, we place people into the studio environment by "keying out" the person's image coming from the camera and attaching this image to a flat surface. This flat surface, as a new object, is then placed into the 3D virtual space. From this moment on, it behaves as a normal object with attributes such as shadows, a mirror image, etc.


This flat surface placed into the 3D space is called a Billboard in the system. How can we manage these Billboards, then?


For every single camera input image, there is a specific Billboard. In the Control Board, we can select among the processing stages leading from the raw camera image to the final Billboard object. These stages are represented by their own boxes in a sequence, like this. For each camera we use, there is a separate row on this work surface. We can also look into each processing stage; for this purpose, we have the box named Monitor 1.


If we click on the Input mode here, we see the raw input image. We recognize in an instant that here we have used a camera with a 90-degree rotation. The reason for this is that, in the case of this particular billboard method, we need a program commentator who is standing still and is allowed only small steps to the left or right. The person has to remain in an upright position the whole time. With this 90-degree rotation, we can better utilize the available 16:9 screen area. Also, with this rotation technique, we are able to perform a more extensive virtual "zoom in" on the person, thanks to the higher effective camera resolution. Obviously, it is not mandatory to use the camera this way; this is only a suggested option.


The next processing stage is the Crop, where the person is presented in natural rotation and with the excess areas cut away. The next one is the Keyed, the resulting image of the keying process. Then comes the Matte, which displays the mask resulting from the keying and naturally helps when adjusting the keyer. The last one is the Final, where we can see the camera image placed on the Billboard.


In a similar way, we can click through the phases of the second camera. This looks much the same as the first one. Finally, there is the third camera, which differs in that it shows a near-field camera image, which in turn will require a different virtual camera setting.

We can also observe that if the Final stage is selected in all three rows, then in the Preview panel we see the composite image belonging to camera 1. The reason for this is that currently our camera 1 is selected as the program output. This setting is dealt with in the next Control Board, called CAMERAS; we shall come back to it later. In the meantime, I just want to mention quickly that there is a SELECT CAMERA box, where we can choose which camera's composite image is sent to the main output. I can choose between cameras 2, 3 and 1. The bottom line is that if I want to make setting changes to Camera 2 and display its composite image in the Preview, then it is crucial to select CAM 2 in this box as the output.


Let us now return to the Billboards and check what kind of settings belong to each processing phase.

The first phase is the Input, the raw camera image. Its settings can be reached by selecting the INPUT box; its properties then become visible here on the right. There is not much to adjust here. As I mentioned earlier, we can directly select a Video Input Device by name. However, we have already selected these in the Device Mapper and allocated their serial numbers 1, 2 and 3, which leaves us with the appropriate setting here. In every row, the properly indexed device is set already.


The next phase is the Crop action. We click on the CROP box in order to reveal its parameters. First of all, we choose whether we record with a normally positioned or a 90-degree rotated camera. This is done by switching the Portrait button on or off, like this.


After this, we can remove the unnecessary areas of the image. This is useful for removing distracting objects showing up in the green area, or for filling the green area with the person as much as possible. These measures can make the actual placement of the Billboard in the 3D scene a bit easier.

The cropping of the four sides of the rectangular image is done with these four sliders. If we need a finer adjustment, we keep the "Shift" key pressed while moving the sliders horizontally. As we see, this results in a much slower adjustment of the image. During these actions, the standard Undo and Redo functions and their corresponding "Ctrl Z" and "Ctrl Y" keyboard combinations are always applicable. We have other functions available as well. When we right-click on the name of a parameter in the properties panel, we find, for instance, a function called Reset. This sets the slider to the default position of the selected property; in this case that is zero, meaning no cutting at the bottom. We also find the Revert function here, which adjusts the slider to the position that was valid at the initial loading of the set.


In the next phase, we handle the keying process. We select the KEYER box for this. We have two different types of keyers: the Chroma keyer and the Difference keyer. We can shift between them with this switch. Let us take a look at the Difference keyer first, mainly because it has fewer parameters and generally provides a satisfactory result. We have to select a Channel for the background; we may choose between Green (1), Blue (2) or even Red (0) background colours. When this is selected, we have to adjust the black and white levels. The best way to start is to reset both parameters. We then start with the black level and drag its slider until the background disappears; at best, it disappears completely. We may end up with a result where white holes appear on the body or at the sides of the person. We can correct this by adjusting the white parameter with its slider, like this.


The Prefilter parameter can assist us in cases when the contour line of the person is a bit shaky or unstable. The easiest way of adjusting these numeric parameters is to move the mouse up and down while pressing the left mouse button; the parameter will change in increments of 1/100 of a unit. If we repeat the same movement while pressing the Shift key, the parameter changes by larger steps. We should keep the Prefilter at a low value, because it may cause more harm than good.

Our next parameter is the Shrink. With it, we can pull in the contour if we end up with an unwanted outline around the person. With the Soften parameter, we can soften the contour of the person. The Despill parameter is used for filtering out the green reflections on the person; it is worth leaving it ON. It also has a threshold value which can be changed, but in most cases its default value of 1 will do fine.
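To make the roles of these parameters more concrete, here is a minimal sketch of how a difference keyer of this kind can work. This is purely illustrative Python/NumPy: Aximmetry's actual formulas are not public, so the math, the parameter mapping, and the despill rule are all assumptions.

```python
import numpy as np

def difference_key(image, channel=1, black=0.1, white=0.6,
                   despill=True, despill_threshold=1.0):
    """Illustrative difference keyer. The matte is driven by how much the
    chosen background channel (0=Red, 1=Green, 2=Blue) exceeds the other
    two channels. Parameter names echo the panel; the math is a guess."""
    img = image.astype(np.float32) / 255.0
    bg = img[..., channel]
    others = np.delete(img, channel, axis=-1).max(axis=-1)
    diff = bg - others                       # large on the green screen
    # Map the difference through the black/white levels into a 0..1 matte
    # (1 = keep the pixel, 0 = remove it as background).
    matte = 1.0 - np.clip((diff - black) / max(white - black, 1e-6), 0.0, 1.0)
    out = img.copy()
    if despill:
        # Clamp the background channel so it cannot exceed the other
        # channels - a crude stand-in for the Despill behaviour.
        out[..., channel] = np.minimum(out[..., channel],
                                       despill_threshold * others)
    out *= matte[..., None]                  # premultiply by the matte
    return out, matte
```

Raising `black` makes more of the background disappear; lowering `white` fills in "white holes" on the body, which mirrors the slider workflow described above.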


If we cannot achieve a satisfactory result with the Difference keyer, we can try the Chroma keyer. Let us switch this button OFF. The main difference here is that this keyer tries to remove a certain hue from the image. For this, we have to specify a color in the "Color" parameter. This color defines the hue - and to some extent the nearby hues - that will be removed from the image.

A smart way of doing this is to switch back to Input (in the MONITOR 1 box) so that we can sample the desired color shade directly from the input image. We click on the green rectangular field in the properties panel, and a color-selecting dialog appears where we could choose a color manually. The most practical way, however, is to pick a color directly from our input image. We do this by clicking on the pipette icon and, while keeping the mouse button pressed, moving the pipette cursor to a distinct part of the input image. With this move, we select the color of a specific pixel of the image. In addition, if we keep the "Shift" key pressed as well, we select the average value of a small rectangular color area, which may be more useful for defining the correct shade of a green background. We can also use this function with the "Ctrl" key, in which case the average color value is calculated from a slightly larger area. When we release the mouse button and click OK, the selected color is set in the properties panel.


We can now return to the Keyed mode. By adjusting the Hue Tolerance, we set the interval of color shades that we identify as background colors. The Hue Smooth then softens the transition at both ends of the specified hue interval. The Min Saturation excludes the least saturated greens from the background, while with the Min Brightness and Max Brightness properties we can set a brightness interval, allowing a more accurate keying of the person. If I reduce the Min Brightness to the minimum, for instance, the person's almost-black trousers will be keyed out too. The remaining parameters in this panel work the same way as in the Difference keyer; we use them the same way, and they behave the same way as well. They are used to fine-tune the keying with even more precision.
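The interplay of these chroma-keyer parameters can be sketched per pixel. Again, this is only an illustration of the described behaviour, not Aximmetry's implementation; the exact curves and parameter scales are assumptions (hue is in the 0..1 range used by Python's `colorsys`, where 1/3 is pure green).

```python
import colorsys

def chroma_matte(pixel, key_hue=1/3, hue_tolerance=0.08, hue_smooth=0.04,
                 min_saturation=0.25, min_brightness=0.1, max_brightness=1.0):
    """Per-pixel chroma-key matte: 0 = removed as background, 1 = kept.
    pixel is an (R, G, B) tuple of 0..255 values; key_hue is the hue of
    the sampled background color."""
    r, g, b = (c / 255.0 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Weakly saturated pixels, or pixels outside the brightness interval,
    # are never treated as background (Min Saturation / Min-Max Brightness).
    if s < min_saturation or not (min_brightness <= v <= max_brightness):
        return 1.0
    # Circular distance between the pixel hue and the key hue (0..0.5).
    dh = abs(h - key_hue)
    dh = min(dh, 1.0 - dh)
    # Inside the tolerance band -> background; fade out over hue_smooth.
    if dh <= hue_tolerance:
        return 0.0
    if dh >= hue_tolerance + hue_smooth:
        return 1.0
    return (dh - hue_tolerance) / hue_smooth
```

Widening `hue_tolerance` removes more nearby green shades, while `hue_smooth` controls how gradually the matte ramps back to fully opaque at the edge of that band.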


In every stage of the keying phase, we can switch to the Matte mode, where we see the keying mask. On this black-and-white image, we can detect those keying mishaps that do not appear on the colored image. It is worth checking it while doing the fine adjustments.


If we are happy with the keying, we can move on and place our Billboard into the 3D space.

Let us switch to Final mode and click on the BILLBOARD box to display the relevant parameters. First, we have to stop the movement of the camera. This can be done in the Control Board called CAMERAS, by clicking on the Stop button like this. Now we return to the BILLBOARD box. There are basically two methods of placing the billboard in the 3D space. One is the Transformation parameter in the Billboard panel, where we set the place of the billboard object with numeric values. Here we can both move the billboard along 3 axes and adjust its scaling, like this. There is a parameter for rotating as well, but this function is not needed in normal studio settings, except when the camera is inclined or tilted for some reason.
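Under the hood, a numeric transformation like this boils down to a standard translate/rotate/scale transform. As an illustration only (not Aximmetry's internal representation), a 4x4 matrix for moving, uniformly scaling, and optionally yaw-rotating a billboard quad could be built like this:

```python
import numpy as np

def billboard_transform(position=(0.0, 0.0, 0.0), scale=1.0, yaw_deg=0.0):
    """Illustrative 4x4 transform for a billboard quad: uniform scale,
    an optional rotation about the vertical axis, then a translation
    (column-vector convention, Y assumed to be up)."""
    a = np.radians(yaw_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    m = np.eye(4)
    m[:3, :3] = rot * scale       # rotate, then scale uniformly
    m[:3, 3] = position           # translate to the billboard position
    return m
```

Applying the matrix to the quad's corner points moves the whole billboard as one object, which is why the attached shadow spot travels with it.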


The second method is more user-friendly. This leads us to one of the main reasons why it is useful to connect an extra monitor to the system.

On the top toolbar, we open the menu named Edit Scene On. Here we can see all of our outputs and Preview windows listed. If I select the monitor output now, I tell the system that I will perform 3D editing on this monitor. If the Billboard box is selected, the necessary control handles (and arrows), together with our Billboard object, will appear on the selected monitor. I can now move the person in the 3D space freely, either with the help of the arrows or with the help of the small rectangles. By selecting the different buttons at the top, I can choose between different editing modes as well. If I choose scaling here, for instance, I can change the size of the person. A white sphere appears that we can grab with the mouse and move in any direction; with this tool, we can easily adjust the size of the person. There is no reason to use the axis arrows here, since they will distort the image in the given direction.


There is a parameter called Lift in the Billboard properties. This is also used to move the person up and down. What is the main difference, then, compared to the previous 3D movements along the vertical axis? Well, as we can see here, the Billboard has this tiny "shadow" spot attached to it. (We will come back to this later on in detail.) When I move the entire billboard, I move it together with this shadow. Our target here is to place the billboard in a way that the shadow elegantly sticks to the ground surface without losing the shadow effect. Depending on the camera or crop settings, this adjustment sometimes makes the person seem to hover over the studio surface. With the Lift parameter, we can fine-tune the person's vertical position without moving the shadow itself.


Let us take a look at the other parameters of the billboard. First is the Look At Camera. By default, the billboard always turns towards the virtual camera in order to obtain a full-value image of the person. But there can be special camera paths where this does not look good at all. In such cases, it is preferable to turn the billboard in a fixed direction. Obviously, the orientation of the billboard then has to be set by us manually: we select the rotation function and, along the vertical axis, turn the billboard to the desired position.


In the majority of cases, this should be left switched ON. The next one is the Flip, which applies a simple mirror effect to the person. The Cast Shadows switch activates or deactivates the shadow cast by the person. This is not identical to the shadow spot mentioned earlier (we are going to cover that as well); this shadow appears here, behind the person, and is generated by the active virtual light sources in the space.
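The Look At Camera behaviour described above is classic billboarding: the quad is rotated about its vertical axis so that its front faces the viewpoint. A sketch of that yaw computation, assuming a Y-up coordinate system (an assumption; the engine's axis conventions may differ):

```python
import math

def billboard_yaw_towards(billboard_pos, camera_pos):
    """Return the yaw angle (degrees) that rotates a billboard about its
    vertical (Y) axis so its front faces the virtual camera. Positions
    are (x, y, z) tuples; only the horizontal offset matters."""
    dx = camera_pos[0] - billboard_pos[0]
    dz = camera_pos[2] - billboard_pos[2]
    return math.degrees(math.atan2(dx, dz))
```

Because only yaw is applied, the person stays upright even when the camera moves above or below them, which is what keeps a flat cut-out looking natural.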


We have the Receive Shadows parameter, which determines whether the body of the billboard person receives shadows from the virtual objects placed in the virtual surroundings. This can be quite impressive when we, let's say, place a virtual graph in front of a program host, letting this graph cast a shadow on him or her.


The next one is important. This one is called Show AO, and it is the ON/OFF switch of that tiny shadow spot mentioned earlier. What is the purpose of this property? Well, it is obvious that the virtually cast shadow in itself is not sufficient to convincingly establish that the person is firmly standing on the surface. It does not generate enough darkening around the feet, as would occur under real studio lighting. We are trying to emulate this real scenario with the AO function. AO is the abbreviation of the technical term Ambient Occlusion. Here we can adjust its different characteristics. The dark spot itself is generated primarily from the shape of the person; it therefore moves together with the person and even follows, to some extent, the movements of the person's arms. But in order to achieve the right fit with the feet, we have to make certain adjustments to it.


The first parameter here is the AO Front Offset, which adjusts the position of the spot in depth, like this, in order to fit the spot better to the feet. The second is the AO Front Scale, which adjusts the elongation of the spot, also in depth. Then we have the AO Width, which adjusts the width of the spot. The AO Fatten adjusts the spot radius, making it bigger or smaller; this should not be set too high, since it will make the effect unrealistic. Then we have the AO Blur, which adjusts the softness of the spot. If I reduce it to the minimum, we can see the outline of our person in the spot, so this value should be set higher in order to increase the fuzziness of the shadow. The AO Opacity then adjusts the strength or density of the dark spot. Neither this nor the previous parameter should be increased too much; this has to be a subtle effect only. The remaining parameters that I have not covered are tools for the graphic designer and therefore are not so relevant to us here.


Adopting the same methodology, we have to repeat this process and go through the billboard settings belonging to the next two cameras as well. For instance, if we want to set the parameters of the second camera, we change the SELECT CAMERA box to the second virtual camera (CAM 2) and continue by selecting the same phases as earlier. What is important to mention here is that if we want to copy certain settings from one of our cameras - for instance, the keyer settings - we select the appropriate box and choose the Copy Settings option from the right-click menu, or simply use the "Ctrl C" key combination. Then we select our target keyer box and select the Paste Settings option. With this action, we have copied the current keyer parameters from one box to another. Of course, this action only works between identical box types.


It is also worth noting that every change made on a Control Board can be saved by saving the whole set. If we choose Save in the File menu or use the "Ctrl S" key combination, we save the full set with its latest adjustments.


It may also happen that we want to move part of an existing setup into another set, or we want to be able to shift between different setups. We can do this by right-clicking on an empty area; in the menu, we find an option called Save Settings. In this new panel, we can choose a target folder - this can be the Desktop or the specific project folder of the studio - and name the file; here we title it Test. With this action, these settings are now saved, and with the Load Settings option, we can always reload them again. Each save applies only to that specific Control Board; in this case, we have only saved the settings of the BILLBOARDS control board. If we want to save the control board settings of our cameras, then we have to perform a separate Save Settings on that control board.


Let us now continue with the CAMERAS panel. In this control board, we can choose between 3 different virtual cameras, and each camera has its own specific, numbered billboard allocated to it by default. This means that if I select Camera 1, I will see Billboard 1 on the output; the same rule applies to Cameras 2 and 3. Each virtual camera can have a maximum of 8 different camera paths. Here, in this box, we can start the eight different camera paths associated with our Camera 1. Obviously, these can be fixed camera positions as well. I select Camera 2, and here again we can see its 8 associated camera paths, and so on.


Here, at the bottom section, we find a PREVIEW MONITOR OUTPUT box. If we select the PROGRAM mode, the Preview monitor displays the same image as the one going through the SDI output. We also have a special MATRIX mode here; this is the other main reason why it is useful to connect an extra monitor to the system. In this mode, we can follow the images from all three virtual cameras, in addition to the selected output running at the top of the left section. It is clearly visible that the preview images of the currently inactive cameras are jagged. The reason is to reduce the heavy load on the processors: these previews are rendered at a much lower resolution.


In the PREVIEW MONITOR OUTPUT panel, we can also choose each camera separately. This means that we can watch the image of any of the three virtual cameras without having to change the main output image.


Both the MATRIX mode and the three separate camera modes therefore give us the opportunity to watch, or even change, camera paths on the other virtual cameras before the main output is actually switched to another virtual camera. Let us assume that we are preparing to switch the main output to the second virtual camera, but with a different camera path than the one currently selected. We change it here, knowing that the audience does not see this camera yet. When this is done, we go ahead and select CAM 2 as the main output in the SELECT CAMERA box.


We can now notice the change in the main output here. It is also worth knowing that when we change to another virtual camera, the currently active camera path on that camera will start from the beginning. In other words, the newly selected virtual camera does not pick up the current position of the previous output camera; instead, it starts its path again from its initial position. The same thing happens when we shift to a new camera path: the new path starts from its initial position.


Well, how can we define new preset camera paths? This is a rather simple procedure. Let us say that we want to define a new virtual camera path for our Camera 1. We select one of the 8 existing paths that we want to overwrite; Nr. 7 will do, for instance. The path itself is defined by selecting two endpoints. We click on the mode marked with the letter A in the EDIT CAMERA PATH box. We can now set the initial or starting point of the path. We can move the camera with the mouse, either here in the preview panel or on the output monitor.

There is written documentation on the virtual camera movements, but I will cover the subject briefly here, mentioning the most common combinations:

By pressing the right mouse button, we can move horizontally in the space in both directions.

By pressing the left mouse button, we can turn the camera in place.

By pressing the middle mouse button, we can move vertically.

By pressing both the Space key and the right mouse button, we can move back and forth along the viewing direction of the camera.

Similarly, by pressing the Space key and the left mouse button, we can use the zooming function of the camera.


Let us now set the initial point of the new camera path - a distant view like this, for instance. Now we set the end position of the path. The most convenient way is to set the endpoint starting from our A position, and it is quite easy to achieve this: while the A point is selected, we click on the copy icon, then we select the B point and click on the paste icon. We have now overwritten the old B position with the new one. At this moment, we have identical A and B points, and if we now start the camera with the play icon, we get a fixed, static camera image. Basically, this is the way to define a fixed, standing camera. But our task is to make a moving camera, so let us select the B icon and move the camera to another position. We can adjust the zoom as well. When we return to playing mode, the whole camera path sequence plays out.

The speed of the movement can be set in the CAMERA PATH box. Every camera path has a speed parameter in the Pin Value panel. We have now edited path Nr. 7, and therefore we have to adjust the SpeedPath 7 parameter. The unit here is seconds: we set the number of seconds it should take to complete one camera path, from A to B. If I set this value to 2, we end up with a very fast movement.


There are further functions here in the EDIT CAMERA PATH box to modify the camera path. There is the Bounce button, which defines an A-B-A movement of the camera; if I switch it OFF, the movement occurs in only one direction, from A to B. There is also the Looping button. If I switch this button OFF, the movement will not recur but will end at B after one run. Obviously, these settings take effect each time I change the camera path in the CAMERA PATH box. If I change from path 7 to path 6, for instance, then regardless of where path 7 was in its movement sequence, when I select 7 again, the path will start again from its initial position.


These last three buttons define the different smoothing modes for the paths. The first is a linear mode: the camera maintains the same speed along the whole sequence, starting immediately at its defined speed and stopping abruptly as well. The second mode smooths the movement at both ends, resulting in a slowly increasing speed at the start and a slowly decreasing speed at the end. The third mode gives a sudden camera start and a smooth, soft stop at the end. This mode works remarkably well with the one-direction, single-run path setting: whenever you change back to such a path equipped with the third smoothing mode, you see an already moving camera, while its stop is soft and gentle.
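Putting the speed, Bounce, Looping, and smoothing settings together, the playback logic can be sketched as a function mapping elapsed time to a path parameter t between the A and B endpoints. This is an illustration of the behaviour described above, not Aximmetry's implementation; the easing curves in particular are assumptions.

```python
import math

def path_param(elapsed, speed=5.0, bounce=True, looping=True, mode="linear"):
    """Map elapsed seconds to a path parameter t in [0, 1], where 0 is
    endpoint A and 1 is endpoint B. speed is the seconds for one A->B run
    (like the SpeedPath pins); mode is 'linear', 'ease_in_out' (soft start
    and stop) or 'ease_out' (sudden start, gentle stop)."""
    t = elapsed / speed
    cycle = 2.0 if bounce else 1.0          # a Bounce cycle is A->B->A
    t = t % cycle if looping else min(t, cycle)
    if bounce and t > 1.0:                  # second half: travel back to A
        t = 2.0 - t
    if mode == "ease_in_out":
        t = (1.0 - math.cos(math.pi * t)) / 2.0
    elif mode == "ease_out":
        t = math.sin(math.pi * t / 2.0)
    return t

# The camera pose is then a blend of the two endpoints:
#   position = (1 - t) * point_A + t * point_B
```

With Bounce on and Looping on, t oscillates between 0 and 1 forever; with both off, it runs once from 0 to 1 and stays there, matching the single-run behaviour.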


Whenever we edit either of the endpoints, we have the option to use the Undo and Redo functions. It is important to know that the general "Ctrl Z" and "Ctrl Y" undo and redo shortcuts are not applicable here. Instead, we have to use the designated icons in the EDIT CAMERA PATH box, marked with the familiar symbols. If I click on these, the camera jumps back to its previous positions. There is also a Reset button that can be used if we lose our orientation in the scene for some reason: it places us back in the center of the scene.
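The behavior of a dedicated, per-endpoint history like this can be modeled with the usual two-stack undo/redo pattern. A minimal sketch, with hypothetical names, of why a fresh edit discards the redo history:

```python
# Conceptual sketch only; not Aximmetry's implementation.

class EndpointHistory:
    """Two-stack undo/redo history for one camera-path endpoint."""

    def __init__(self, position):
        self._undo = [position]   # every confirmed endpoint position
        self._redo = []

    @property
    def current(self):
        return self._undo[-1]

    def edit(self, position):
        self._undo.append(position)
        self._redo.clear()        # a fresh edit invalidates the redo history

    def undo(self):
        if len(self._undo) > 1:
            self._redo.append(self._undo.pop())
        return self.current

    def redo(self):
        if self._redo:
            self._undo.append(self._redo.pop())
        return self.current

b_point = EndpointHistory((0, 0, 0))
b_point.edit((5, 0, 2))
b_point.undo()      # back to (0, 0, 0)
b_point.redo()      # forward to (5, 0, 2) again
```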


I forgot to mention earlier that if we keep the Ctrl key pressed during any type of camera movement, the camera moves much faster, and if we keep Shift pressed, it moves much slower.


Let us now take a look at the CAMERA & RENDER SETUP box. We are already familiar with the Play and Stop buttons, but there are three other mode selector buttons available to us. The first on the left is the Free camera mode. With this, we can freely move and look around in the scene without altering any already defined camera path.

The Camera mode puts us back into our original setup, where we select manually among the predefined camera paths appearing in all 3 CAMERA PATH boxes. Finally, there is the Playlist mode. This is used when the set contains automation that switches among the camera paths according to a certain predefined logic. Our scene does not contain anything like that.


What else can we do with our billboards, then? I have already mentioned that the system has been configured so that each of the 3 virtual cameras activates the billboard with the same number. In other words, when we switch to a camera with a certain number, the billboard with that number is displayed.


This rule, however, can be applied differently. We may want to configure the system so that one virtual camera displays two billboards. For example, we may have two persons who have been recorded by two separate cameras, possibly at different locations, and now these two people come together in one shared studio space.

To achieve this, we have to return to the BILLBOARDS control board. Here we find a box called "VISIBILITY", which I have not mentioned so far. With the help of this box, we can specify which virtual cameras have access to a specific billboard and make it visible. We can, for instance, decide that the content of Billboard 2 should also be visible when Camera 1 is active, alongside its own associated Billboard 1 content.
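Conceptually, the VISIBILITY box behaves like a mapping from each billboard to the set of cameras allowed to show it. A minimal sketch with hypothetical names, not Aximmetry's data model:

```python
# Conceptual sketch only; billboard/camera names are illustrative.

visibility = {
    "Billboard 1": {1},        # default rule: billboard N follows camera N
    "Billboard 2": {1, 2},     # Billboard 2 is now also shown on Camera 1
    "Billboard 3": {3},
}

def visible_billboards(active_camera):
    """All billboards rendered while the given camera is active."""
    return sorted(name for name, cams in visibility.items()
                  if active_camera in cams)

# Camera 1 now shows both persons; Camera 3 still shows only its own.
on_cam1 = visible_billboards(1)   # ["Billboard 1", "Billboard 2"]
```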


Obviously, we also have to move this Billboard 2 into the virtual space of Camera 1, where it can be seen. The easiest way to do this is to copy the position of the person in Billboard 1 by right-clicking the "Transformation" parameter and selecting Copy. Then we move to the Billboard 2 box and paste it with Paste or "Ctrl V", thereby overwriting the existing values. We can now see the new, second billboard at the same position as the original Billboard 1. As a natural consequence, the size and position of the new billboard are quite odd, but we can easily move and scale this new person to make them work well with the other person in the scene. We can place him behind the desk too. With these adjustments, we have moved both people into one common space.
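The copy-paste-then-adjust sequence can be sketched as follows, assuming hypothetical transformation values purely for illustration:

```python
# Conceptual sketch only; positions and scales are invented example values.

billboards = {
    "Billboard 1": {"position": (1.0, 0.0, 2.0), "scale": 1.0},
    "Billboard 2": {"position": (0.0, 0.0, 0.0), "scale": 1.0},
}

# "Copy" the Transformation from Billboard 1, "Paste" onto Billboard 2,
# overwriting the existing values there.
billboards["Billboard 2"] = dict(billboards["Billboard 1"])

# Then move and scale the second person so both fit behind the desk.
x, y, z = billboards["Billboard 2"]["position"]
billboards["Billboard 2"]["position"] = (x + 0.8, y, z)
billboards["Billboard 2"]["scale"] = 0.9
```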


Of course, 3D editing on the output monitor works the same way as we saw earlier with a single billboard. If I select Billboard 1, I can edit its parameters directly on the output monitor; if I select Billboard 2, the controls appear attached to the other person. I can also work with both billboards by holding the "Ctrl" key while selecting the boxes; multi-selection is supported this way. With both billboards selected, I can move both people simultaneously, like this. Note that the active control arrows are attached to only one person: we grab both objects by the active, arrow-shaped control bar in front, but we can freely choose which of the two control bars becomes active. As we see, the other person has only a small control bar attached, which cannot be gripped at the moment. If I click the white dot, however, that control bar becomes active instead, and I can move both people with it.


Let us talk about some of the control functions that can be performed during live studio sessions. The majority of the settings on our BILLBOARDS control board are rarely modified during broadcasting, if at all, except perhaps the Keying settings. In general, we use the CAMERAS control board for switching between different camera paths. In most sets, however, we will also find a special, tailor-made control board where operations valid exclusively for that set can be executed. A good example is a graph that we can start or stop here, or a newsroom-type crawl that can be displayed on command.


In addition to these, we can also find a setting which is not specifically a live broadcast setting but is unique to this scene: it offers different lighting conditions matching the different parts of the day.


So far we have performed all control functions by clicking on various buttons and icons. This may not be particularly convenient under live conditions. For this reason, it is possible to assign these buttons and switches to various external control devices. The most common one is the keyboard attached to the PC. Let us try to assign the three camera buttons to the "1", "2", and "3" keys on the keyboard. We right-click on our target camera button, select the Assign Keyboard option, and then simply press the desired key, in this case the "1" key. In the same way, we cover the next two cameras as well. Now we have assigned a key to each camera in the box. When a key has been assigned to a button, this is indicated by a small, dark triangle in its bottom left corner, like this.
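Under the hood, this kind of controller assignment amounts to a lookup table from input events to actions. A minimal sketch of the idea, with hypothetical function names, not Aximmetry's internals:

```python
# Conceptual sketch only; not Aximmetry's API.

assignments = {}          # key -> camera number

def assign_key(key, camera):
    """Mimics right-click -> Assign Keyboard -> press the desired key."""
    assignments[key] = camera

def on_key_press(key, switch_camera):
    """Dispatch a key press to the camera switch, if the key is assigned."""
    if key in assignments:
        switch_camera(assignments[key])

# Assign the "1", "2", "3" keys to cameras 1, 2, 3 respectively.
for key, cam in (("1", 1), ("2", 2), ("3", 3)):
    assign_key(key, cam)

switched_to = []
on_key_press("2", switched_to.append)   # selects camera 2
on_key_press("x", switched_to.append)   # unassigned key: ignored
```

The same table-driven idea extends naturally to GPIO, MIDI, and DMX events: only the source of the key/button identifier changes.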


We can now choose from the available cameras by pressing their assigned keys on the keyboard. Another common controller is a GPIO device. If such equipment is connected to the system, we can use it too: we select Assign GPIO and simply press the target button on the device. In the same manner, we can use MIDI and DMX controllers as well.


Let us now return to the node-based editing interface, which we call the FLOW Editor. This was covered briefly at the beginning of this video, where I also mentioned that this editing interface is most useful while the studio is being built. In addition to its core purpose, certain pre-broadcast settings may have to be configured here. One good example is when we want to load specific video content into the virtual display in the background.

We do this by selecting the target video in the File Browser, for instance a newsroom video showing an animated graph. We simply drag the video into the editor; this creates a video player box for the file. We then wire this box into its destination box in the set, in this case the purple LCD Screen 2 box, and immediately see the new video on the panel in the back. If we want to change the video, we simply drag the new one from the File Browser directly onto the existing video player box. Again, we see the change immediately.
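As a mental model of what the drag-and-wire gesture does in a node graph, here is a minimal sketch; node names and file names are hypothetical and this is not the FLOW Editor's actual API:

```python
# Conceptual sketch only; names and files are illustrative.

nodes = {}        # node name -> properties
wires = []        # (source node, target node) connections

def add_video_player(name, file_path):
    """Dragging a video into the editor creates a player node for it."""
    nodes[name] = {"type": "VideoPlayer", "file": file_path}

def wire(source, target):
    """Wiring connects the player's output to a destination box."""
    wires.append((source, target))

def replace_video(player, new_file):
    """Dropping a new file onto an existing player swaps its content."""
    nodes[player]["file"] = new_file

add_video_player("News Video", "newsroom_graph.mp4")   # hypothetical file
wire("News Video", "LCD Screen 2")
replace_video("News Video", "weather_loop.mp4")        # hypothetical file
```

Replacing the file touches only the player node, which is why the wire to LCD Screen 2 keeps working and the change shows up immediately.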


We will stop here, since this is a complex area that can consist of many unique functions based on the requirements of each new set.

And this concludes the explanation of the control board functions of virtual camera sets.
