VR Camera



This is a generic VR camera implementation. It features two cameras, offset by a given distance, rendering with a specific 360º projection. Its main advantages are:

  • It works with any scene component that can be rendered in an offline renderer: meshes, hair, particles, volumetrics and complex shading networks.
  • An easy learning curve for creating VR content: just add the VR camera to your existing project.
  • Modest hardware requirements for playback. Any platform that can play a video with the required projection can reproduce the generated content.
  • Content that is easy to distribute, either as a video file or via video streaming. It can be played back using dedicated software or an app, or with web standards like WebGL and WebVR. It also works with Google and Facebook 360 3D videos.

Limitations

  • Poles: By default, the poles will show very evident artifacts. You need to adjust the stereoscopic effect for each scene and smooth it near the poles, which diminishes the stereoscopic effect there.
  • Tilt: Due to the way the stereoscopic effect is computed, tilting your head will destroy the stereoscopic perception.
  • Parallax: When you move your head along any axis, there is a change in the viewpoint that the offline-rendered VR scene cannot take into account. This can diminish the immersion of the experience, since the content can only react to head rotations.


The cameras page has more details about the common camera parameters. The additional attributes are shown below:

It is recommended that Lock Sampling Pattern is enabled in the Arnold Render Settings when using the vr_camera. This avoids the sampling pattern changing from one frame to the next, which can be distracting in VR.
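
In the core Arnold API, locking the sampling pattern amounts to using a constant AA_seed instead of the frame number. A minimal sketch using the Arnold Python bindings (assuming an Arnold 5-era install where AiUniverseGetOptions takes no arguments):

    from arnold import *

    AiBegin()

    # Locking the sampling pattern corresponds to keeping AA_seed constant
    # across frames; plugins normally set it to the frame number, which
    # makes the noise pattern change every frame.
    options = AiUniverseGetOptions()
    AiNodeSetInt(options, "AA_seed", 1)

    AiEnd()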


Mode

There are four mode options available so that you can get the result that best suits your pipeline (see the scripting sketch after the list). They are as follows:

  1. Side by Side
  2. Over Under
  3. Left Eye
  4. Right Eye
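
As a sketch, the mode can also be set directly on the camera node through the Arnold Python API. The enum strings below ("side_by_side" and so on) are assumed to match the list above:

    from arnold import *

    AiBegin()

    # Create the VR camera and pick a stereo packing mode; the other
    # assumed options are "over_under", "left_eye" and "right_eye".
    camera = AiNode("vr_camera")
    AiNodeSetStr(camera, "name", "my_vr_camera")
    AiNodeSetStr(camera, "mode", "side_by_side")

    AiEnd()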

Stacked eye formats and cube map formats can sometimes show seams in the final immersive image. To avoid them:

  • use a 1x1 box filter (this is fine because a softer additional filter will be applied when the cube map is warped for display).
  • use resolutions that are multiples of the number of faces stacked in each direction, as the check below illustrates. For instance, for a 3x2 cube map with one eye on top of the other, the vertical resolution should be a multiple of 4 and the horizontal resolution a multiple of 3.
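
For instance, the resolution rule for an over-under 3x2 cube map can be checked with a small hypothetical helper, just to illustrate the arithmetic:

    def seam_safe_3x2_over_under(width, height):
        # A 3x2 cube map stacks 3 faces horizontally. With one eye on top
        # of the other, there are 2 faces x 2 eyes = 4 tiles vertically,
        # so the width must divide by 3 and the height by 4.
        return width % 3 == 0 and height % 4 == 0

    print(seam_safe_3x2_over_under(3072, 4096))  # True: 3 * 1024 by 4 * 1024
    print(seam_safe_3x2_over_under(1000, 1000))  # False: 1000 is not a multiple of 3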

Projection

Depending on the selected projection and options, each sample corresponds to a ray direction such that all of the space around the camera is completely covered. Choose between lat-long, cube map (6x1), and cube map (3x2).

Latlong Projection

Cubemap (6x1)

Cubemap (3x2)

The 3×2 cube map has the advantage of a more convenient image aspect ratio.
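
To illustrate how a sample position maps to a ray direction, here is a sketch of the lat-long case using the standard equirectangular formula (illustrative only, not Arnold's internal code):

    import math

    def latlong_direction(u, v):
        # Map normalized screen coordinates (u, v in [0, 1]) to a unit ray
        # direction covering the whole sphere: u spans 360º of longitude
        # and v spans 180º of latitude, from the top pole to the bottom.
        longitude = (u - 0.5) * 2.0 * math.pi
        latitude = (0.5 - v) * math.pi
        x = math.cos(latitude) * math.sin(longitude)
        y = math.sin(latitude)
        z = math.cos(latitude) * math.cos(longitude)
        return (x, y, z)

    print(latlong_direction(0.5, 0.5))  # (0, 0, 1): straight ahead
    print(latlong_direction(0.5, 0.0))  # approximately (0, 1, 0): straight up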

Eye Separation

Defines the separation between the right and the left camera, required to achieve the stereoscopic effect. The camera origin is updated for each sample and displaced from the center perpendicular to the ray direction. Doing this at the sample level rather than per pixel creates a better result than using two physical cameras. Here is a picture explaining this:
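
In code form, the per-sample displacement can be sketched like this (a simplified illustration that ignores pole merging and the eye-to-neck offset; the handedness convention is an assumption):

    import math

    def eye_origin(center, ray_dir, eye_separation, right_eye):
        # Offset the camera origin by half the eye separation, perpendicular
        # to the ray direction and within the horizontal plane. Because each
        # sample has its own ray_dir, each sample gets its own origin.
        cx, cy, cz = center
        dx, _, dz = ray_dir
        length = math.hypot(dx, dz)
        if length == 0.0:
            return center  # Ray points straight at a pole: no horizontal offset.
        px, pz = dz / length, -dx / length  # perpendicular, horizontal
        half = 0.5 * eye_separation if right_eye else -0.5 * eye_separation
        return (cx + px * half, cy, cz + pz * half)

    print(eye_origin((0, 0, 0), (0, 0, 1), 0.064, right_eye=True))
    # (0.032, 0.0, 0.0): displaced along +X for a ray along +Z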

Eye to Neck

The horizontal distance from the neck to the eye.

Top Merge Mode

Defines the merging function for the sky. Usually, a Cosine function (Cos) will be smoother and less prone to artifacts. Choose between None, Cosine, or Shader.

Top Merge Angle

Defines the angle in degrees at which the merge starts to take effect in the sky. The nearer the angle is to the pole (0º for the top, 180º for the bottom), the stronger the stereoscopic effect you keep below it, but the more likely artifacts are to appear at the pole.

Below you can see the difference between top merge angles from 0 to 80 degrees, using a Cosine merging function:
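
A cosine merge can be sketched as a smooth ramp that scales the eye separation down to zero at the pole. The exact curve Arnold uses is not documented here, so this is only an illustration:

    import math

    def top_merge_scale(angle_deg, merge_angle_deg):
        # Scale factor applied to the eye separation. angle_deg is measured
        # from the top pole (0º). At or past the merge angle, the full
        # separation is kept; nearer the pole, a half-cosine ramp fades it
        # to zero so rays at the pole have no stereo offset.
        if angle_deg >= merge_angle_deg:
            return 1.0
        t = angle_deg / merge_angle_deg  # 0 at the pole, 1 at the merge angle
        return 0.5 - 0.5 * math.cos(t * math.pi)

    print(top_merge_scale(90.0, 80.0))  # 1.0: full stereo at the equator
    print(top_merge_scale(0.0, 80.0))   # 0.0: no stereo at the top pole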

Bottom Merge Mode

Defines the merging function for the floor. Usually, a Cosine function (Cos) will be smoother and less prone to artifacts. Choose between None, Cosine, or Shader.

Bottom Merge Angle

Defines the angle in degrees at which the merge starts to take effect on the floor. The nearer the angle is to the bottom pole (180º), the stronger the stereoscopic effect you keep above it, but the more likely artifacts are to appear at the pole.

If the bottom_merge_angle is above the top_merge_angle (that is, smaller in degrees), it will be clamped to the top_merge_angle.
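
In other words, measuring both angles in degrees from the top pole, the effective bottom angle can never be smaller than the top angle, which a one-line sketch makes explicit:

    # Hypothetical variable names; both angles in degrees from the top pole.
    effective_bottom_merge_angle = max(bottom_merge_angle, top_merge_angle)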

Merge Shader

This is used when the merge mode is set to Shader. It gives finer control over the smoothing of the poles, for example, when you have to integrate 3D renders with real-life footage from cameras that use a very specific pole merging. Without a merge shader, only the generic pole merging is available. Black in the shader results in no merging at all, while white is completely merged.

Example of a ramp shader used to smooth the poles
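
Conceptually, the shader output acts as a per-direction merge amount. Here is a sketch of the black-to-white mapping described above (illustrative only):

    def merged_eye_separation(eye_separation, shader_value):
        # shader_value is the shader's output for the current ray direction:
        # 0.0 (black) keeps the full eye separation (no merge at all), while
        # 1.0 (white) removes it entirely (completely merged, no stereo).
        return eye_separation * (1.0 - shader_value)

    print(merged_eye_separation(0.064, 0.0))  # 0.064: no merging
    print(merged_eye_separation(0.064, 1.0))  # 0.0: completely merged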


Workflow

Pole Merging

While this method creates a very nice 3D effect for objects around the viewer, some nasty artifacts will appear at the top and bottom poles. This is because the camera offset makes it impossible for the rays to reach the top and bottom positions, as you can see here:


Here you can see an example of how the poles look in this case:


We can fix these artifacts by smoothing out the camera displacement when rays point upwards or downwards. This solution removes the stereoscopic effect at the poles, but in practice, this is not very noticeable. In this case, the camera rays will look like this:



The pole merging will generate a result like this:


Different settings are provided to control this pole merging so it can be adjusted depending on the scene. First, the top and bottom pole parameters are independent, as they can have different requirements in your scene. For example, the pole merging at the floor might need to be very smooth if the floor has a lot of detail, while the top merging can be more aggressive if the sky is a flat color, giving the upper hemisphere a more pronounced stereoscopic effect. An aggressive merging at the top will look like this:



Pole Merging Workflow

To maximize the stereoscopic effect in the scene while avoiding pole artifacts, the top and bottom poles can be adjusted independently to better suit the specifics of a scene.

The artifacts at the poles can be of two different kinds:

Distortion Artifacts:

The single image you see from one eye shows some deformations at the poles: lines that should be perpendicular do not meet at 90 degrees. These artifacts tend to appear as you increase the Merge Angle. Some examples:


Stereoscopic Artifacts:

When you look at a single-eye image, you might only notice a distortion like the one described above, but when you view both the right and left images together, you will notice a circular wave at the pole. Here is an example:

Right eye / Left eye


Example Workflow: Mery Video

Making this video was quite simple. The following steps were taken to create it:

  • A generic VR camera was added and adjusted with the correct parameters.
  • The scene was fixed up so that you can look in any direction: behind the windows, some images were added to simulate the outdoors, as well as a couple of columns to hide some lighting artifacts that didn't appear through the original scene camera.
  • A single frame with the credits was rendered and added back into the final video inside Nuke.