Scene Support must be set to Required (in order to request spatial data permission).

Graphics API must be set to Vulkan.

Rendering mode must be set to Multiview.
Scene support

The feature requires the application to be granted the com.oculus.permission.USE_SCENE permission for spatial data; the Depth API will not work without it. To add the permission to the AndroidManifest, set Scene Support to Required in OVRManager.

Additionally, the application must prompt the user to accept the permission. The code for this is provided in the Depth API implementation.
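A minimal sketch of such a prompt, using Unity's standard Android runtime-permission API (the class name and callback wiring here are illustrative, not the sample's actual code):

```csharp
using UnityEngine;
using UnityEngine.Android;

public class ScenePermissionRequester : MonoBehaviour
{
    // Permission string as named in this documentation.
    const string UseScenePermission = "com.oculus.permission.USE_SCENE";

    void Start()
    {
        if (Permission.HasUserAuthorizedPermission(UseScenePermission))
            return;

        var callbacks = new PermissionCallbacks();
        callbacks.PermissionGranted += _ => Debug.Log("Spatial data permission granted");
        callbacks.PermissionDenied += _ => Debug.Log("Spatial data permission denied");
        Permission.RequestUserPermission(UseScenePermission, callbacks);
    }
}
```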
Occlusion

The main use case for the Depth API is allowing real-world objects to occlude virtual objects visible in passthrough. This sample project showcases occlusion and provides guidance for implementing the feature in a Unity project.

It provides two example projects that showcase occlusion in two contexts: the Universal Render Pipeline and the Built-in Render Pipeline.

There are two types of occlusion available: hard occlusions and soft occlusions. Hard occlusions are easier to integrate and perform better, but are less visually appealing. For details on implementation, troubleshooting, and integration, please refer to the GitHub repository.

Occlusions are implemented by writing to the alpha channel of the application buffer, which is later used to composite the application buffer with passthrough.
Depth API

The Depth API itself is a much lower-level feature. It can be found in the XR.Oculus.Utils namespace and is referred to as Environment Depth.
Support

The Depth API is only supported on Quest 3 devices. The feature provides a convenience function to check whether it is supported:

Utils.GetEnvironmentDepthSupported()
Setup

In order to use Environment Depth, SetupEnvironmentDepth needs to be called to initialize runtime resources. When environment depth is no longer needed and its resources should be freed, calling ShutdownEnvironmentDepth will clean everything up.

Utils.SetupEnvironmentDepth(EnvironmentDepthCreateParams createParams)

Utils.ShutdownEnvironmentDepth()

If the application has not requested the USE_SCENE permission, SetupEnvironmentDepth will request it automatically. The service starts producing depth textures once the permission is granted.
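The lifecycle above can be sketched as a component that sets up depth when enabled and tears it down when disabled. This assumes the Unity.XR.Oculus package's Utils class; whether EnvironmentDepthCreateParams is a top-level type or nested under Utils may vary by package version:

```csharp
using Unity.XR.Oculus;
using UnityEngine;

public class EnvironmentDepthLifecycle : MonoBehaviour
{
    void OnEnable()
    {
        // Quest 3 only; bail out on unsupported devices.
        if (!Utils.GetEnvironmentDepthSupported())
        {
            enabled = false;
            return;
        }

        // Initializes runtime resources; also requests USE_SCENE if it
        // has not been granted yet.
        Utils.SetupEnvironmentDepth(new EnvironmentDepthCreateParams());
    }

    void OnDisable()
    {
        // Free the runtime resources when depth is no longer needed.
        Utils.ShutdownEnvironmentDepth();
    }
}
```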
Runtime controls

After SetupEnvironmentDepth is called, the feature can be enabled or disabled at runtime via:

public static void SetEnvironmentDepthRendering(bool isEnabled)

Note: even if the application doesn't consume Environment Depth textures, the feature incurs a performance overhead while enabled. Make sure to call SetEnvironmentDepthRendering with isEnabled: false when depth is not needed to improve performance.
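A small sketch of that advice: gate depth production on whether occlusion is actually in use (the scenario and class name are illustrative):

```csharp
using Unity.XR.Oculus;
using UnityEngine;

public class OcclusionController : MonoBehaviour
{
    bool occlusionActive;

    public void SetOcclusionActive(bool active)
    {
        if (occlusionActive == active)
            return;
        occlusionActive = active;

        // Avoid the overhead of producing depth textures nothing consumes.
        Utils.SetEnvironmentDepthRendering(active);
    }
}
```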
Rendering

In order to consume depth textures each frame, you need to call this function: Utils.GetEnvironmentDepthTextureId(ref uint id)

A successful query returns true and writes the texture ID to the ref parameter. This texture ID can be used to retrieve a RenderTexture via XRDisplaySubsystem.GetRenderTexture. The RenderTexture can then be used in rendering or in compute shaders.
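A per-frame fetch might look like the sketch below. It assumes an active XRDisplaySubsystem can be found via SubsystemManager; the shader property name is a hypothetical placeholder:

```csharp
using System.Collections.Generic;
using Unity.XR.Oculus;
using UnityEngine;
using UnityEngine.XR;

public class DepthTextureFetcher : MonoBehaviour
{
    XRDisplaySubsystem display;

    void Start()
    {
        var displays = new List<XRDisplaySubsystem>();
        SubsystemManager.GetSubsystems(displays);
        if (displays.Count > 0)
            display = displays[0];
    }

    void Update()
    {
        uint textureId = 0;
        if (display == null || !Utils.GetEnvironmentDepthTextureId(ref textureId))
            return;

        // Resolve the ID into a RenderTexture usable in materials or compute shaders.
        RenderTexture depthTexture = display.GetRenderTexture(textureId);
        if (depthTexture != null)
            Shader.SetGlobalTexture("_EnvironmentDepthTexture", depthTexture); // assumed property name
    }
}
```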
Hand Removal

If you render virtual hands in place of the user's physical hands, the two can depth-fight. To avoid this, you can enable the hand removal feature, which removes the hands from the environment depth texture and replaces them with an approximate background depth. To check whether the device supports it:

Utils.GetEnvironmentDepthHandRemovalSupported()

To toggle the feature, call:

Utils.SetEnvironmentDepthHandRemoval(bool enabled)
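Putting the support check and the toggle together, a minimal sketch (using only the calls named above):

```csharp
using Unity.XR.Oculus;
using UnityEngine;

public class HandRemovalSetup : MonoBehaviour
{
    void Start()
    {
        if (Utils.GetEnvironmentDepthHandRemovalSupported())
            Utils.SetEnvironmentDepthHandRemoval(true);
        else
            Debug.Log("Environment depth hand removal is not supported on this device.");
    }
}
```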
Advanced use

In addition to depth textures, applications can access per-eye metadata, returned in the form of the EnvironmentDepthFrameDesc struct. This contains useful information for more precise and advanced use of depth textures. To get the struct, call:

Utils.GetEnvironmentDepthFrameDesc(int eye)

It returns an EnvironmentDepthFrameDesc struct with the following fields:
The isValid field indicates whether the depth frame is valid. If it is false, the other fields in the struct may contain invalid data.

The createTime and predictedDisplayTime fields represent the time at which the depth frame was created and the predicted display time for the frame, respectively.

The swapchainIndex field is the index of the swapchain image that contains the current depth frame. The current implementation doesn't let you query a specific swapchain index.

The createPoseLocation and createPoseRotation fields represent the location and rotation of the pose at the time the depth frame was created.

The fovLeftAngle, fovRightAngle, fovTopAngle, and fovDownAngle fields represent the field-of-view angles of the depth frame.

The nearZ and farZ fields represent the near and far clipping planes of the depth frame.

The minDepth and maxDepth fields represent the minimum and maximum depth values of the depth frame.
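Reading the metadata follows the pattern below; the eye-index convention (0 = left) is an assumption, and the field names are those listed above:

```csharp
using Unity.XR.Oculus;
using UnityEngine;

public class DepthFrameInspector : MonoBehaviour
{
    void Update()
    {
        var desc = Utils.GetEnvironmentDepthFrameDesc(0); // 0 = left eye (assumed)
        if (!desc.isValid)
            return; // other fields may contain invalid data

        // Near/far planes and the depth range are what you need to
        // linearize or reproject raw depth samples.
        Debug.Log($"Depth frame: nearZ={desc.nearZ}, farZ={desc.farZ}, " +
                  $"depth range=[{desc.minDepth}, {desc.maxDepth}]");
    }
}
```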
For example, the GitHub sample's implementation of occlusions uses this information to reproject depth frame pixels into the application's eye buffer frame. This makes occlusion work irrespective of the application's camera configuration, and compensates for motion caused by the difference in display times between the application and passthrough.

Unity Scene Overview
Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Design guidelines before designing and developing your app using Scene.

What is Scene?

Scene empowers you to quickly build complex and scene-aware experiences with rich interactions in the user's physical environment. Combined with Passthrough and Spatial Anchors, Scene capabilities enable you to build Mixed Reality experiences and create new possibilities for social connections, entertainment, productivity, and more.

Mixed Reality Utility Kit provides a rich set of utilities and tools on top of the Mixed Reality APIs, and is the preferred way of interacting with Scene.

How Does Scene Work?

Scene Model

Scene Model is the single, comprehensive, up-to-date representation of the physical world that is easy to index and query. It provides a geometric and semantic representation of the user's space so you can build mixed reality experiences. You can think of it as a scene graph for the physical world.

The primary use cases are physics, static occlusion, and navigation against the physical world. For example, you can attach a virtual screen to the user's wall or have a virtual character navigate on the floor with realistic occlusion.

Scene Model is managed and persisted by the Meta Quest operating system. All apps can access Scene Model. You can also access the Scene Model over Link. You can use the entire Scene Model, or query the model for specific elements.

As the Scene Model contains information about the user's space, you must request the app-specific runtime permission for Spatial Data in order to access the data. See Spatial Data Permission for more information.

Scene Capture and Scene Model

Space Setup

Space Setup is the system flow that generates a Scene Model. Users can navigate to Settings > Physical Space > Space Setup to capture their scene. The system will assist the user in capturing their environment, providing a manual capture experience as a fallback. In your app, you can query the system to check whether a Scene Model of the user's space exists. You can also invoke Space Setup if needed. Refer to Requesting Space Setup for more information.
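A sketch of that check-and-fallback flow, assuming the Meta XR Core SDK's OVRSceneManager component (the method and event names are from that SDK and should be verified against your installed version):

```csharp
using UnityEngine;

public class SceneModelBootstrap : MonoBehaviour
{
    [SerializeField] OVRSceneManager sceneManager;

    void Start()
    {
        // If no captured Scene Model exists, fall back to Space Setup.
        sceneManager.NoSceneModelToLoad += OnNoSceneModel;

        // Query the system for an existing Scene Model of the user's space.
        sceneManager.LoadSceneModel();
    }

    void OnNoSceneModel()
    {
        // Invokes the system Space Setup flow so the user can capture their room.
        sceneManager.RequestSceneCapture();
    }
}
```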