m_indicator = Instantiate(appManifest.DropIndicator, m_appPlacementVisual.transform);
This uses a RayInteractor with a filter attached to the ray. The filter checks the Scene element to determine what type of surface the input is moving over.
For example, certain apps have to be placed on a desk. This filtering is disabled in DroneRage, but it is useful when you want to query the type of each Scene object.
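As a rough illustration of that kind of label check (a minimal sketch, not the actual Discover code; the component name and the "DESK" label string are assumptions), a filter can look up the semantic classification of the Scene element that a ray hit:

```csharp
using UnityEngine;

// Hypothetical sketch: decides whether a raycast hit landed on a Scene
// element carrying a required semantic label (e.g. a desk surface).
public class SceneLabelFilter : MonoBehaviour
{
    // The label string is an assumption for illustration purposes.
    [SerializeField] private string m_requiredLabel = "DESK";

    public bool IsValidSurface(RaycastHit hit)
    {
        // Scene anchors expose their semantic labels via OVRSemanticClassification.
        var classification = hit.collider.GetComponentInParent<OVRSemanticClassification>();
        return classification != null && classification.Contains(m_requiredLabel);
    }
}
```

A placement system can call `IsValidSurface` on each ray hit and only show the drop indicator when the check passes.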
Locomotion
Locomotion for both controllers and hands has been scaled back because Mixed Reality apps take place within a user’s physical space, unlike larger virtual reality scenes where there is more room to move around.
Discover uses standard controller locomotion, where the joystick allows the user to move around and teleport. Hands use a gesture built from built-in prefabs from the Interaction SDK.
Discover and DroneRage FAQ
This guide is based on an interview with the engineering team that created Discover and DroneRage. The topics above have covered how Discover and DroneRage integrated colocation, Interaction SDK, and Scene. Below are a few topics not yet covered.
Q: What are the most important Meta Quest features used in the Discover Showcase? Can you list them? A: Passthrough, Spatial Anchors, Shared Spatial Anchors, Avatars, Interaction SDK, Audio, and Scene.
Q: Was there anything specialized to note when embedding Avatars or Audio into DroneRage? A: Avatars in DroneRage don’t do anything special. For audio, Discover uses the standard spatial audio SDK, the Meta XR Audio SDK.
Q: How was developing DroneRage in MR vs. VR? Were there major differences in the development process? It seems like the game loop was the same, and it was more about getting the underlying MR SDKs in order with the colocation package. A: Yes. It wasn’t a particularly different workflow. Testing is comfortable in MR because you’re not taken completely out of your real space every time you put the headset on to test something; for example, I can chat on my computer while wearing the headset.
In terms of using Unity, it’s basically colocation, Passthrough, and the Scene API. Getting all of these integrated was the only real difference. It is essentially a VR app; you are just using the Passthrough and Scene APIs.
Q: What was the performance optimization process? Was there anything specific used during development (i.e., tools, processes, a focus on computation cost)? A: A key thing is that you are not rendering an entire environment like in VR; there is a lot less rendering to do in MR, although Passthrough has a performance cost. We did the standard things from our Meta Utilities package. For example, we extensively used the AutoSet class in /Packages/com.meta.utilities/AutoSet.cs.
We use that because Unity’s GetComponent is slow, so this avoids calling it when possible.
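The idea behind that pattern can be sketched roughly as follows (this is an illustrative sketch of the technique, not the actual contents of AutoSet.cs): an attribute marks serialized fields so an editor script can populate them at edit time, which means the component never has to call GetComponent at runtime.

```csharp
using System;
using UnityEngine;

// Illustrative attribute: an editor script (not shown) would find fields
// marked with this and assign them via GetComponent once, at edit time.
[AttributeUsage(AttributeTargets.Field)]
public class AutoSetAttribute : PropertyAttribute { }

public class Drone : MonoBehaviour
{
    // Populated in the editor, so runtime code like Update() reads a cached
    // reference instead of paying for GetComponent every call.
    [SerializeField, AutoSet] private Rigidbody m_rigidbody;

    private void Update()
    {
        // m_rigidbody is already set; no GetComponent needed here.
    }
}
```

The same goal can also be reached by caching GetComponent results in Awake, but moving the lookup to edit time removes the cost entirely.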
We also used the Universal Render Pipeline, which runs fine. That did require porting some shaders, which was non-trivial to do. In the public repo you can see those shader changes; check out the Shaders-related commit. It might be useful to some of our community developers.
Q: How do the drones move in DroneRage? Are there any agents, bots, or ML usage? A: They don’t use ML Agents or any Unity package. The way that enemies work is a simple state machine system. For example, you can study the EnemyBehaviour class in /Assets/Discover/DroneRage/Scripts/Enemies/EnemyBehaviour.cs.
Each state has an EnterState method, an UpdateState method, and so on. All of this can be inherited, but someone building this from scratch could instead use an Animator so they can visualize the state machine, or use Unity’s AI tools.
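A minimal sketch of that state-machine pattern might look like this (the class and method names are illustrative, not the actual EnemyBehaviour API):

```csharp
using UnityEngine;

// Base class for one enemy state; subclasses override the hooks they need.
public abstract class EnemyState
{
    public virtual void EnterState(EnemyStateMachine enemy) { }
    public virtual void UpdateState(EnemyStateMachine enemy) { }
    public virtual void ExitState(EnemyStateMachine enemy) { }
}

// Owns the current state and forwards Unity's Update to it.
public class EnemyStateMachine : MonoBehaviour
{
    private EnemyState m_currentState;

    public void TransitionTo(EnemyState nextState)
    {
        m_currentState?.ExitState(this);
        m_currentState = nextState;
        m_currentState.EnterState(this);
    }

    private void Update() => m_currentState?.UpdateState(this);
}
```

Concrete states such as a patrol or attack behaviour subclass EnemyState and call TransitionTo when their conditions are met.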
Q: How did you test while developing? A: The primary difficulty we had is testing colocation because you need two devices in the same physical space.
ParrelSync is useful with XR Simulator for testing multiplayer in general. You can have two Unity Editors open at once and connect their sessions, connecting one of them as remote rather than colocated. You can do that with two XR Simulators, or with one XR Simulator and one Link. This is useful for multiplayer testing, including this specific use case.
Learn more about Mixed Reality Development
Are you ready to start designing experiences for Mixed Reality? We’ve put together the following design guidelines to help you understand MR’s crucial elements, what you can do with them, and how you can design great MR experiences.
General Best Practices: How to start designing for MR, including best practices for interacting with virtual content in Mixed Reality.
Mixed Reality Experience and Use Cases: Mixed Reality is a foundational pillar of the current and future Meta Quest headsets and is a key part of delivering next-level experiences to users in the Meta Quest ecosystem. In this document you will explore some of the use cases for MR.
Scene Understanding: Use the real world as a canvas using Scene Understanding.
Passthrough: Blend virtual objects with the physical environment using Passthrough.
Spatial Anchors: Anchor virtual objects in the real-world environment, and provide shared MR experiences.
Health & Safety: Learn how to design safe MR experiences.
Mixed Reality Utility Kit Overview
Overview
MR Utility Kit provides a rich set of utilities and tools on top of Scene API to perform common operations when building spatially-aware apps. This makes it easier to program against the physical world, and allows developers to focus on what makes their app unique.
The utilities we provide broadly fall into three categories:
Scene queries
Ray cast queries without using the built-in physics engine.
Find a valid spawn position on the floor, walls, or other surfaces.
Find the best surface position for placing content using ray casts.
Check if a position is inside a room.
Get the bounding box of a room.
Get the parent/child relationship between anchors (for example, volumes stacked on top of each other or wall art attached to walls).
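A few of the queries above can be sketched as follows (a hedged sketch; the exact method names and signatures may differ between MR Utility Kit versions):

```csharp
using UnityEngine;
using Meta.XR.MRUtilityKit;

// Illustrative use of MRUK scene queries against the current room.
public class SceneQueryExamples : MonoBehaviour
{
    public void RunQueries()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();

        // Check if a position is inside the room.
        bool inside = room.IsPositionInRoom(transform.position, testVerticalBounds: true);

        // Get the bounding box of the room.
        Bounds bounds = room.GetRoomBounds();

        // Ray cast against scene anchors without the built-in physics engine.
        var ray = new Ray(transform.position, transform.forward);
        if (room.Raycast(ray, Mathf.Infinity, out RaycastHit hit, out MRUKAnchor anchor))
        {
            Debug.Log($"Hit {anchor.name} at {hit.point}");
        }
    }
}
```

Queries like these should run after MRUK has finished loading the scene, for example from its scene-loaded callback.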
Graphical helpers
Render the walls such that the textures wrap around smoothly, without any seams, and don’t get stretched and deformed depending on their size. This is crucial for reskinning mixed reality worlds.
Make it easy to place virtual objects and furniture to replace their physical counterparts with various options to match orientation, size, and aspect ratio.
A room boundary implementation for the application to keep users safe.
Development tools
Scene debugger allows you to visually inspect the anchors to get their location, orientation, labels, etc.
A selection of 30 prefab rooms for testing your application to ensure it works in a variety of environments.
Health and safety guidelines
While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Mixed Reality H&S Guidelines before designing and developing your app using this sample project or any of our Presence Platform Features.
Getting started
Requirements
Unity 2021.3.30f1