You can access all the functionality provided by MR Utility Kit either via Blueprint or from C++.
The first thing you will want to do is load a scene. This can be done via an Async Blueprint node or the MRUKSubsystem:
LoadSceneFromDeviceAsync: This async node loads the Scene data stored on your device. The Success pin executes if the Scene data was loaded; otherwise the Failure pin executes.
LoadSceneFromJsonAsync: This async node loads the Scene data from a JSON string previously saved via SaveSceneToJsonString. The Success pin executes if the Scene data was loaded; otherwise the Failure pin executes. This is useful if you want to capture your scene and then iterate on it in the editor without needing to run on your device.
For reference, take a look at the level Blueprint in Core.umap, which first tries to load data from the device. If that fails, it falls back to loading a random room from JSON. The sample project has about 30 previously captured rooms in a Data Table where each row contains a different room.
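In C++, the same flow can be sketched against the MRUKSubsystem. This is a sketch only: the OnSceneLoaded delegate name and its bool parameter are assumptions here, so verify the exact event to bind against your MRUK version.

```cpp
// Sketch: load the scene from the device and fall back to a previously
// captured JSON room if that fails. OnSceneLoaded and its bool parameter
// are assumed names; check your MRUK version for the exact delegate.
void AMySceneLoader::BeginPlay()
{
    Super::BeginPlay();
    UMRUKSubsystem* Subsystem = GetGameInstance()->GetSubsystem<UMRUKSubsystem>();
    // Bind the completion handler before kicking off the asynchronous load.
    Subsystem->OnSceneLoaded.AddDynamic(this, &AMySceneLoader::HandleSceneLoaded);
    Subsystem->LoadSceneFromDevice();
}

void AMySceneLoader::HandleSceneLoaded(bool bSuccess)
{
    if (!bSuccess)
    {
        // SavedRoomJson holds a string captured earlier via SaveSceneToJsonString.
        UMRUKSubsystem* Subsystem = GetGameInstance()->GetSubsystem<UMRUKSubsystem>();
        Subsystem->LoadSceneFromJsonString(SavedRoomJson);
    }
}
```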
Once the Scene is loaded, an instance of AMRUKRoom will be spawned for each room in your scene; usually there is only one room, but there can be more. You can access the rooms by calling GetCurrentRoom or referencing the Rooms property on MRUKSubsystem. For each anchor in your room an instance of AMRUKAnchor will be spawned as a child of the room; these are accessible via AllAnchors, WallAnchors, FloorAnchors, and CeilingAnchors. At this point nothing will be visible in your world yet, but you can query the actors for information about the anchors, including their labels, positions, plane/volume bounds, and so on. In addition to this basic data, there are a number of methods that will help you reason about the scene and populate the room with renderable components. Below is a high-level overview; for more details about the API, refer to the documentation in the source code or the tooltips on the Blueprint nodes:
Raycast/RaycastAll(): Raycast against anchors in the scene. This is implemented independently of the Unreal Engine helpers, so it does not interfere with your physics setup.
GetBestPoseFromRaycast(): Return a suggested pose from a raycast. Useful for placing AR content with a controller.
IsPositionInRoom(): Check if a position is within the room.
IsPositionInSceneVolume(): Check if a position is within any volumes in the room.
TryGetClosestSurfacePosition(): Get the position on a surface that is closest to the given position.
TryGetClosestSeatPose(): Finds the closest seat given a ray.
GetLargestSurface(): Return the largest surface for a given label.
GetKeyWall(): Return the longest wall in the room that has no other walls behind it.
GetAnchorsByLabel(): Get a list of anchors by their label.
RoomBounds: World-aligned bounding box of the room, useful for coarse, room-level queries.
ParentAnchor/ChildAnchors: Uses heuristics to determine the parent/child relationships between anchors (for example, a door will have a wall as its parent, and a volume stacked on another volume will be that volume's child).
GenerateRandomPositionInRoom(): Generate a uniform random position within the room.
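In C++, a few of these queries can be combined as in the sketch below. The exact GenerateRandomPositionInRoom signature and parameter order are assumptions; check the source documentation for the real shape.

```cpp
// Sketch: query the current room once the scene has loaded.
UMRUKSubsystem* Subsystem = GetGameInstance()->GetSubsystem<UMRUKSubsystem>();
if (AMRUKRoom* Room = Subsystem->GetCurrentRoom())
{
    // Is the player standing inside the captured room?
    const bool bInside = Room->IsPositionInRoom(PlayerPawn->GetActorLocation());

    // Collect all anchors labeled TABLE, e.g. to use as play surfaces.
    TArray<AMRUKAnchor*> Tables = Room->GetAnchorsByLabel(TEXT("TABLE"));

    // Pick a random spawn point at least 25 cm from any surface, avoiding
    // the inside of scene volumes (parameter order is an assumption).
    FVector SpawnPos;
    if (Room->GenerateRandomPositionInRoom(SpawnPos, 25.0f, /*bAvoidVolumes=*/true))
    {
        // Spawn your content at SpawnPos here.
    }
}
```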
Please also refer to the MRUK Samples for a demonstration of all the features that are available.
High-Fidelity Scene
HiFi scene provides a more detailed version of the room layout that allows features such as multiple floors, columns, and slanted ceilings to be queried by the developer.
To use HiFi Scene in MRUK, select the scene model when loading the scene:
When using Load Scene From Device Async Blueprint node, set the Scene Model parameter to V2 or V2_Fallback_V1
When using Load Scene From Json Async Blueprint node, set the Scene Model parameter to V2 or V2_Fallback_V1
When calling LoadSceneFromDevice() or LoadSceneFromJsonString() from C++, pass EMRUKSceneModel::V2 or EMRUKSceneModel::V2_Fallback_V1
The scene model options are:
V1: Standard scene with single floor and ceiling
V2: High-Fidelity Scene with multiple floors, ceilings, and enhanced labels
V2_Fallback_V1: Try to load V2, but fall back to V1 if unavailable
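From C++, selecting the scene model looks like the following sketch; the enum values come from EMRUKSceneModel as listed above, while the parameter position is an assumption.

```cpp
// Sketch: request the High-Fidelity scene model, falling back to V1
// if V2 data is unavailable on the device.
UMRUKSubsystem* Subsystem = GetGameInstance()->GetSubsystem<UMRUKSubsystem>();
Subsystem->LoadSceneFromDevice(EMRUKSceneModel::V2_Fallback_V1);
```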
New Features
Inner Wall Faces - A room can now have inner wall faces to represent complex architectural features such as columns, pillars, or other structural elements. These are represented by the new INNER_WALL_FACE semantic label.
Multiple Floors - A room can now have multiple floors if detected by the space setup. This feature allows for more accurate representation of multi-story buildings or rooms with varying floor levels. The Scene API will provide separate floor planes for each distinct floor level.
Multiple Ceilings - A room can now have multiple ceilings if detected by the space setup. This feature supports complex ceiling geometries, such as slanted or dropped ceilings. The Scene API will provide separate ceiling planes for each distinct ceiling.
World locking
World locking is a feature of MRUK that makes it easier to keep the virtual world in sync with the real world. Virtual content can be placed arbitrarily in the scene without being explicitly attached to anchors; under the hood, MRUK uses scene anchors to keep the camera in sync with the real world. It does this by making small, imperceptible adjustments to the camera rig's tracking-space transform so that nearby scene anchors appear in the right place.
Previously, with SceneActor, the recommended way to keep the real and virtual worlds in sync was to attach every piece of virtual content to an anchor. This meant that nothing could be considered static and everything had to cope with being moved by small amounts every frame, which can cause issues with networking, physics, rendering, and so on. When world locking is enabled, virtual content can remain static. The space close to the player stays in sync; the trade-off is that space further away may diverge from the physical world by a small amount.
World locking is enabled by default, but can be disabled by deselecting Enable World Lock in the project settings under Plugins > Meta XR > MR Utility Kit.
Tools
These actors and components are designed to be dropped in your scene and used without extra code.
AMRUKAnchorActorSpawner
Used to spawn actors (for example, Blueprint classes) at the location of the anchors and scaled to fit the size of the volumes/planes. This can be used to spawn virtual representations of the objects in your room. If you’re coming from the Scene Actor workflow, you can use this instead as an easier and more flexible way to instantiate objects at anchor locations.
The Spawn Groups property lets you define what to spawn based on the Semantic Label (for example, BED, CEILING, COUCH). For each label you can define an array of Actors; when an anchor with the given label is encountered, an actor is picked from the list. There are two Selection Modes to choose from:
Random: Will pick an actor at random from the list. You can specify an Anchor Random Spawn Seed to get a deterministically random selection. This is useful for multiplayer scenarios where you want the same pseudo-random selection to occur on all clients, or simply to ensure the same selection is made between sessions.
Closest Size: The actor picked will be the one which most closely matches the size of the scene volume. Size is defined as the cube root of width * height * depth.
If the Actors array is empty then you have the option to Fallback to Procedural both at the Label level and the global level. If Fallback to Procedural is enabled then a procedural mesh will be created instead. A material may be assigned to the procedural mesh via the Procedural Material property. If Fallback to Procedural is disabled then nothing will spawn for the given anchor.
For each entry in the Actors array you can specify:
Actor: The actor to spawn; this can be a Blueprint class.
Match Aspect Ratio: When this is enabled, the actor will be oriented so that its aspect ratio in the X/Y plane best matches that of the volume. Scene volumes don't have a forward direction, so this is useful, for example, for couches to make sure they don't get overly distorted. It is recommended to enable this for long, thin volumes and keep it disabled for objects with an aspect ratio close to 1:1. Only applies to volumes.
Calculate Facing Direction: When this is enabled, the actor will be rotated to face away from the closest wall. If Match Aspect Ratio is also enabled, that takes precedence and the rotation is constrained to a choice between two directions. Only applies to volumes.
Scaling Mode: This determines how the actor is scaled to fit the scene volume. By default the actor is stretched to fit the size of the plane/volume, but where that is not desirable the behavior can be customized here.
Stretch: Stretch each axis to exactly match the size of the Plane/Volume.
UniformScaling: Scale each axis by the same amount to maintain the correct aspect ratio.
UniformXYScale: Scale the X and Y axes uniformly but the Z scale can be different.
NoScaling: Don’t perform any scaling.
Fallback to Procedural: Falls back to generating a mesh for the anchor; for example, a plane for walls or a box for a couch. The generated meshes have texture coordinates as well. Anchors of type WALL_FACE are treated specially: if Fallback to Procedural is enabled for WALL_FACE, the texture coordinates are chosen so that they are continuous across all walls.
Find spawn position
The GenerateRandomPositionInRoom function on AMRUKRoom provides a convenient way to find spawn positions for Actors. It generates a uniformly distributed random position in the room; you can optionally specify how far away from the nearest surface it should be and whether it should avoid generating points inside scene anchor volumes.
Environment Raycasting
Raycasting with Raycast() or RaycastAll(), as described above, requires the user to go through space setup. If a more dynamic approach is needed, Environment Raycasting may be more applicable: it uses live depth data to perform raycasts, so no space setup is required.
To use environment raycasting, ensure the com.oculus.permission.USE_SCENE permission has been granted, and start the environment raycaster with UMRUKSubsystem::CreateEnvironmentRaycaster(). After the call succeeds, raycasts can be executed with UMRUKSubsystem::RaycastEnvironment(). When raycasting is no longer needed, it can be stopped with UMRUKSubsystem::DestroyEnvironmentRaycaster().
The environment raycaster takes a few frames to initialize. RaycastEnvironment() can be used right away, but it may return a status of Failure for a few frames. To check whether the raycaster is currently stopped, creating, or running, use UMRUKSubsystem::EnvironmentRaycasterStatus().
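Putting the lifecycle together as a C++ sketch; the RaycastEnvironment() parameters are left as a comment because their exact shape is not documented here, so consult the API documentation.

```cpp
// Sketch: environment raycasting lifecycle. Requires the
// com.oculus.permission.USE_SCENE permission to be granted.
UMRUKSubsystem* Subsystem = GetGameInstance()->GetSubsystem<UMRUKSubsystem>();

// Start the raycaster once; it takes a few frames to initialize.
Subsystem->CreateEnvironmentRaycaster();

// Later (e.g. per frame), raycast against live depth data. Early calls
// may report Failure while the raycaster is still initializing; poll
// EnvironmentRaycasterStatus() to know when it is running.
// Subsystem->RaycastEnvironment(Origin, Direction, ...);

// When raycasting is no longer needed:
Subsystem->DestroyEnvironmentRaycaster();
```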
UMRUKDebugComponent
The Debug Component is a convenient way to visualize anchors in your app. It can be configured so that pointing and clicking on an anchor displays its label(s), scale, collision point, and a suggested pose for placing MR content.
To use it, add the component to your Pawn class and hook up the inputs in Blueprint. Call ShowAnchorAtRayHit or ShowAnchorSpaceAtRayHit, passing in an Origin and Direction, which can be obtained from one of the motion controllers, for example when a button is pressed. When the button is released, call HideAnchor or HideAnchorSpace. An example of this setup can be found in the BP_VRDemoPawn event graph Blueprint in the sample app.
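The same setup can be done in C++; in this sketch the pawn members (MotionController, DebugComponent) are hypothetical names for your motion controller component and the attached UMRUKDebugComponent.

```cpp
// Sketch: toggle anchor debug visualization from a controller button.
// MotionController and DebugComponent are hypothetical pawn members.
void AMyPawn::OnDebugButtonPressed()
{
    const FVector Origin = MotionController->GetComponentLocation();
    const FVector Direction = MotionController->GetForwardVector();
    DebugComponent->ShowAnchorAtRayHit(Origin, Direction);
}

void AMyPawn::OnDebugButtonReleased()
{
    DebugComponent->HideAnchor();
}
```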
AMRUKGuardianSpawner
Creates a Guardian-like protective mesh that renders as a player approaches it, eventually showing Passthrough directly. This is helpful for safety if your scene is intended to be fully virtual instead of using Passthrough. The mesh is created using the GenerateProceduralAnchorMesh function on the anchors. Refer to Guardian.umap for an example of it being used.
UMRUKBlobShadowComponent
Blob shadows are a common, more performant alternative to real-time shadows. They are simple blots of color that only take the general shape of the actor into account. The component can be attached to any actor to create a blob shadow below it. This works with Passthrough as well if you spawn transparent meshes in place of the scene anchors. For an example, take a look at PTRL.umap.
AMRUKLightDispatcher
The data for the point lights in the scene is sent to MPC_Highlights via MRUKLightDispatcher. Each point light is represented by three vectors: one for the light's position, one for its parameters (AttenuationRadius, LightBrightness, LightFalloffExponent, and UseInverseSquaredFalloff), and one for its color. Finally, the number of lights is sent in the scalar parameter TotalLights.
AMRUKDistanceMapGenerator
Generates a distance map. A distance map is a texture that lets you obtain distances to scene objects, for example in materials for various visual effects. It works by defining two zones in the room: Zone A (inside the room's free space) and Zone B (outside the room or inside scene volumes). The generator creates a texture where each pixel's red and green components hold the coordinates of the closest pixel when going from Zone A to Zone B; likewise, the blue and alpha components hold the coordinates of the closest pixel when going from Zone B to Zone A. For convenience, a material function called MF_DistanceMap is provided; it takes the distance map as input and calculates, for each pixel in the final image, the distance to the opposite zone. Pixels in Zone A get positive values, whereas pixels in Zone B get negative values. Take a look at the material M_Rainbow, used in DistanceMapSample, to understand how the distance map can be used in materials.
AMRUKDestructibleMeshSpawner
The Destructible Global Mesh Spawner automatically generates destructible global meshes when new rooms are created. A destructible global mesh is a version of the global mesh that can be broken apart at runtime. The core functionality is managed by the UDestructibleMeshComponent. Upon initialization, this component performs a one-time preprocessing step that segments the global mesh into smaller chunks. These chunks can then be manipulated during gameplay, allowing dynamic interactions such as removal through raycasts, which simulates the global mesh breaking apart. To enhance the visual quality of the destruction, such as when mesh chunks are removed, a particle system can be used to create realistic effects. Additionally, the system lets you define specific areas of the mesh that should remain non-destructible; this is useful, for example, to keep the floor indestructible.
Passthrough Camera Access
Overview
Building on top of the Android Camera2 API, this release introduces the Passthrough Camera Access API, which provides access to the forward-facing cameras on Quest 3 and Quest 3S to support computer vision and machine learning. We anticipate that developers will use the Passthrough Camera API to add application-specific computer-vision capabilities that extend the understanding of the user's environment and actions beyond what the Quest Scene API provides.
Either the android.permission.CAMERA or the horizonos.permission.HEADSET_CAMERA permission is required. The CAMERA permission gives access to both the passthrough and avatar cameras, while the HEADSET_CAMERA permission gives access only to the passthrough camera.
The Passthrough feature must also be enabled to access the Passthrough Camera API.
Usage
To create a PassthroughCameraAccessTexture, right-click in the Content Browser, navigate to Miscellaneous > MRUtilityKit, and click PassthroughCameraAccessTexture. You can create up to two PassthroughCameraAccessTexture assets: one for the left eye and one for the right eye.
Additionally, you need to create a PassthroughCameraAccess asset for each eye, where you configure the resolution and framerate and specify the eye. Each PassthroughCameraAccess asset then needs to be assigned to its PassthroughCameraAccessTexture: open the PassthroughCameraAccessTexture and assign it there.
To start the camera feed, use UMRUKPassthroughCameraAccess::Play() on the PassthroughCameraAccessTexture. There is also a Blueprint node available for this function.
You can query available resolutions using GetSupportedResolutions() on the UMRUKPassthroughCameraAccessSubsystem.
The UMRUKPassthroughCameraAccess class provides several helper functions to convert points between the Passthrough Camera texture space and world space. Refer to the API documentation for ViewportPointToWorldSpaceRay() and WorldToViewportPoint(). Additional functions are available, so please consult the API documentation for more details.
There is also support for getting the timestamp corresponding to the latest camera image with UMRUKPassthroughCameraAccess::GetTimestamp().
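A C++ sketch of the flow above; the out-parameter shapes of ViewportPointToWorldSpaceRay() are assumptions, so verify them against the API documentation.

```cpp
// Sketch: start the passthrough camera feed and project a texture-space
// point into world space. PassthroughCameraAccess is the asset created
// and assigned in the editor as described above.
void AMyCameraActor::StartFeed()
{
    PassthroughCameraAccess->Play();
}

void AMyCameraActor::ProjectDetection(const FVector2D& ViewportPoint)
{
    // Out-parameter signature is an assumption.
    FVector RayOrigin, RayDirection;
    PassthroughCameraAccess->ViewportPointToWorldSpaceRay(ViewportPoint, RayOrigin, RayDirection);
    // Intersect the ray with the scene (e.g. via MRUK Raycast) to anchor
    // a computer-vision detection in the world.
}
```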