Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to ensure it offers users a comfortable and safe experience. Refer to the Health and safety guidelines and Design guidelines before designing and developing your app using Scene.
Overview
Mixed Reality Utility Kit provides a set of utilities and tools on top of Scene API (not to be confused with Spatial SDK Scene) to perform common operations when building spatial apps. This makes it easier to program against the physical world, and allows you to focus on what makes your app unique.
How does Scene work?
Scene model
Scene model is a comprehensive, current representation of the physical world that can be indexed and queried. It provides a geometric and semantic representation of the user’s space, allowing you to build mixed reality experiences. You can think of it as a scene graph for the physical world.
The main use cases include physics, static occlusion, and navigation in the physical world. For example, you can attach a virtual screen to the user’s wall or have a virtual character navigate the floor with realistic occlusion.
Scene model is managed and persisted by the Meta Quest operating system, and all apps can access it. You can use the entire scene model or query the model for specific elements.
As the scene model contains information about the user’s space, you must request the app-specific runtime permission for spatial data in order to access the data. See Spatial Data Permission for more information.
Space setup
Space setup is a system flow that generates a scene model. Users can navigate to Settings > Environment Setup > Space Setup to capture their scene. The system will assist the user in capturing their environment. It also provides a manual capture experience as a fallback. In your app, you can query the system to check whether a scene model of the user’s space exists. You can also invoke space setup as needed. See Requesting Space Setup for more information.
You cannot perform space setup over Link. You must perform space setup on-device prior to loading the scene model over Link.
Getting started
Prerequisites
Include the meta-spatial-sdk-mruk.aar and org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3 dependencies in your project as described in the setup tutorial. If you are using one of the sample projects, this step is already completed for you.
Adding the MRUK feature
MR Utility Kit is provided as a SpatialFeature. In order to enable it in your app, add MRUKFeature to the list returned by the registerFeatures function.
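As a sketch, registering the feature might look like the following. The `MRUKFeature(context, systemManager)` construction and the `VRFeature` shown alongside it follow the sample projects; adapt them to your own activity:

```kotlin
// Sketch: registering MRUKFeature in an AppSystemActivity.
// The MRUKFeature(this, systemManager) constructor and the VRFeature
// shown here follow the MRUK sample projects; adapt to your own app.
class MyActivity : AppSystemActivity() {

  private val mrukFeature by lazy { MRUKFeature(this, systemManager) }

  override fun registerFeatures(): List<SpatialFeature> {
    return listOf(VRFeature(this), mrukFeature)
  }
}
```

Keeping the feature in a property (as above) lets you call into it later, for example to load scene data once permissions are granted.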
MR Utility Kit requires the USE_ANCHOR_API and USE_SCENE permissions to be enabled to access scene data from the device. Open projects/android/AndroidManifest.xml and add these permissions:
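Assuming the standard Meta Quest permission names (the short names above map to the `com.oculus.permission` namespace, as the runtime-permission example below also shows), the manifest entries look like this:

```xml
<!-- Required by MR Utility Kit to access scene data from the device -->
<uses-permission android:name="com.oculus.permission.USE_ANCHOR_API" />
<uses-permission android:name="com.oculus.permission.USE_SCENE" />
```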
USE_SCENE is a runtime permission. In addition to declaring it in the AndroidManifest.xml you must also add code to your app to prompt the user for permission.
Here is an example:
companion object {
  const val PERMISSION_USE_SCENE = "com.oculus.permission.USE_SCENE"
  const val REQUEST_CODE_PERMISSION_USE_SCENE = 1
}

override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(savedInstanceState)
  if (checkSelfPermission(PERMISSION_USE_SCENE) != PackageManager.PERMISSION_GRANTED) {
    requestPermissions(arrayOf(PERMISSION_USE_SCENE), REQUEST_CODE_PERMISSION_USE_SCENE)
  }
}

override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
  if (requestCode == REQUEST_CODE_PERMISSION_USE_SCENE &&
      permissions.size == 1 &&
      permissions[0] == PERMISSION_USE_SCENE) {
    // grantResults is empty if the permission request was cancelled
    if (grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
      // Safe to access scene data
    } else {
      // Handle permission denied
    }
  }
}
Wait until permission is granted before attempting to load scene data. As this is an asynchronous operation, you must wait until the onRequestPermissionsResult callback is received. Handle cases where the user denies permission by implementing a suitable fallback.
Loading scene data
Once you have permission to access scene data, you can call loadSceneFromDevice from the MRUKFeature class:
val future = mrukFeature.loadSceneFromDevice()
future.whenComplete { result: MRUKLoadDeviceResult, _ ->
  Log.i(TAG, "Load scene from device result: ${result}")
}
loadSceneFromDevice is asynchronous and returns a CompletableFuture. You must wait until the operation has completed before attempting to access the data; use the whenComplete method to do this.
High-Fidelity Scene
MRUK supports the High-Fidelity Scene API. HiFi scene provides a more detailed version of the room layout, allowing features such as multiple floors, columns, and slanted ceilings to be queried by the developer.
To load a High-Fidelity Scene, call mrukFeature.loadSceneFromDevice(true, false, SceneModel.V2). To fall back to the simple scene when a HiFi scene is not available, use mrukFeature.loadSceneFromDevice(true, false, SceneModel.V2_FALLBACK_V1).
New Features
Inner Wall Faces - A room can now have inner wall faces to represent complex architectural features such as columns, pillars, or other structural elements. These are represented by the new INNER_WALL_FACE semantic label.
Multiple Floors - A room can now have multiple floors if detected by the space setup. This feature allows for more accurate representation of multi-story buildings or rooms with varying floor levels. The Scene API will provide separate floor planes for each distinct floor level, enabling developers to create more immersive and realistic experiences.
Multiple Ceilings - A room can now have multiple ceilings if detected by the space setup. This feature supports complex ceiling geometries, such as slanted or dropped ceilings, and allows developers to create more detailed and realistic environments. The Scene API will provide separate ceiling planes for each distinct ceiling, enabling developers to create more immersive and interactive experiences.
JSON data
You can load scene data from a JSON string, in addition to loading it from your device. This can be useful for testing how your app will behave in a variety of different rooms without being physically present.
Here is an example:
val file = applicationContext.assets.open("scene.json")
val text = file.bufferedReader().use { it.readText() }
mrukFeature.loadSceneFromJsonString(text)
Unlike loadSceneFromDevice, this is a synchronous operation. You can access the data immediately after calling it.
Accessing scene data
Once you have loaded the scene data (either from device or from JSON), you can access it through the rooms property on the MRUKFeature class. This provides a list of MRUKRoom instances. Each room has an anchors property, which is a list of entities. Each entity has a Transform and MRUKAnchor component associated with it. It optionally has a MRUKPlane and/or MRUKVolume component.
Here is an example of how to iterate over the data and print it out:
for (room in mrukFeature.rooms) {
  Log.d("MRUK", "Room ${room.anchor}")
  for (anchorEntity in room.anchors) {
    val anchor = anchorEntity.getComponent<MRUKAnchor>()
    Log.d("MRUK", "Anchor: ${anchor.uuid}, labels: ${anchor.labels}")
    val transform = anchorEntity.getComponent<Transform>()
    Log.d("MRUK", "Transform: ${transform.transform}")
    val plane = anchorEntity.tryGetComponent<MRUKPlane>()
    if (plane != null) {
      Log.d("MRUK", "Plane min: ${plane.min}, max: ${plane.max}, boundary: ${plane.boundary}")
    }
    val volume = anchorEntity.tryGetComponent<MRUKVolume>()
    if (volume != null) {
      Log.d("MRUK", "Volume min: ${volume.min}, max: ${volume.max}")
    }
  }
}
You can use Query to find entities without going through the MRUKFeature class. This is described in the ECS documentation.
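For instance, a system could gather all plane anchors with a query. This sketch assumes the `Query.where { has(...) }` / `eval()` pattern from the ECS documentation, run inside a `SystemBase.execute()` override:

```kotlin
// Sketch: finding entities carrying MRUK components via the ECS Query API.
// Assumes the Query.where { has(...) } / eval() pattern described in the
// ECS documentation; run inside a system's execute() override.
class PlaneLoggingSystem : SystemBase() {
  override fun execute() {
    val planes = Query.where { has(MRUKAnchor.id, MRUKPlane.id) }
    for (entity in planes.eval()) {
      val plane = entity.getComponent<MRUKPlane>()
      Log.d("MRUK", "Plane with ${plane.boundary.size} boundary points")
    }
  }
}
```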
Spawn glTF meshes
AnchorMeshSpawner provides a convenient way to spawn glTF meshes that are scaled and positioned to match the bounds of a MRUKVolume or MRUKPlane. This allows you to create virtual representations of your furniture and have them appear in the same location as your physical furniture.
Here is an example:
val meshSpawner: AnchorMeshSpawner =
    AnchorMeshSpawner(
        mrukFeature,
        mapOf(
            MRUKLabel.TABLE to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Table.glb")),
            MRUKLabel.COUCH to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Couch.glb")),
            MRUKLabel.WINDOW_FRAME to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Window.glb")),
            MRUKLabel.DOOR_FRAME to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Door.glb")),
            MRUKLabel.OTHER to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/BoxCardBoard.glb")),
            MRUKLabel.STORAGE to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Storage.glb")),
            MRUKLabel.BED to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/TwinBed.glb")),
            MRUKLabel.SCREEN to
                AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/ComputerScreen.glb")),
            MRUKLabel.LAMP to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Lamp.glb")),
            MRUKLabel.PLANT to
                AnchorMeshSpawner.AnchorMeshGroup(
                    listOf(
                        "Furniture/Plant1.glb",
                        "Furniture/Plant2.glb",
                        "Furniture/Plant3.glb",
                        "Furniture/Plant4.glb")),
            MRUKLabel.WALL_ART to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/WallArt.glb")),
        ))
meshSpawner.spawnMeshes(room)
Create a procedural mesh
AnchorProceduralMesh provides a way to create a procedural mesh that matches the 2D plane boundary of an anchor. This is useful for creating meshes for the floor, ceiling, and walls. You can supply a custom material.
Here is an example:
val floorMaterial =
    Material().apply { baseTextureAndroidResourceId = R.drawable.carpet_texture }
val wallMaterial =
    Material().apply { baseTextureAndroidResourceId = R.drawable.wall_texture }
val procMeshSpawner: AnchorProceduralMesh =
    AnchorProceduralMesh(
        mrukFeature,
        mapOf(
            MRUKLabel.FLOOR to AnchorProceduralMeshConfig(floorMaterial, false),
            MRUKLabel.CEILING to AnchorProceduralMeshConfig(wallMaterial, false),
            MRUKLabel.WALL_FACE to AnchorProceduralMeshConfig(wallMaterial, false),
        ))
procMeshSpawner.spawnMeshes(room)
Raycasting
The MRUKFeature allows raycasting against a room. You can query for hits starting from a specified origin and direction, typically a head or controller pose. The raycastRoom function returns the first hit encountered within the room, or null if no hit is found. raycastRoomAll returns all hits as a collection. Each result is of type MRUKHit and provides the distance, position, and normal of the hit.
Here’s an example of querying for hits from the right hand to the current room:
val hits =
    mrukFeature.raycastRoomAll(
        currentRoom.anchor.uuid,
        rightHandPose.t,
        rightHandDirection,
        maxDistance,
        SurfaceType.ALL)
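When only the nearest hit matters, raycastRoom can be used instead. In this sketch the MRUKHit field names (hitDistance, hitPosition, hitNormal) are assumptions based on the description above; check the API reference:

```kotlin
// Sketch: querying only the first hit in the current room.
// The MRUKHit field names below are assumptions; consult the API reference.
val hit =
    mrukFeature.raycastRoom(
        currentRoom.anchor.uuid,
        rightHandPose.t,
        rightHandDirection,
        maxDistance,
        SurfaceType.ALL)
if (hit != null) {
  Log.d("MRUK", "Hit at ${hit.hitPosition}, normal ${hit.hitNormal}, distance ${hit.hitDistance}")
}
```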
Environment raycasting
Raycasting, as described above, requires the user to go through space setup. If a more dynamic approach to raycasting is needed, then Environment Raycasting might be more applicable. It uses live Depth data to perform raycasts, so no space setup is needed.
To use environment raycasting, ensure you are granted the com.oculus.permission.USE_SCENE permission, and start the environment raycaster with:
val result = mrukFeature.startEnvironmentRaycaster()
if (result == MRUKStartEnvironmentRaycasterResult.SUCCESS) {
  Log.i(TAG, "Environment raycaster started successfully")
} else {
  Log.e(TAG, "Environment raycaster failed to start: $result")
}
After this call succeeds, raycasts can be executed, for example, with:
val depthRaycastResult = mrukFeature.raycastEnvironment(rightHandPose.t, rightHandDirection)
if (depthRaycastResult.result == MRUKEnvironmentRaycastHitResult.SUCCESS) {
  val hitPoint = depthRaycastResult.point
  val hitOrientation = Quaternion.lookRotation(depthRaycastResult.normal.normalize())
}
```
The environment raycaster takes a few frames to initialize. The method raycastEnvironment can be used right away, but it might return a status of NOT_READY for a few frames. To understand whether the raycaster is currently stopped, creating, or running, use mrukFeature.getEnvironmentRaycasterStatus(). If raycasting is no longer needed, mrukFeature.stopEnvironmentRaycaster() can be called to free up resources.
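As a sketch, a per-frame update might defer raycasts until the raycaster reports it is running. The status enum name and its RUNNING value are assumptions based on the stopped/creating/running states named above, and placeReticle is a hypothetical helper:

```kotlin
// Sketch: only raycast once the environment raycaster is running.
// The MRUKEnvironmentRaycasterStatus name and RUNNING value are assumptions
// based on the stopped/creating/running states described above.
if (mrukFeature.getEnvironmentRaycasterStatus() == MRUKEnvironmentRaycasterStatus.RUNNING) {
  val depthRaycastResult = mrukFeature.raycastEnvironment(rightHandPose.t, rightHandDirection)
  if (depthRaycastResult.result == MRUKEnvironmentRaycastHitResult.SUCCESS) {
    placeReticle(depthRaycastResult.point, depthRaycastResult.normal) // hypothetical helper
  }
}

// When raycasting is no longer needed, free resources:
mrukFeature.stopEnvironmentRaycaster()
```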
Trackables
A trackable is a physical object (for example, a keyboard) that can be detected and tracked at runtime.
Currently, the only supported trackables are keyboards and QR codes.
MRUK simplifies the runtime detection of trackables by automatically instantiating entities for them. To enable detection, call configureTrackers with the requested trackables.
In the case of keyboard tracking, MRUKFeature creates an Entity with TrackedKeyboard, Transform, and MRUKVolume components; the MRUKVolume component contains the bounding box of the keyboard. Note that for keyboard tracking to work, the user must enable ‘Show my keyboard’ under Settings > Devices. For QR codes, MRUKFeature creates an Entity with TrackedQrCode, Transform, and MRUKPlane components; the TrackedQrCode component contains the payload of the QR code.
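A minimal sketch of enabling detection, assuming configureTrackers accepts a set of tracker types (the MRUKTrackableType name and its values are assumptions; check the API reference):

```kotlin
// Sketch: enabling keyboard and QR code tracking.
// The set-of-enum parameter shape and the MRUKTrackableType names are
// assumptions; consult the MRUK API reference for the exact signature.
mrukFeature.configureTrackers(
    setOf(MRUKTrackableType.KEYBOARD, MRUKTrackableType.QR_CODE))
```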