Develop your first immersive app
Find out how you can build immersive apps for visionOS using Xcode and Reality Composer Pro. We'll show you how to get started with a new visionOS project, use Xcode Previews for your SwiftUI development, and take advantage of RealityKit and RealityView to render 3D content.
Chapters
- 0:00 - Introduction
- 1:06 - Create an Xcode project
- 8:57 - Simulator
- 12:12 - Xcode Previews
- 13:34 - Reality Composer Pro
- 20:18 - Create an immersive scene
- 25:28 - Target gestures to entities
- 30:16 - Wrap-up
Resources
Related Videos
WWDC23
- Build spatial experiences with RealityKit
- Get started with building apps for spatial computing
- Go beyond the window with SwiftUI
- Meet ARKit for spatial computing
- Meet Reality Composer Pro
- Meet UIKit for spatial computing
- Take SwiftUI to the next dimension
- Work with Reality Composer Pro content in Xcode
-
♪ Mellow instrumental hip-hop ♪ ♪ Hi, my name is Peter, and I work on the RealityKit Tools team at Apple. Today, we'll look at how you can get started developing your first immersive app. Spatial computing offers entirely new ways of presenting your content, and integrating deeper levels of immersion in your apps. While the platform itself is new, building apps for it uses workflows that may already be familiar to you. In this session, we will start by creating a new app project in Xcode. We will see how the Simulator allows you to experience your app in a simulated scene, and how you can use Xcode Previews for quick iteration. We will introduce Reality Composer Pro, a new tool that helps you prepare and preview spatial content for your apps. Finally, we'll show how your app can create an immersive scene, and target SwiftUI gestures to RealityKit entities. Millions of developers like you use Xcode every day to create, preview, debug, profile, and prepare apps for distribution. Xcode is the best place for you to create your first app. Let's go through the Xcode project creation process and see what's new for this platform. When we create a new project in Xcode, we are presented with the new project assistant. It organizes project templates by platform and project type. The app project template is available in the Application section under the Platform tab. Note that the new project assistant may ask you to download platform support if it isn't already installed. The new project assistant presents us with several options, two of which are new for this platform. Let's take a closer look at each of these new options. The first new option, Initial Scene, allows us to specify the type of the initial scene that's automatically included in the app. The new project assistant always creates a starting point with a single scene of the type you choose here. As a developer, you can add additional scenes later. These can be of the same type as your initial scene, or they can be a different scene type altogether. The template offers two initial scenes, window and volume. Let's take a look at the differences between these. Windows are designed to present content that is primarily two-dimensional. They can be resized in their planar dimensions, but their depth is fixed. Windows will generally be shown alongside other running apps. You can learn more specifics about the window scene type, along with additions and changes to SwiftUI, in the session, "Meet SwiftUI for spatial computing." Volumes are designed to present primarily 3D content. Their sizes in all three dimensions are controlled by the app itself, but cannot be adjusted by the person using the app. Like windows, volumes will generally be shown alongside other running apps. The session "Take SwiftUI to the next dimension" provides more information about the volume scene type. The second new option, Immersive Space, gives you the opportunity to add a starting point for immersive content to your app. When you add an Immersive Space scene type to your app, you can use it to present unbounded content anywhere on the infinite canvas. When your app activates this scene type, it moves from the Shared Space to a Full Space. In a Full Space, other running apps are hidden in order to avoid distraction. Your app can also access dedicated rendering resources, and it can request permission to enable ARKit features like hand tracking. 
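As a rough sketch of the two initial scene types described above (this is not code from the template; the scene identifiers and size are only illustrative), an app might declare a plain window alongside a volume like this:

import SwiftUI

@main
struct SceneTypesSketchApp: App {
    var body: some Scene {
        // A window scene: primarily 2D content, resizable in its planar dimensions.
        WindowGroup(id: "MainWindow") {
            ContentView()
        }

        // A volume scene: primarily 3D content; the app chooses its size in all three dimensions.
        WindowGroup(id: "Volume") {
            ContentView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}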
If you wish to create an immersive experience for your app, SwiftUI offers three different styles for your scene: mixed immersion, progressive immersion, and full immersion. The mixed immersion style lets your app place unbounded virtual content in a Full Space while still keeping people connected to their surroundings through passthrough. The progressive immersion style opens a portal to offer a more immersive experience that doesn't completely remove people from their surroundings. When a portal opens, people get a roughly 180-degree view into your immersive content, and they can use the Digital Crown to adjust the size of the portal. The full immersion style hides passthrough entirely and surrounds people with your app's environment, transporting them into a new place. We'll talk more about Immersive Spaces later in this session. For a deep dive, we invite you to watch the session, "Go beyond the window with SwiftUI". By default, no Immersive Space is added to your app. This is the behavior when you select the option None. However, if you select one of the Immersive Space options, the template will automatically add a second SwiftUI scene with the Immersive Space style you've selected. By default, it will also provide a SwiftUI button in the windowed scene so that someone can open the immersive content. In general, we recommend apps always start in a window on this platform, and provide clear entry and exit controls so that people can decide when to be more immersed in your content. Avoid moving people into a more immersive experience without their knowledge. Let's configure our project for this session. We will start with an initial volume and no Immersive Space. We finish creating our project as usual, giving it a name, and telling Xcode where it should be saved. Once created, the new project opens. On the left side, we see Xcode's Project Navigator. The first file is MyFirstImmersiveApp.swift, which declares a WindowGroup for the app that presents the initial volume. WindowGroup is the same construct that you've seen on iOS that specifies the top level SwiftUI views that your app presents. The second file is ContentView.swift, which is the view that is shown in this initial volume. The project opens with ContentView.swift in the main editor. Xcode also shows us a preview of ContentView, which loads the contents of a RealityKit content package that was automatically included with the project. Most of the code in the new project is in ContentView. ContentView uses several new platform-specific features, so let's take a closer look. ContentView is the name of the SwiftUI view presented by the volume. It defines a single SwiftUI State property called "enlarge" that's used for a simple effect. As a SwiftUI view, our content is provided by the body property. The body consists of two views nested in a VStack. The VStack makes the nested views stack vertically. The first nested view is a RealityView. RealityView is new for this platform, and we'll come back to it in a moment. The second nested view is a standard SwiftUI Toggle view, embedded in another VStack. The Toggle view toggles the value of the enlarge property. The VStack provides the glassBackgroundEffect to ensure that buttons are legible and easy to interact with. If you've worked with SwiftUI, there's a good chance you've already seen the Toggle view. Most of the SwiftUI controls that are already supported on other platforms will work as expected. In a moment, we'll see how to use a gesture to toggle the enlarge property. 
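To make the three immersion styles concrete, here is a minimal sketch (not the template code; the space ID, state property, and ImmersiveView are placeholders) of how an ImmersiveSpace can declare the styles it supports and switch between them:

import SwiftUI

@main
struct ImmersionStylesSketchApp: App {
    // The active style: .mixed keeps passthrough, .progressive opens a portal, .full hides passthrough.
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
        // Allow all three styles; the binding selects which one is active.
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}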
But first, let's take a closer look at RealityView. RealityView allows you to place RealityKit content into a SwiftUI view hierarchy. The RealityView initializer used in ContentView takes two closures as parameters: a make closure and an update closure. The make closure adds the initial RealityKit content to the view. It tries to load the contents of the RealityKit content package. And if it succeeds, it adds the loaded content to the view using content.add. We could also generate the initial content procedurally, or use a combination of procedural and loaded content. The update closure is optional, but if provided, it will be called whenever the SwiftUI state changes. It starts by getting the first entity from content.entities, since that's what was added in the make closure. It then chooses a uniformScale factor based on the value of the enlarge property in the SwiftUI state and applies this scale to the entity. It is important to note the RealityView update closure is not a rendering update loop and is not called on every frame. Instead, the update closure is only called when the SwiftUI state changes. Finally, the RealityView has a gesture attached to it. When you tap on the RealityKit content, it toggles the value of the enlarge property, producing the same effect as tapping the Toggle view that we previously covered. To learn more about RealityView and gestures, you can watch "Build spatial experiences with RealityKit." Now that we've taken a look at ContentView, let's introduce the Simulator and show how to navigate and interact with apps running in a simulated scene. We'll then see how our app looks in the Simulator. The Simulator presents itself in a window that will be familiar to you if you've used it for other platforms. When it first launches, you are presented with the application launcher. The Simulator mimics what someone would see wearing a device. By default, the pointer controls what you're looking at. Clicking the mouse or trackpad simulates tap, and holding the click simulates pinch. A big part of spatial computing is being able to look and move around your surroundings. The Simulator offers additional controls to do exactly that. In the bottom-right corner of the Simulator window, there are several buttons for controlling the simulated device. Clicking and holding while moving the mouse or trackpad on these allows you to look around... ...pan... ...orbit... ...and move forwards and backwards. Clicking and holding these controls gives you the ability to quickly switch between interacting with content and looking and moving around. You can also click on these buttons to switch into a given control mode so that you don't need to keep holding the mouse button. For example, if I click on the pan button, then clicking and dragging the viewport pans the view. Clicking the leftmost control switches back to controlling look and tap. The Simulator comes with several simulated scenes that you can use to see your app running in different rooms and lighting conditions. You can switch between them through the simulated scenes menu on the toolbar.
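The make closure doesn't have to load a package; as noted above, content can also be generated procedurally. Here is a minimal sketch (the view name is made up) that builds a simple sphere in code:

import SwiftUI
import RealityKit

struct ProceduralSphereView: View {
    var body: some View {
        RealityView { content in
            // Build the initial content in code instead of loading it from a content package.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .white, isMetallic: false)]
            )
            content.add(sphere)
        }
    }
}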
For more information on using the Simulator, please see the documentation on developer.apple.com. Now that we're familiar with the Simulator, let's take a look at our new app running there. As usual, we run the app from Xcode by clicking on Run in the Product menu. Once the app launches, we see the volume showing the contents of the RealityKit content package. Tapping the Enlarge RealityView Content button causes the content to enlarge, and tapping on it again causes it to return to its original size. We can also tap on the sphere to enlarge it because of the gesture on the RealityView.
The button's highlight changes when we tap the sphere. The tap gesture is updating the SwiftUI state, causing both the RealityView and the Toggle view to react to the state changes. Xcode Previews allows you to quickly focus and iterate on the look and behavior of your app's views. When you are editing a source file that contains a SwiftUI preview provider, the preview canvas will automatically open in Xcode. As with the Simulator, Xcode Previews are presented as a simulated device view. You can use the same controls to navigate the preview window as you used to navigate the Simulator. Let's use the controls to move a bit closer to the content. You can also change the simulated scene, as well as camera angles, using the controls in the bottom-right corner. We can make changes to the SwiftUI code and see the preview update in real time. Let's go ahead and make a change to the text of the toggle, changing it to "Change Size." Notice the preview updates as we change the text. Also, notice the button is still functional in Xcode Previews. We can use this to iterate on the contents of the RealityView closures as well. Xcode Previews has many more advanced features, including an object mode that allows you to discover content that extends beyond the bounds of your app, as well as custom camera angles. You can find out more about Xcode Previews in the developer documentation. We have created a new tool to help you work with RealityKit content packages. Reality Composer Pro is a great place for you to prepare and preview spatial content for your apps. Our app's ContentView uses RealityView to load its content from a RealityKit content package. The content package created by the template is called RealityKitContent and is located in the Packages group in the Xcode project. Here we see our project with RealityKitContent selected. RealityKit content packages are Swift packages containing RealityKit content. They are processed at build time to optimize your content for runtime use. If we click on the disclosure indicator for RealityKitContent, we see the contents of the content package. If we click on Package, with the cube icon, we see a preview of one of the scenes in the content package. To edit the content package, click the Open in Reality Composer Pro button in the top right. This will launch Reality Composer Pro. When Reality Composer Pro launches, we see the 3D content loaded by the ContentView. While Xcode's main focus is on editing source files and app resources, Reality Composer Pro puts 3D content front and center. Its primary view is the 3D viewport, which can be navigated using controls similar to those in the Simulator. Reality Composer Pro organizes its contents into scenes. The content package that was included in the project template starts with a single scene. In order to enhance our project, let's create a new scene that will contain content for an Immersive Space. From Reality Composer Pro's File menu, select New > Scene. Give it a name -- in this case, we'll simply call it ImmersiveScene -- then click Save. After we create the scene, it's automatically opened, and we see a thumbnail of the empty scene in the Project Browser at the bottom of the window. We can switch between scenes by clicking on their names at the top of the window, or by double-clicking on them in the Project Browser. We are now ready to add immersive content to the new scene. When we configured the Xcode project, we mentioned how you can use SwiftUI's ImmersiveSpace to present unbounded content anywhere around you.
There are two more key details to understand about this scene type. First, unlike the window and volume scene types, ImmersiveSpace uses the inferred position of your feet as the origin of the content. In this coordinate system, the positive x-axis is to your right, the positive y-axis is up, and the negative z-axis is in front of you. Second, when your apps run in a full space, they can request access to additional data, such as the exact position and orientation of your hands. Keep in mind that some of this data is privacy-sensitive. If your app requests privacy-sensitive data, the person using the app will be prompted to approve this request. This is not available to apps in the shared space. For more information on additional data available and privacy considerations for apps presenting an Immersive Space, please refer to the session "Meet ARKit for spatial computing." Now that we know more about how to create an immersive experience, let's assemble some content that will work well in an ImmersiveSpace. I have a USDZ cloud model that we'll use to create some content that's appropriate for an immersive experience. To add a USDZ model to a Reality Composer Pro scene, open the File menu and click Import. Then choose the file. Notice that the USDZ model appears in the Project Browser. To add it to the scene, just drag it onto the viewport. You can also simply drag and drop a USDZ file from a Finder window onto the viewport to import and add it to the scene at the same time. Now, let's position the cloud in our immersive scene. We can move objects around by selecting them and using the handles that appear. Or we can manually set values in the Inspector panel on the right. Since this scene type uses the inferred position of your feet as the origin, we should position the cloud such that it will appear someplace we'll immediately see it. In this case, we'll place it in front and a bit to the right of you, somewhat above eye level. I want this cloud to appear a bit to the right. The positive x-axis is to the right, so let's set X to 50. Notice when we make this change, the cloud moves out of the viewport. To focus on it again, double-click on it in the scene hierarchy on the left. With the cloud visible again, let's think about the Y coordinate. We'd like the cloud to appear above us, so let's place it at a height of 200 centimeters. That's about six and a half feet above the floor. The cloud again leaves the viewport, so let's bring it back into view. We should place the cloud somewhat in front of us so that we don't have to look straight up to see it. The direction away from us is the negative z-axis, so let's set the Z position to -200 centimeters. Double-click on it in the scene hierarchy one more time to bring it front and center. The cloud is on the small side for our immersive scene. Let's see how we can make it bigger. To increase the scale, drag the circle away from it. We'd like it to be about five times bigger than it was when imported. Finally, let's add a second cloud, this time to the left. We can use the Edit menu > Duplicate command to make a copy of the first cloud. To put the copy to the left, set the X coordinate to -50.
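To translate those Reality Composer Pro values into code terms, here is a small sketch (the placeCloud helper and cloud entity are hypothetical; RealityKit positions are in meters) of placing an entity relative to the ImmersiveSpace origin at your feet:

import RealityKit

// Hypothetical helper: position a cloud 50 cm to the right, 200 cm up,
// and 200 cm in front of the origin, then scale it to five times its imported size.
func placeCloud(_ cloud: Entity) {
    cloud.position = SIMD3<Float>(0.5, 2.0, -2.0)
    cloud.scale = SIMD3<Float>(repeating: 5.0)
}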
To frame all the contents of the scene in the viewport, double-click on Root in the hierarchy. Great, now we have a scene with content that's suitable for an immersive experience. Let's save our changes before we go back to Xcode using File > Save All. Reality Composer Pro is a powerful tool for preparing, previewing, and integrating spatial content into your app. For a more detailed introduction, we invite you to watch the session "Meet Reality Composer Pro." The session "Work with Reality Composer Pro content in Xcode" builds on the first and shows you ways to closely integrate the content in your RealityKit content package with your app. The next step is to present the immersive content we've created in our app. The scenes presented by the app are in the source file App.swift, prefixed with the project name. Let's take a closer look at it now. You may have asked yourself how our app knows to present ContentView. We see that our app uses a single WindowGroup to present ContentView as the contents of the volume. WindowGroup is a scene that creates one or more windows or volumes that present the given view. The first scene in the body property is the one that will be presented by the app when it is launched, and you can add additional scenes to your app by adding them after the first scene. We'd like our app to present an immersive space with the content we just created in Reality Composer Pro. The space will show the contents of a new view called ImmersiveView that we will add to our app. We need to assign an ID to the space. We've chosen the string "ImmersiveSpace" as its ID, which we will later use when we open the space. Let's add this code to our project's App.swift source file, and then add code to ImmersiveView to load the new scene we created in Reality Composer Pro. I've already added ImmersiveView.swift to the project using the SwiftUI View template in Xcode. In our project's App.swift, we add the ImmersiveSpace. Then, at the top of ImmersiveView.swift, we import RealityKitContent so that we can use the RealityKit content package. We'll also need to import RealityKit to use RealityView. The default content for ImmersiveView is just a text box. Let's replace it with a RealityView that loads the content from the new scene we added to the content package. To do so, double-click ContentView in the project hierarchy to the left, select and copy the code for the RealityView along with its first closure. We can use its open file tab to go back to ImmersiveView, where we select the text view and then paste to replace it with the RealityView code. You may have noticed we didn't copy the update closure for RealityView. This is because we don't intend to update the contents of this view in response to changing SwiftUI state. Finally, to make it load the content of the Immersive Scene we've created, change the name of the loaded scene from "Scene" to "ImmersiveScene." The preview is now loading the content from ImmersiveScene, but why can't we see it in the preview canvas? When we created ImmersiveView, an Xcode Preview was automatically created for us. Let's take a closer look. If we look at the bottom of ImmersiveView.swift, we see the code that tells Xcode to show a preview. It's the block of code that starts with #Preview. By default, previews are clipped to default scene bounds. If it's presenting a view that loads content outside of these bounds, the content will not be visible. 
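Putting those edits together, ImmersiveView ends up looking roughly like this (a sketch based on the steps just described: the update closure is omitted and the loaded scene name is changed to "ImmersiveScene"):

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Load the immersive scene from the RealityKit content package.
            if let scene = try? await Entity(named: "ImmersiveScene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}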
In order to support previewing immersive content that extends beyond these bounds, simply modify the view being previewed with .previewLayout(.sizeThatFits). Let's do that now. If I add .previewLayout(.sizeThatFits) to ImmersiveView's preview, the preview will update and we will see the immersive content.
Finally, let's have our app open the Immersive Space. If you have worked with multiscene SwiftUI apps on iOS, you may have already seen how additional scenes are opened from SwiftUI code. The first step is to capture the closure from the view's SwiftUI environment, which is then called in response to an event, such as the press of a button. Presenting an Immersive Space works the same way in SwiftUI on the new platform, except that the closure that is captured is called "openImmersiveSpace" and is asynchronous, allowing your code to know when the Immersive Space has been presented. Back in ContentView, we simply capture the openImmersiveSpace closure from the SwiftUI environment, and then add a button that invokes it.
We've now made all of the changes needed for our app to present the immersive content. You can experience your content in the Simulator, but immersion is particularly compelling on the device itself. Let's check it out. We now see a new button that, when pressed, presents our clouds as the content of our ImmersiveSpace. We see two clouds in front of us, one to the left and the other to the right. Note that the Immersive Space is distinct from the initial scene of the app. If we move the initial scene around, we see that the content in the ImmersiveSpace stays fixed. While a person can move the app's initial volume anywhere they like, an ImmersiveSpace is placed at a fixed location when it is opened. Rather than moving the Immersive Space around, you move yourself around inside the Immersive Space. We have built a simple app that presents clouds above your head using an Immersive Space. What if we wanted our app to respond to interactions with the clouds? For simplicity, imagine that tapping on a cloud causes it to float gently across the sky. Let's see how we can accomplish this. For SwiftUI views to respond to input events, you can attach gestures to them. In this example, we have a simple text view. By attaching a TapGesture to the view, we are able to respond when a person taps on the view. When a gesture is attached to a view, it is given a closure to be invoked when the gesture is recognized. Since RealityView is just another SwiftUI view, it will respond to gestures in the same way. However, a RealityView may contain RealityKit content with multiple entities. For example, our app opens an ImmersiveSpace that shows a RealityView containing our cloud models. If a person taps on one of the clouds, SwiftUI invokes the TapGesture on the RealityView. But how do we know which cloud was targeted by the tap? This is where entity targeting comes in. The targetedToAnyEntity modifier works on a gesture attached to a RealityView to determine the exact entity the gesture targeted. Other ways of targeting entities are also available. You can target a specific entity, or target all entities matching a query. For more information, please read the documentation on developer.apple.com. The value passed to the gesture's handlers, such as onEnded, has an entity property that indicates which entity inside the RealityView the person interacted with. Note that for entity targeting to work on a given RealityKit entity, the entity must have both a CollisionComponent and an InputTargetComponent. Requiring RealityKit entities to have these components allows us to limit interactions to only chosen parts of the content in a RealityView. You can add these components to an entity in Reality Composer Pro, or you can add them programmatically in your app. Now that we've seen how entity targeting works, let's use it to detect when a person taps on a cloud. When this interaction occurs, we'll start a RealityKit animation. Let's start by adding the components we need in Reality Composer Pro. In our RealityKit content package, we can select both clouds at once from the view hierarchy using Command-click. We then click the "Add Component" button at the bottom of the Inspector panel and select Collision.
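For reference, the same two components can also be added in code. Here is a minimal sketch (the makeTappable helper and the sphere collision shape are assumptions, not part of the sample) of preparing an entity for entity targeting programmatically:

import RealityKit

// Hypothetical helper: make an entity respond to SwiftUI gestures via entity targeting.
func makeTappable(_ entity: Entity) {
    // Approximate the model with a simple sphere collision shape; any shape that covers it works.
    let shape = ShapeResource.generateSphere(radius: 0.5)
    entity.components.set(CollisionComponent(shapes: [shape]))
    // Allow the entity to receive input from SwiftUI gestures.
    entity.components.set(InputTargetComponent())
}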
In the Inspector panel, we see that a CollisionComponent has been added to the clouds. Notice that Reality Composer Pro creates a CollisionComponent for the model by automatically choosing an appropriate collision shape. You can change this collision shape if needed. We now do the same for the InputTargetComponent. We click the Add Component button again, this time selecting Input Target.
Great! Let's save our changes by selecting File > Save All. To actually make a cloud move across the sky, we'll use a RealityKit animation in the gesture handler that's invoked when a cloud is tapped. We first capture the current value of the cloud's transformation as a mutable value, then we add an offset to the translation to move it 10 centimeters both forward and to the right, and then apply a RealityKit transform animation by calling .move on the cloud entity. Let's go back to Xcode to finish the app. ImmersiveView is the source file where we present the RealityView with the immersive content. Let's add the code to attach a TapGesture to the RealityView, and use entity targeting on it. And when a tap is detected, perform the transform animation. Let's run it in the Simulator and see it in action! We tap the button to open the ImmersiveSpace with our clouds in it, as before. But now, if we tap on a cloud, it floats gently across the sky. Entity targeting is the glue that connects SwiftUI interactions to RealityKit content. In our example, we performed a simple animation on the clouds in response to a tap. In a more complex app, you can use entity targeting to trigger more sophisticated actions, such as presenting additional views, playing audio, or starting animations. We covered many topics today; let's summarize them. We started with how to use Xcode's new project assistant to create your first immersive app. We then introduced the Simulator for the new platform, and showed how Xcode Previews makes it easy to iterate on the content of your app. We also introduced Reality Composer Pro and saw how it enables you to easily prepare and preview RealityKit content. Finally, we showed how to open an ImmersiveSpace and use entity targeting to programmatically enable and respond to interactions with immersive content. We hope you've enjoyed this presentation. We encourage you to explore more in-depth sessions on new SwiftUI and RealityKit APIs, as well as more advanced use cases for Reality Composer Pro. Thanks for watching! ♪
-
6:54 - Glass background effect
VStack {
    Toggle("Enlarge RealityView Content", isOn: $enlarge)
        .toggleStyle(.button)
}
.padding()
.glassBackgroundEffect()
-
7:28 - RealityView
RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
} update: { content in
    // Update the RealityKit content when SwiftUI state changes
    if let scene = content.entities.first {
        let uniformScale: Float = enlarge ? 1.4 : 1.0
        scene.transform.scale = [uniformScale, uniformScale, uniformScale]
    }
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
    enlarge.toggle()
})
-
20:31 - ImmersiveView
// MyFirstImmersiveApp.swift

@main
struct MyFirstImmersiveApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }.windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
-
22:58 - Size that fits
#Preview {
    ImmersiveView()
        .previewLayout(.sizeThatFits)
}
-
23:48 - openImmersiveSpace
struct ContentView: View {
    @Environment(\.openImmersiveSpace) var openImmersiveSpace

    var body: some View {
        Button("Open") {
            Task {
                await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}
-
25:48 - Entity targeting
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // For entity targeting to work, entities must have a CollisionComponent
            // and an InputTargetComponent!
        }
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            print("Tapped entity \(value.entity)!")
        })
    }
}
-
28:56 - Move animation
.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
    // Capture the tapped entity's current transform as a mutable value.
    var transform = value.entity.transform
    // Add an offset to the translation: 10 cm to the right and 10 cm forward (units are meters).
    transform.translation += SIMD3(0.1, 0, -0.1)
    // Animate the entity to the new transform over three seconds.
    value.entity.move(
        to: transform,
        relativeTo: nil,
        duration: 3,
        timingFunction: .easeInOut
    )
})