Behind the Development of an AR-Based Exploration Gameplay

Nov 30
This is a small story about how my team and I designed and developed our first game, with AR-based exploration as one of its core gameplay mechanics.
Games have been a part of our lives for a long time now. Recent advancements in technology have also expanded the boundaries of how a game can be made, even bringing the world of a game into our reality. Yes, I think you know what I'm talking about. Augmented Reality and Virtual Reality let us create games whose worlds are integrated with our own, which makes the experience far more immersive.
However, behind an AR game there are some considerations that need to be weighed before diving into the implementation. So, if you are interested in making one, particularly on iOS with Swift, I would like to share a bit of what I learned while developing the AR functionality in my team's new game, Carita.
This game first came to fruition from our shared concern about the preservation of culture in our country, Indonesia. Our country has a very diverse culture, reflected in the sheer number of cultural artifacts we have; the number of folk stories alone exceeds one thousand. That's impressive, but also concerning: the advancement of technology has faded the use of these folktales as a way to convey moral values and local cultural knowledge, especially to younger children. So we decided to take on the challenge of delivering a new way to enjoy folklore by making a game.
The reason we implemented Augmented Reality as one of our core gameplay features is to promote physical movement for children in our target age range (4–8 years old) instead of focusing heavily on on-screen interaction, since screen time was one of the main concerns parents raised during our research. So we started developing more concepts for how to bring AR into our game. That's when we realized there were some questions we needed to ask ourselves when building gameplay on AR technology:
If you are familiar with Pokémon GO, it combines geo-based infrastructure to obtain coordinates, place checkpoints, and spawn Pokémon around them, while tracking the user's proximity to the spawn via GPS. We couldn't use the same strategy, however: the main concept of our AR gameplay is guided free exploration, which raised safety concerns about taking the exploration outdoors, given our target users' age range.
And so, we decided to limit the exploration to a safe space around the house, such as the living room or the backyard. But this raised another problem: GPS can't be relied upon within such a small space. We therefore needed another way to place our augmented reality objects around the room while catering to the different amounts of free space available in each user's home.
RoomPlan is one of the frameworks our team considered to assist us in placing the objects while staying aware of obstacles along the way. Basically, RoomPlan uses the camera and LiDAR scanner on an iOS device to map the room and, with the help of machine learning, discern obstacles along the way, such as furniture and walls. Its output is a 3D .usdz object that resembles the scanned room's interior. Here is a sample of how to run a RoomPlan session:
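A minimal sketch of what starting a RoomPlan capture session can look like (iOS 16+, LiDAR-equipped device required). The class and property names here are illustrative, not taken from our actual project:

```swift
import UIKit
import RoomPlan

final class RoomScanViewController: UIViewController, RoomCaptureSessionDelegate {
    private var roomCaptureView: RoomCaptureView!
    private let configuration = RoomCaptureSession.Configuration()

    override func viewDidLoad() {
        super.viewDidLoad()
        // RoomCaptureView shows the live scan feedback and owns the session
        roomCaptureView = RoomCaptureView(frame: view.bounds)
        roomCaptureView.captureSession.delegate = self
        view.addSubview(roomCaptureView)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Start scanning the room
        roomCaptureView.captureSession.run(configuration: configuration)
    }

    // Called when the scan ends; the captured data can then be exported
    // to a .usdz file for later use
    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData, error: Error?) {
        // Process the captured room data here
    }
}
```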
This implementation, however, brought us back to point 2 of the questions we needed to pay attention to. RoomPlan is still a fairly new framework from Apple, and as you can probably guess from the description, it requires an iOS device with a LiDAR scanner, which is currently only found on newer devices. Xcode will also warn you that RoomPlan is only available on iOS 16 and above, so it isn't accessible to everyone. It's a pretty neat framework, and we can only hope it becomes more accessible in the future, as it has promising features for anyone who wants to add AR capability to their app or game.
The final concept we settled on is that, during certain parts of the folklore story, there is an exploration segment in which children need to find checkpoints scattered around the house to help the characters reach their destination and continue to the next part of the story. Since the solutions we tried previously couldn't accommodate the placement of these checkpoints well (as mentioned in point 3 of our questions), we finally decided to use a physical medium, an image card, as our object anchor.
Fun fact about object anchoring: image anchors are not the only way to anchor your object in AR. If you use RealityKit as your ARView framework, you can anchor the object to any detected horizontal or vertical surface, or even to the camera itself.
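For reference, here is a brief sketch of those RealityKit anchoring options; the entity names are placeholders:

```swift
import RealityKit

// Anchor content to the first detected horizontal surface (e.g. a table or floor)
let planeAnchor = AnchorEntity(plane: .horizontal)

// Anchor content to the camera, so it follows the device around
let cameraAnchor = AnchorEntity(.camera)

// Entities added as children of an anchor appear at that anchor's position,
// e.g. arView.scene.addAnchor(planeAnchor)
```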
For our implementation, we used a SceneKit-based ARView (ARSCNView) to attach the object to a plane built on top of the image card itself. You can choose SceneKit or RealityKit as your ARView framework depending on your needs and preferences; both provide good 3D object rendering and work well alongside other frameworks, such as ARKit or Metal, in an AR-based application.
Regarding the detection of the card itself, ARKit already covers the machine-learning work needed to detect when a card is visible to the camera, based on the given reference images. All we have to do is set up a configuration and run it in the AR session, like so:
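A hedged sketch of that setup, assuming the reference card images live in an asset catalog group named "AR Resources" (the group name and function name are illustrative):

```swift
import ARKit

func runCardDetection(on sceneView: ARSCNView) {
    // Load the reference images (the card artwork) from the asset catalog
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing AR Resources group in the asset catalog")
    }

    // Configure the session to track those images
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 1

    // Run the session with the given configuration
    sceneView.session.run(configuration)
}
```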
After setting the configuration, we need to implement the renderer function from ARSCNViewDelegate. ARKit calls this function whenever a new anchor's node is added to the AR scene view, so inside it we can check whether the card being scanned in the physical environment matches any of the given reference images. If it does, the renderer places a plane, and the provided object, as child nodes of the AR scene accordingly.
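A sketch of that delegate callback; `modelNode(forCardNamed:)` is a hypothetical helper standing in for however you look up the 3D model that belongs to a given card:

```swift
import ARKit
import SceneKit

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Only react to anchors created for detected card images
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // Build a plane matching the physical size of the scanned card
    let plane = SCNPlane(width: referenceImage.physicalSize.width,
                         height: referenceImage.physicalSize.height)
    plane.firstMaterial?.diffuse.contents = UIColor.clear
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2  // lay the plane flat on the card
    node.addChildNode(planeNode)

    // Hypothetical helper: fetch the 3D model for this card and place it on the plane
    if let name = referenceImage.name,
       let modelNode = modelNode(forCardNamed: name) {
        planeNode.addChildNode(modelNode)
    }
}
```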
I should also mention that we use view models to manage the changes made in the AR scene view when a card is scanned. We define a model consisting of the card's name, its corresponding 3D model's name, and other related attributes, and the view model observes any change made from the controller. When a card is scanned, the controller binds to the view model, fetches the corresponding 3D model, and shows it in the AR scene view, and voilà! We have successfully anchored our 3D object to the scanned card!
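A simplified sketch of that model/view-model pairing; the type names, properties, and sample card entry are all illustrative rather than our actual implementation:

```swift
import Combine

// The model: one entry per scannable card
struct Card {
    let imageName: String  // name of the reference image in the asset catalog
    let modelName: String  // file name of the matching 3D model, e.g. a .scn or .usdz
}

// The view model: publishes the most recently scanned card so the
// controller can react and place the matching 3D model in the scene
final class CardViewModel: ObservableObject {
    @Published private(set) var scannedCard: Card?

    private let cards: [String: Card] = [
        "sample-card": Card(imageName: "sample-card", modelName: "SampleModel.usdz")
    ]

    func cardScanned(named name: String) {
        scannedCard = cards[name]
    }
}
```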
Aside from the checkpoint cards, we also have an additional type of card that contains cultural treasures. When the player scans one of these cards, a treasure chest appears, and once it's opened, the player can collect various cultural items from all around Indonesia.
These items can later be found in the collection room inside our game, which is also built with SceneKit.
The last bit I can share concerns the 3D assets. When using 3D models in an app or game, we learned to pay close attention to each model's poly count: the higher the poly count, the longer the device takes to render the model, and the larger the app or game becomes.
Another thing to consider is the 3D file format you will be using, as each format has different strengths and weaknesses. For example, .fbx supports complex, high-quality graphics but generally has a larger file size, while .obj is much smaller but doesn't support animation and keeps its texture in a separate file. For AR-based apps, the common format is .usdz (required when using RealityKit); however, most 3D modeling software, such as Blender or Maya, can't export to it directly, so you will need a converter for that.
And there we go: building AR capability involves many considerations, from the 3D models to the development frameworks, in order to deliver the AR interactivity we want to achieve.
There are some other exciting gameplay mechanics we developed to support the interactivity and immersion of the folklore storytelling, such as a pattern-tracing game using PencilKit and an obstacle-avoidance game using SceneKit and Core Motion. Those, however, are a story for another day; we've already covered a lot here, and I don't want to make this longer than it already is.
When all is said and done, I'm very grateful to have been involved in this project alongside the talented individuals on my team who made Carita a reality. I'm really looking forward to the future of this project together with everyone on the team!
For those of you interested in finding out more, our game is now available on the App Store for iPad! We hope everyone can experience a fresh, exciting new way to explore Indonesian folklore through Carita: your home, your adventure! Until next time, fellas!
© 2022 Carita Development Team. All Rights Reserved.