Welcome back to the article series “AR design challenges and best practices”. As a reminder, this series showcases best practices and UI solutions for augmented reality design challenges. In today’s article, we share our key findings on the onboarding process and give an overview of interaction patterns encountered in AR mobile applications.
AR is still a new technology that can seem complex due to its technical constraints, and some interactions may be intimidating for users inexperienced with 3D interfaces.
In addition, the user is in an uncontrolled environment and, without any physical assistance at hand, has to understand and overcome these technical challenges alone. As designers, we have to ease the learning curve and progressively introduce users to AR’s constraints and possibilities.
1 – Promoting AR and the application features
During the onboarding process, in the first introductory screens, it’s good practice to present and promote the app’s features in a few simple steps.
In the following example, the “Snappy AR social Network” mobile application showcases its core features and presents the value of AR as a fun, more engaging social network.
2 – Technical considerations
In order to work properly, an augmented reality app relies on certain technical requirements, such as:
- Minimal lighting conditions.
- An uncluttered environment and an unobstructed view, to prevent any injury and provide a better experience.
- An appropriate surface, meaning non-reflective and textured (non-uniform). A blank wall or a white desk is typically not suitable, whereas a carpet works best.
- Moving the camera slowly: the camera needs to be more or less stable to recognize the physical environment and increase tracking accuracy, especially during the scanning phase.
In addition to these, the AR app will require the user’s permission to access the camera and other features.
All of these technical considerations are essential to the experience, and it is important to inform the user:
– upfront, as a forewarning during the walkthrough (so that users are aware of the technical constraints and can prevent issues),
– and for better error handling, such as when camera tracking encounters an issue. In that case, it’s best to display an informative screen providing actionable steps to remedy the situation.
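The error-handling advice above can be sketched as a simple lookup from tracking issues to actionable guidance. This is an illustrative Python sketch only; the state names are hypothetical (loosely inspired by the tracking states exposed by platforms such as ARKit and ARCore), not a real API.

```python
# Hypothetical tracking states mapped to actionable user guidance.
TRACKING_GUIDANCE = {
    "insufficient_light": "The scene is too dark. Try turning on more lights.",
    "excessive_motion": "You are moving too fast. Move your device slowly.",
    "insufficient_features": "Point your camera at a textured, non-reflective surface.",
    "camera_unavailable": "Camera access is required. Enable it in Settings.",
}

def guidance_for(state: str) -> str:
    """Return an actionable message for a tracking issue, or a generic fallback."""
    return TRACKING_GUIDANCE.get(state, "Tracking lost. Try scanning the area again.")
```

Keeping the messages in one table like this also makes them easy to localize and to reuse across the walkthrough and the error screens.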
3 – Setting things up – Scanning
Simply put, the latest augmented reality technology works by scanning the real environment in order to anchor virtual objects at a particular location within it.
This technology, referred to as SLAM (Simultaneous Localization and Mapping), provides more accuracy and broadens the user-experience potential.
Usually, this scanning phase relies on three key stages:
- Informing the user: ask them to move the camera slowly and point it toward an appropriate surface.
- Scanning the environment: it’s good practice to provide visual feedback on the current status of the scanned and tracked area.
- Displaying feedback on the play area: once scanned properly, we can briefly inform the user of the extent of the tracked area.
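The three stages above can be modeled as a tiny state machine. A minimal sketch, where the phase names and the `plane_found` signal are assumptions for illustration, not a real framework API:

```python
from enum import Enum, auto

class ScanPhase(Enum):
    INSTRUCT = auto()   # tell the user to move slowly toward a suitable surface
    SCANNING = auto()   # show live feedback of the scanned and tracked area
    READY = auto()      # surface found: briefly show the extent of the play area

def next_phase(phase: ScanPhase, plane_found: bool) -> ScanPhase:
    """Advance the onboarding scan: instructions lead into scanning,
    and scanning completes only once a surface has been detected."""
    if phase is ScanPhase.INSTRUCT:
        return ScanPhase.SCANNING
    if phase is ScanPhase.SCANNING and plane_found:
        return ScanPhase.READY
    return phase
```

Treating the scan as explicit phases makes it easy to attach the right coaching UI to each one.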
4 – How to move objects
Once the user has scanned their environment, they can place or anchor virtual elements in it and interact with the virtual objects.
When designing AR interactions for handheld devices, it’s best to rely on conventional gestures to avoid reinventing the wheel and having to explain how each works.
Here are some quick examples :
Selection > To select an item, it is common to directly tap on the element of interest.
Moving an object > To move an object in space, the user can tap on an element and drag their finger around the screen to place the object at the desired location.
Scaling an object > To scale an object, the user can pinch two fingers together or apart.
Rotating an object > Rotation can be done with either one or two fingers. The user can place two fingers on the screen and twist them to rotate along a single axis. This can also be achieved with a single finger by swiping horizontally and rapidly.
Of course, these interaction patterns can be challenged if there is a good reason to do so. In any case, it is important to provide coach marks illustrating the different gestures so the user can mimic them. An animated illustration works best.
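Under the hood, the pinch and twist gestures reduce to simple 2D math on the touch points. A minimal sketch, assuming screen-space touch coordinates in pixels:

```python
import math

def pinch_scale(d0: float, d1: float) -> float:
    """Scale factor derived from the change in distance between two fingers:
    fingers moving apart (d1 > d0) scale the object up, and vice versa."""
    return d1 / d0

def twist_angle(p0a, p0b, p1a, p1b) -> float:
    """Rotation angle in radians from a two-finger twist: the change in
    orientation of the segment joining the two touch points, from the
    initial pair (p0a, p0b) to the current pair (p1a, p1b)."""
    a0 = math.atan2(p0b[1] - p0a[1], p0b[0] - p0a[0])
    a1 = math.atan2(p1b[1] - p1a[1], p1b[0] - p1a[0])
    return a1 - a0
```

The resulting scale factor and angle are then applied to the virtual object’s transform each frame.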
5 – Translation
When an object needs to be moved in space, it’s good practice to restrict the movement to one or two axes, based on the plane the object is resting upon.
For instance, if the object is sitting on a horizontal surface, say a floor, the user should be able to move the object along the XZ plane only (meaning horizontally and in depth, but not vertically).
On the other hand, if it is resting on a vertical surface, such as a wall, then the XY plane (meaning horizontally and vertically, but not along the depth axis) would be advised.
If there is a change of height during the translation of an object, it’s best to maintain the previous height and provide an indicator of the destination point.
For greater realism, we can mimic real-world physics depending on the object’s properties, say whether it is heavy, lightweight, or bouncy.
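The plane-constrained translation described above can be sketched as a small helper that zeroes out the disallowed axis, preserving the object’s previous height (or depth). This is a hedged illustration with an assumed axis convention: x = right, y = up, z = depth.

```python
def constrain_translation(position, delta, plane: str):
    """Constrain a 3D translation to the plane the object rests on.
    plane="XZ": horizontal surface (floor) -- no vertical movement.
    plane="XY": vertical surface (wall) -- no movement in depth."""
    x, y, z = position
    dx, dy, dz = delta
    if plane == "XZ":
        return (x + dx, y, z + dz)   # keep the previous height
    if plane == "XY":
        return (x + dx, y + dy, z)   # keep the previous depth
    raise ValueError(f"unknown plane: {plane}")
```

The ignored component of the drag is simply dropped, which is what keeps a floor-bound object from floating upward mid-drag.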
6 – Interactive states
We can use proper feedback to indicate the different interaction states:
If an object is in edit mode, we can use its color property to inform the user.
If the object is being moved, we can color it green or red to indicate whether it is well placed or has collided with another virtual object.
For greater realism, the behavior of the virtual objects should comply with real-world physics.
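The color feedback for these interactive states can be expressed as a single pure function. An illustrative sketch only; the state flags and color names are assumptions:

```python
from typing import Optional

def placement_color(in_edit_mode: bool, colliding: bool) -> Optional[str]:
    """Tint to apply while an object is manipulated: green for a valid
    placement, red for a collision, None for the default appearance."""
    if not in_edit_mode:
        return None
    return "red" if colliding else "green"
```

Centralizing the state-to-color mapping keeps the feedback consistent wherever the object is rendered.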
7 – Indirect interaction
So far we’ve seen direct interactions, such as tapping to select an object or using two fingers to rotate or scale it; however, these can also be achieved with an indirect interaction method.
Indirect interaction relies upon a centered cursor (located in the middle of the screen) to highlight an item, together with an on-screen button, placed within the thumb-reachable area, to validate the intent.
This indirect interaction has the benefit of being less tiresome and more accurate for the selection of small or distant items.
It can also be used to ease the onboarding process and the anchoring of objects.
For instance, the object’s location can be previewed in the middle of the screen; the user can move their device to adjust the location, then press a button to validate and anchor the object into the environment.
We can also stylize the reticle indicator to convey a status such as whether the hovered area is appropriate or an invalid location as shown in the image below.
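The reticle status described here can be derived from the result of a center-screen raycast. A minimal sketch, where `surface` is a hypothetical simplification of whatever hit information the platform’s raycast API would return:

```python
from typing import Optional

# Surface types (assumed names) that are acceptable anchor targets.
VALID_SURFACES = {"horizontal_plane", "vertical_plane"}

def reticle_style(surface: Optional[str]) -> str:
    """Return the reticle state for the surface under the screen center.
    `surface` is the (simplified) result of a center-screen raycast;
    None means nothing was hit."""
    if surface in VALID_SURFACES:
        return "valid"    # e.g. a solid green reticle: tap to anchor here
    return "invalid"      # e.g. a dimmed or red reticle: keep scanning
```

Driving the reticle’s appearance from this single status keeps the “can I place here?” feedback unambiguous.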
AR is still a new medium; there are certain requirements and particularities that we need to be aware of, and thankfully there are conventions and design patterns that we can rely upon.
In all, to provide a useful onboarding and a reliable basis for a good AR experience, we can:
- Promote the application and the AR value, in an informative and explicit way.
- Inform the user of the AR technical constraints and prerequisites for the app to work properly.
- Assist the user during the scanning process to improve the accuracy of the tracking and prevent any technical issues.
- Inform the user how to interact with a three-dimensional object, and rely on conventional gestures unless there is a good reason not to.
- Be mindful of how and where a virtual object can be moved, and provide visual feedback to convey a specific status (collision, edit mode…).
- For greater realism, comply with real-world physics in the behavior and physical properties of objects.
- Consider using indirect interactions to increase comfort, accuracy and reduce fatigue.
And that’s it for today. I hope this article was useful; feel free to share it if you found it compelling.
Apps References :
Conduct AR / A&E® Crime Scene: AR / Myty AR / iScape / ARCore Elements / Snaapy AR Social Network