SUI '18: Proceedings of the Symposium on Spatial User Interaction


SESSION: Keynote Address

Fusing Interfaces with Matter, Humans and Machines

Advances in the past century have resulted in unprecedented access to empowering technology, with user interfaces that typically provide clear distinction and separation between environments, technology and people.

The progress in recent decades indicates, however, inevitable developments where sensing, display, actuation and computation will seek to integrate more intimately with matter, humans and machines. This talk will explore some of the radical new challenges and opportunities that these advancements imply for next-generation interfaces.

SESSION: Input and Output

Pocket6: A 6DoF Controller Based On A Simple Smartphone Application

We propose, implement and evaluate the use of a smartphone application for real-time six-degrees-of-freedom user input. We show that our app-based approach achieves high accuracy and goes head-to-head with expensive externally tracked controllers. The strength of our application is that it is simple to implement and is highly accessible --- requiring only an off-the-shelf smartphone, without any external trackers, markers, or wearables. Due to its inside-out tracking and its automatic remapping algorithm, users can comfortably perform subtle 3D inputs everywhere (world-scale), without any spatial or postural limitations. For example, they can interact while standing, sitting or while having their hands down by their sides. Finally, we also show its use in a wide range of applications for 2D and 3D object manipulation, thereby demonstrating its suitability for diverse real-world scenarios.

Haptopus: Transferring the Touch Sense of the Hand to the Face Using Suction Mechanism Embedded in HMD

As VR experiences delivered through HMDs have spread, many proposals have sought to improve them by providing tactile information to the fingertips, but such devices are often difficult to attach and detach and hinder free movement of the fingers. To address these issues, we developed Haptopus, which embeds a tactile display in the HMD and presents the touch sensations associated with the fingers to the face. In this paper, we conducted a preliminary investigation to determine the best suction pressure and compared Haptopus with conventional tactile presentation approaches. The results confirmed that Haptopus improves the quality of the VR experience.

SESSION: Sketching and Haptics

Performance Benefits of High-Fidelity Passive Haptic Feedback in Virtual Reality Training

This work investigated how a tracked, real golf club, used for high-fidelity passive haptic feedback in virtual reality, affected performance relative to using tracked controllers for a golf putting task. The primary hypothesis evaluated in this work was that overall accuracy would be improved through various inertial advantages in swinging a real club as well as additional alignment and comfort advantages from placing the putter on the floor. We also expected higher user preference for the technique and correlation with putting performance in the real environment. To evaluate these prospective advantages, a user study with a cross-over design was conducted with 20 participants from the local population. Results confirmed performance advantages as well as preference for the tracked golf club over the controller, but we were not able to confirm a correlation with real-world putting. Future work will investigate means to strengthen this aspect, while evaluating new research opportunities presented by study findings.

Physical Guides: An Analysis of 3D Sketching Performance on Physical Objects in Augmented Reality

Besides sketching in mid-air, Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects, such as a lampshade for a lamp socket. We categorize guidance types of real objects into flat, concave, and convex surfaces, edges, and surface markings. We studied how accurately these guides let users draw 3D shapes attached to physical vs. virtual objects in AR. Results show that tracing physical objects is 48% more accurate, and can be performed in a similar time compared to virtual objects. Guides on physical objects further improve accuracy especially in the vertical direction. Our findings provide initial metrics when designing AR sketching systems.

Multiplanes: Assisted Freehand VR Sketching

The presence of a third dimension makes accurate drawing in virtual reality (VR) more challenging than 2D drawing. These challenges include higher demands on spatial cognition and motor skills, as well as the potential for mistakes caused by depth perception errors. We present Multiplanes, a VR drawing system that supports both the flexibility of freehand drawing and the ability to draw accurate shapes in 3D by affording both planar and beautified drawing. The system was designed to address the above-mentioned challenges. Multiplanes generates snapping planes and beautification trigger points based on previous and current strokes and the current controller pose. Based on geometric relationships to previous strokes, beautification trigger points guide the user to reach specific positions in space. The system also beautifies the user's strokes, based on the most probable intended shape, while the user is drawing them. With Multiplanes, in contrast to other systems, users do not need to manually activate such guides, allowing them to focus on the creative process.

SESSION: Presence and Collaboration

IMRCE: A Unity Toolkit for Virtual Co-Presence

In this paper we present the design, implementation, and evaluation of IMRCE, our immersive mixed reality collaborative environment toolkit. IMRCE is a lightweight, flexible, and robust Unity toolkit that allows designers and researchers to rapidly prototype mixed reality-mixed presence (MR-MP) environments that connect physical spaces, virtual spaces, and devices. IMRCE helps collaborators maintain group awareness of the shared collaborative environment by providing visual cues such as position indicators and virtual hands. At the same time IMRCE provides flexibility in how physical and virtual spaces are mapped, allowing work environments to be optimised for each collaborator while maintaining a sense of integration. The main contribution of the toolkit is its encapsulation of these features, allowing rapid development of MR-MP systems. We demonstrate IMRCE's features by linking a physical environment with tabletop and wall displays to a virtual replica augmented with support for direct 3D manipulation of shared work objects. We also conducted a comparative evaluation of IMRCE against a standard set of Unity libraries with complementary features with 10 developers, and found significant reductions in time taken, total LOC, errors, and requests for assistance.

Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence

When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from the physical world due to technical limitations such as narrow field of view and low fidelity of the virtual body when compared to one's real body.

In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized hand using personalized characteristics and a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task that was based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence, but also supported participants in correctly estimating the size of a virtual object in the proximity of their hand.

Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach

Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another's behavior, can positively affect social interactions. In virtual environments, user behavior can be replicated onto avatars, and agent behaviors can be artificially constructed. By combining both, hybrid avatar-agent technologies aim at actively mediating virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the "Mimicry Injector", that injects artificial mimicry into real-time virtual interactions. In an evaluation study, two participants were embodied in a Virtual Reality (VR) simulation and had to perform a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.

A Look at the Effects of Handheld and Projected Augmented-reality on a Collaborative Task

This paper presents a comparative study between two popular AR systems during a collocated collaborative task. The goal of the study is to start a body of knowledge that describes the effects of different AR approaches on users' experience and performance; i.e., to look at AR not as a single entity with uniform characteristics. Pairs of participants interacted with a game of Match Pairs in both hand-held and projected AR conditions, and their engagement, preference, task completion time, and number of game moves were recorded. Participants were also video-recorded during play for additional insights. No significant differences were found in users' self-reported engagement, and 56.25% of participants described a preference for the hand-held experience. On the other hand, participants completed the task significantly faster in the projected condition, despite having performed more game moves (card flips). We conclude the paper by discussing the effect of these two AR prototypes on participants' communication strategies, and how to design hand-held interfaces that could elicit the benefits of projected AR.

SESSION: Space and Learning

Improving Spatial Orientation in Immersive Environments

In this paper, we present a comparative evaluation of three different approaches to improving users' spatial awareness in virtual reality environments, and consequently their user experience and productivity. Using a scientific visualization task, we test how well 21 participants navigate a virtual immersive environment. Our results suggest that landmarks, a 3D minimap, and waypoint navigation all contribute to improved spatial orientation, with the macroscopic view of the environment provided by the 3D minimap having the greatest positive impact. Users also prefer the 3D minimap for usability and immersion by a wide margin over the other techniques.

Effects of VE Transition Techniques on Presence, Illusion of Virtual Body Ownership, Efficiency, and Naturalness

Several transition techniques (TTs) exist for Virtual Reality (VR) that allow users to travel to a new target location in the vicinity of their current position. To overcome greater distances or even move to a different Virtual Environment (VE), other TTs are required that allow an immediate, quick, and believable change of location. Such TTs are especially relevant for VR user studies and storytelling in VR, yet their effect on experienced presence, the illusion of virtual body ownership (IVBO), and naturalness, as well as their efficiency, is largely unexplored. In this paper we therefore identify and compare three metaphors for transitioning between VEs with respect to those qualities: an in-VR head-mounted display metaphor, a turn-around metaphor, and a simulated-blink metaphor. Surprisingly, the results show that the tested metaphors did not affect experienced presence or IVBO. This is especially important for researchers and game designers who want to build more natural VEs.

Getting There and Beyond: Incidental Learning of Spatial Knowledge with Turn-by-Turn Directions and Location Updates in Navigation Interfaces

Spatial user interfaces that help people navigate often focus on turn-by-turn instructions, ignoring how they might support incidental learning of spatial knowledge. Drawing on theories and findings from spatial cognition, this paper aims to understand how turn-by-turn instructions and relative location updates can support incidental learning of spatial (route and survey) knowledge. We conducted a user study in which people used map-based and video-based spatial interfaces to navigate to different locations in an indoor environment using turn-by-turn directions and relative location updates. Consistent with existing literature, we found that providing only turn-by-turn directions was generally less effective for acquiring spatial knowledge than providing relative location updates; map-based interfaces were generally better for incidental learning of survey knowledge, while video-based interfaces were better for route knowledge. Our results suggest that relative location updates encourage active processing of spatial information, which enables better incidental learning of spatial knowledge. We discuss the implications of our results for design trade-offs in navigation interfaces that facilitate learning of spatial knowledge.

SESSION: Selection and Travel

Evaluating the Effects of Feedback Type on Older Adults' Performance in Mid-Air Pointing and Target Selection

"Hands-free" pointing techniques used in mid-air gesture interaction require precise motor control and dexterity. Although being applied in a growing number of interaction contexts over the past few years, this input method can be challenging for older users (60+ years old) who experience natural decline in pointing abilities due to natural ageing process. We report the findings of a target acquisition experiment in which older adults had to perform "point-and-select" gestures in mid-air. The experiment investigated the effect of 6 feedback conditions on pointing and selection performance of older users. Our findings suggest that the bimodal combination of Visual and Audio feedback lead to faster target selection times for older adults, but did not lead to making less errors. Furthermore, target location on screen was found to play a more important role in both selection time and accuracy of point-and-select tasks than feedback type.

Evaluation of Cursor Offset on 3D Selection in VR

Object selection in a head-mounted display system has been studied extensively. Although most previous work indicates that users perform better when selecting with minimal offset added to the cursor, it is often not possible to directly select objects that are out of arm's reach. Thus, it is not clear whether offset-based techniques result in improved overall performance. Moreover, because the arm and shoulder muscle demands of a hand-held device differ from those of a motion-capture device, selection performance may be affected by the ergonomics of the input device. To explore these uncertainties, we conducted a user study evaluating the effects of four virtual cursor offset techniques on 3D object selection performance, using Fitts' model and the ISO 9241-9 standard, while comparing two input devices in a head-mounted display. The results show that selection with No Offset is most efficient when the target is within reach. When the target is out of reach, Linear Offset outperforms Fixed-Length Offset and Go-Go Offset on movement time, error rate, and effective throughput, as well as in subjective preference. Overall, the Razer Hydra controller provides better and more stable selection performance than the Leap Motion.
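
For readers unfamiliar with the Go-Go technique compared above, the sketch below shows one common formulation of its nonlinear arm extension (following Poupyrev et al.'s original description); the threshold D and gain k are illustrative values only, and the paper's Linear and Fixed-Length offsets are not reproduced here.

    import numpy as np

    def gogo_cursor(hand_pos, chest_pos, D=0.3, k=6.0):
        # Within distance D of the body the cursor follows the hand 1:1;
        # beyond D the virtual reach grows quadratically with the extra distance.
        offset = np.asarray(hand_pos) - np.asarray(chest_pos)
        r = np.linalg.norm(offset)
        if r < 1e-6:
            return np.asarray(chest_pos)
        direction = offset / r
        r_virtual = r if r <= D else r + k * (r - D) ** 2
        return np.asarray(chest_pos) + direction * r_virtual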

Look to Go: An Empirical Evaluation of Eye-Based Travel in Virtual Reality

We present two experiments evaluating the effectiveness of the eye as a controller for travel in virtual reality (VR). We used the FOVE head-mounted display (HMD), which includes an eye tracker. The first experiment compared seven different travel techniques to control movement direction while flying through target rings. The second experiment involved travel on a terrain: moving to waypoints while avoiding obstacles with three travel techniques. Results of the first experiment indicate that performance of the eye tracker with head-tracking was close to head motion alone, and better than eye-tracking alone. The second experiment revealed that completion times of all three techniques were very close. Overall, eye-based travel suffered from calibration issues and yielded much higher cybersickness than head-based approaches.

SESSION: Robotics and Wearables

RobotIST: Interactive Situated Tangible Robot Programming

Situated tangible robot programming allows programmers to reference parts of the workspace relevant to the task by indicating objects, locations, and regions of interest using tangible blocks. While it takes advantage of situatedness compared to traditional text-based and visual programming tools, it does not allow programmers to inspect what the robot detects in the workspace, nor to understand any programming or execution errors that may arise. In this work we propose to use a projector mounted on the robot to provide such functionality. This allows us to provide an interactive situated tangible programming experience, taking advantage of situatedness, both in user input and system output, to reference parts of the robot workspace. We describe an implementation and evaluation of this approach, highlighting its differences from traditional robot programming.

Thumb-In-Motion: Evaluating Thumb-to-Ring Microgestures for Athletic Activity

Spatial user interfaces such as wearable fitness trackers are widely used to monitor and improve athletic performance. However, most fitness tracker interfaces require bimanual interaction, which significantly impacts the user's gait and pace. This paper evaluates a one-handed thumb-to-ring gesture interface for quickly accessing information without interfering with physical activity such as running. Through a pilot study, a minimal gesture set was selected, favoring gestures that could be executed reflexively to minimize distraction and cognitive load. The evaluation revealed that, among the selected gestures, tap, swipe-down, and swipe-left were the easiest to use. Interestingly, motion did not have a significant effect on ease of use or on execution time; however, interacting in motion was subjectively rated as more demanding. Finally, the gesture set was evaluated in real-world applications, with the user performing a running exercise while controlling a lap timer, a distance counter, and a music player.

Development of a Wearable Haptic Device that Presents the Haptic Sensation Corresponding to Three Fingers on the Forearm

Numerous methods have been proposed for presenting tactile sensations from objects in virtual environments. In particular, wearable tactile displays for the fingers, such as fingertip-type and glove-type displays, have been intensely studied. However, the weight and size of these devices typically hinder free finger movement, especially in multi-finger scenarios. To cope with this issue, we have proposed a method of presenting the haptic sensation of the fingertip, including the direction of force, on the forearm. In this study, we extended the method to three fingertips (thumb, index finger, and middle finger) and three locations on the forearm using a five-bar linkage mechanism. We tested whether all of the tactile information presented by the device could be discriminated, and confirmed a discrimination accuracy of about 90%. We then conducted an experiment presenting grasping force in a virtual environment, confirming that our device improved the realism of the experience compared with conditions providing no haptic cues or only vibration cues.

Step Detection for Rollator Users with Smartwatches

Smartwatches enable spatial user input, notably the continuous tracking of physical activity and relevant health parameters. Additionally, smartwatches are gaining social acceptability, even among the elderly. While step count is an essential parameter for calculating the user's spatial activity, current detection algorithms are insufficient for counting steps when using a rollator, a common walking aid for elderly people. In a pilot study conducted with eight different wrist-worn smart devices, an overall recognition rate of only ~10% was achieved. This is because the characteristic motions exploited by step counting algorithms are poorly reflected at the user's wrist when pushing a rollator. The issue also affects other spatial activities such as pushing a pram, a bike, or a shopping cart. This paper therefore introduces an improved step counting algorithm for wrist-worn accelerometers. The new algorithm was first evaluated in a controlled study and achieved promising results, with an overall recognition rate of ~85%. A follow-up preliminary field study with randomly selected elderly rollator users yielded similar detection rates of ~83%. We expect this research to contribute to greater step counting precision in smart wearable technology.
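
The authors' improved algorithm is not detailed in the abstract; as a point of reference, the sketch below shows a generic wrist-worn step counter based on peak detection over the smoothed acceleration magnitude, the kind of baseline that fails for rollator users. All thresholds are illustrative assumptions.

    import numpy as np

    def count_steps(acc, fs=50.0, threshold=1.15, min_interval=0.3):
        # acc: (N, 3) accelerometer samples in g; fs: sampling rate in Hz.
        mag = np.linalg.norm(acc, axis=1)
        win = max(1, int(0.2 * fs))                      # ~0.2 s moving average
        smooth = np.convolve(mag, np.ones(win) / win, mode="same")
        steps, last_peak = 0, -np.inf
        for i in range(1, len(smooth) - 1):
            is_peak = smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
            if is_peak and smooth[i] > threshold and (i - last_peak) / fs >= min_interval:
                steps += 1
                last_peak = i
        return steps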

DEMONSTRATION SESSION: Demonstrations

Air Maestros: A Multi-User Audiovisual Experience Using MR

Extended reality (XR) technology challenges practitioners to rethink methods of representation in art, a topic our laboratory has also been working on [2]. In this demonstration, we present Air Maestros (AM), a multi-user audiovisual experience in mixed reality (MR) space using Microsoft HoloLens. The purpose of AM is to expand the ordinary music sequencer into a three-dimensional (3D), multi-user system. Users place 3D note objects in the MR space and, with a certain gesture, shoot a glowing ball at them. When their shots hit the 3D note objects, audiovisual effects appear at the objects' spatial positions.

Flip-Flop Sticker: Force-to-Motion Type 3DoF Input Device for Capacitive Touch Surface

Cubic Keyboard for Virtual Reality

We developed a cubic keyboard that exploits the three-dimensional (3D) space of virtual reality (VR) environments. The user enters a word by drawing a stroke with the controller. The keyboard consists of 27 keys arranged in a 3 x 3 x 3 (vertical, horizontal, and depth) array; the 26 letters of the alphabet are assigned to 26 of the keys, and the center key is left blank. The user moves the controller to the key of a letter of the word and then selects that key by slowing the movement.
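
A minimal sketch of the key-selection geometry described above, assuming an axis-aligned cube positioned in front of the user; the cell-to-letter assignment and all names are illustrative, not the authors' implementation.

    def key_cell(controller_pos, cube_center, cube_size):
        # Map a 3D controller position to one of the 27 cells of a 3x3x3 keyboard.
        # Returns (ix, iy, iz), each in {0, 1, 2}; (1, 1, 1) is the blank center key.
        cell = cube_size / 3.0
        index = []
        for p, c in zip(controller_pos, cube_center):
            local = min(max(p - (c - cube_size / 2.0), 0.0), cube_size - 1e-9)
            index.append(int(local // cell))
        return tuple(index)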

CVR-Analyzer: A Tool for Analyzing Cinematic Virtual Reality Viewing Patterns

Cinematic Virtual Reality (CVR) has been increasing in popularity over the last years. During our research on user attention in CVR, we encountered many analytic demands and documented potentially useful features. This led us to develop an analyzing tool for omnidirectional movies: the CVR-Analyzer.

MagicPAPER: An Integrated Shadow-Art Hardware Device Enabling Touch Interaction on Kraft paper

As the most common writing material in our daily life, paper is an important carrier of traditional painting, and it also has a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human-computer interaction called MagicPAPER, which is based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make our MagicPAPER more interesting, we developed thirteen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper.

Spatially-Aware Tangibles Using Mouse Sensors

We demonstrate a simple technique that allows tangible objects to track their own position on a surface using an off-the-shelf optical mouse sensor. In addition to measuring the (relative) movement of the device, the sensor also allows capturing a low-resolution raw image of the surface. This makes it possible to detect the absolute position of the device via marker patterns at known positions. Knowing the absolute position may either be used to trigger actions or as a known reference point for tracking the device. This demo allows users to explore and evaluate affordances and applications of such tangibles.
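
A minimal sketch of the tracking idea described above: integrate the sensor's relative deltas and snap to an absolute coordinate whenever a marker at a known position is recognized in the raw surface image. Function and variable names are illustrative, not part of the demo's code.

    def update_position(pos, delta, marker_position=None):
        # pos, delta: (x, y) tuples in surface coordinates.
        # marker_position: absolute (x, y) of a recognized marker, or None.
        if marker_position is not None:
            return marker_position                      # absolute correction
        return (pos[0] + delta[0], pos[1] + delta[1])   # relative dead reckoning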

Slackliner: Using Whole-body Gestures for Interactive Slackline Training

In this demo, we present Slackliner, an interactive slackline training assistant featuring life-size projection, skeleton tracking, and real-time feedback. As in other sports, proper training leads to faster skill acquisition and lessens the risk of injury. We chose a set of exercises from the slackline literature and implemented an interactive trainer that guides the user through the exercises, giving feedback on whether they were executed correctly. Additionally, a post-analysis provides the trainee with more detailed feedback about her performance. The results from a study comparing the interactive slackline training system to a classic approach using a personal trainer indicate that it can serve as an enjoyable and effective alternative to classic training methods (see [1] for more details). The contribution of the present demo is to showcase how whole-body gestures can be used in interactive sports training systems. The design and implementation of the system inform many potential applications, ranging from rehabilitation to fitness gyms and home use.

RealityAlert: Improving Users' Physical Safety in Immersive Virtual Environments

RealityAlert is a hardware device that we designed to alert immersive virtual environment (IVE) users to potential collisions with real-world (RW) objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We define a sensor-actuator mapping that is minimally obtrusive in normal use but alerts effectively in risky situations.
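
One plausible form of such a sensor-actuator mapping is sketched below: each distance sensor drives the vibro-tactile actuator on the corresponding side of the face cushion, silent beyond a warning distance and at full intensity near contact. The distances and the linear ramp are illustrative assumptions, not the authors' calibrated mapping.

    def vibration_intensity(distance_m, warn_dist=1.0, min_dist=0.2):
        # Map a measured obstacle distance (meters) to an intensity in [0, 1].
        if distance_m >= warn_dist:
            return 0.0
        if distance_m <= min_dist:
            return 1.0
        return (warn_dist - distance_m) / (warn_dist - min_dist)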

Using Affective Computing for Proxemic Interactions in Mixed-Reality

Immersive technologies have been touted as empathetic mediums. This capability has yet to be fully explored through machine learning integration. Our demo seeks to explore proxemics in mixed-reality (MR) human-human interactions.

The author developed a system in which spatial features can be manipulated in real time by identifying emotions corresponding to unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One, the first commercial spatial computing head-mounted (virtual retinal) display, is used as the interactive interface.

A novel spatial user interface visualization element is prototyped, one that leverages the affordances of mixed reality by introducing both a spatial and an affective component to interfaces.

POSTER SESSION: Posters

EyeControl: Towards Unconstrained Eye Tracking in Industrial Environments

We propose the idea of a powerful mobile eye tracking platform that enables whole new ways of explicit and implicit human-machine interaction in complex industrial settings. The system is based on two hardware components (NVIDIA Jetson TX2, Pupil Labs eye tracker) and a message-oriented framework for real-time processing [1]. The design is described and potential use cases are sketched.

Real-Time Recognition of Signboards with Mobile Device using Deep Learning for Information Identification Support System

In this paper, we propose a framework in which a mobile device uses deep learning on a server to recognize signboards in the street. The framework enables a user to determine the types of shops at his/her location. Our experimental results revealed that the framework recognized signboards with 86% accuracy within 1 second.
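
The abstract does not specify the client-server protocol; the sketch below merely illustrates the kind of thin mobile client such a framework implies, uploading a camera frame and reading back a label. The URL and response format are hypothetical placeholders.

    import requests

    def classify_signboard(image_path, server_url="http://example.org/classify"):
        # Upload one camera frame to the recognition server and return its label.
        with open(image_path, "rb") as f:
            response = requests.post(server_url, files={"image": f}, timeout=1.0)
        response.raise_for_status()
        return response.json()["label"]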

Spaceline: A Way of Interaction in Cinematic Virtual Reality

Watching omnidirectional movies via head mounted displays puts the viewer inside the scene. In this way, the viewer enjoys an immersive movie experience. However, due to the free choice of field-of-view, it is possible to miss details which are important for the story. On the other hand, the additional space component gives the filmmakers new opportunities to construct stories. To support filmmakers and viewers, we introduce the concept of a 'spaceline' (named in analogy to the traditional 'timeline') which connects movie sequences via interactive regions. We developed a spaceline editor that allows researchers and filmmakers to define such regions as well as indicators for visualising regions inside and outside the current field-of-view.

Virtual Campus: Infrastructure and spatiality management tools based on 3D environments

This paper describes the development and implementation of "Virtual Campus", a prototype that brings together a set of interfaces and interaction techniques (VR, AR, mobile apps, 3D) to provide alternatives to traditionally used systems (web, desktop applications, etc.) for the spatial management of the campus of Universidad Católica de Pereira, including the reservation of classrooms, objects, and zones, security, and other tasks.

Multiple Pointing Method with Smartphone Gyro Sensor

This paper proposes a pointing method named Bring Your Own Pointer (BYOP). BYOP enables additional participants to join a shared-display collaboration and allows users to point at the display simultaneously with their own smartphones. A sticker application was developed to demonstrate BYOP.
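
A minimal sketch of how a smartphone's gyro-derived orientation could be mapped to a pointer on the shared display, assuming yaw and pitch are measured relative to a calibration pose aimed at the display center; the angular ranges are illustrative, as the abstract does not give the exact mapping.

    def gyro_to_screen(yaw_deg, pitch_deg, width, height, fov_h=40.0, fov_v=25.0):
        # fov_h / fov_v: angular span (degrees) that covers the full display.
        x = (0.5 + yaw_deg / fov_h) * width
        y = (0.5 - pitch_deg / fov_v) * height
        # Clamp the pointer to the display bounds.
        return (min(max(x, 0.0), width - 1), min(max(y, 0.0), height - 1))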

Identification of Out-of-View Objects in Virtual Reality

Current Virtual Reality (VR) devices have a limited field-of-view (FOV). A limited FOV amplifies the problem of objects receding from view. Previous work has proposed different techniques to visualize the position of out-of-view objects; however, these techniques do not allow users to identify those objects. In this work, we compare three different ways of identifying out-of-view objects. Our user study shows that participants prefer to have the identification always visible.

An Emotional Spatial Handwriting System

According to graphology, people's emotional states can be detected from their handwriting. Unlike writing on paper, which can be analysed through its on-surface properties, spatial interaction-based handwriting is entirely in-air. Consequently, the techniques used in graphology to reveal the emotions of the writer are not directly transferable to spatial interaction. The purpose of our research is to propose a 3D handwriting system with emotional capabilities.

For our study, we retained eight basic emotions representing a large spectrum of coordinates in Russell's valence-arousal model: afraid, angry, disgusted, happy, sad, surprised, amorous, and serious. We used the Leap Motion sensor (https://www.leapmotion.com) to capture hand motion, and C# with the Unity 3D game engine (https://unity3d.com) for the 3D rendering of the handwritten characters. With our system, users can write freely with their fingers in the air and immerse themselves in their handwriting by wearing a virtual reality headset.

We aim to create a rendering model that can be universally applied to any handwriting and any alphabet: our choice of parameters is inspired by both Latin typography and Chinese calligraphy, characterised by its four elementary writing instruments: the brush, the ink, the brush-stand and the ink-stone. The final parameter selection process was carried out by immersing ourselves in our own in-air handwriting and through numerous trials.

The five rendering parameters we chose are: (1) weight determined by the radius of the rendered stroke; (2) smoothness determined by the minimum length of one stroke segment; (3) tip of stroke determined by the ratio of the radius to the writing speed; (4) ink density determined by the opacity of the rendering material; and (5) in