SUI '24: Proceedings of the 2024 ACM Symposium on Spatial User Interaction

SESSION: Keynote Abstracts

Playing with Tangibles in Virtual Reality

  • Maud Marchal

The talk presents how tangible objects can be used in Virtual Reality to enhance the user’s perception and immersion. In Virtual Reality, the user wearing a Head-Mounted Display is generally unable to see the tangible objects directly and ends up comparing the virtual objects they see with the tangible ones they can feel. This situation often leads to breaks in immersion, thus degrading the user’s experience in virtual environments.

The talk presents some of our latest contributions on using tangible objects to improve 3D interaction with virtual worlds. It will first illustrate how, and to what extent, a discrepancy between tangible and virtual objects can be introduced through different algorithmic strategies without breaking the user’s immersion. The second part will present how the registration between tangible and virtual objects can be improved using new technological solutions combined with appropriate 3D interaction techniques. Finally, the talk introduces some of the upcoming challenges in Virtual Reality for providing haptic feedback through the use of tangible objects.

Spatial Interfaces in Space Exploration

  • Tommy Nilsson

Renewed interest in space exploration is paving the way for humanity’s future among the stars, with ambitious new programs aiming to establish a human presence on the Moon, Mars, and beyond. Realizing this vision will depend on the development of a new generation of reliable, safe, and efficient technologies, along with accompanying astronaut work procedures, to underpin future missions.

Novel prototypes and procedures are typically developed, tested, and refined through field deployments in Earth-based analog environments. These environments, such as the underground cave systems in Sardinia, replicate some of the extreme conditions associated with extraterrestrial settings, thereby providing an operationally valid context for astronaut drills and prototype studies. While generally effective, this approach is facing criticism due to its logistical complexity and often prohibitive costs.

In response, the European Space Agency (ESA) is exploring the potential of virtual and mixed reality technologies as a cost-effective alternative to these traditional analog environments. This talk will present ESA’s initiative, focusing on the benefits and limitations of using immersive and spatial interfaces in the context of human spaceflight. We will conclude by discussing how these interfaces might evolve as we look toward the future.

SESSION: Application & Games

Construction of SVS: Scale of Virtual Twin's Similarity to Physical Counterpart in Simple Environments

  • Xuesong Zhang
  • Adalberto L. Simeone

Due to the lack of a universally accepted definition of the term “virtual twin”, the degree of similarity between physical prototypes and their virtual counterparts varies across research papers. This variability complicates the comparison of results from these papers. To bridge this gap, we introduce the Scale of Virtual Twin’s Similarity (SVS), a questionnaire intended to quantify the similarity between a virtual twin and its physical counterpart in simple environments in terms of visual, physical, environmental, and functional fidelity. This paper describes the development process of the SVS questionnaire items and provides an initial evaluation through two between-subjects user studies that validate the items under the categories of visual and functional fidelity. Additionally, we discuss how to apply the scale in research and development settings.

Trustful Trading in Shared Augmented Reality

  • Marko Ritter
  • Kongmeng Liew
  • Yasas Sri Wickramasinghe
  • Stephan Lukosch

If the Metaverse is to become the visionary space poised to greatly expand human activity, building interpersonal trust within it must be possible. Trust has been described as a "lubricant" for business, innovation, resilience, and even general enjoyment. In its most basic dyadic form, peer-to-peer trading of virtual items by means of shared augmented reality should be possible in a trustworthy manner. Here, we investigate how specific design choices for facilitating such a trade impact trust. A user study with 36 participants showed that requiring a mutual confirmation of an item exchange improves both trust towards the software system and interpersonal trust. We further found that perceived closeness towards the trade peer has a much greater influence on trust than any other effect. We also found strong correlations between user experience and trust. In summary, our research shows that shared augmented reality can provide a great environment for trade and bartering among physically co-located peers, because potentially fraudulent behaviours can be impeded by explicitly displaying item ownership and safeguarding the transfer of ownership.

3D Flight Planning Using Extended Reality

  • Shupeng Wang
  • Adrian Sarbach
  • Martin Raubal

Before boarding their aircraft, all pilots plan their flight. Through this process, they preview flight information and assess the flight situation. However, despite the three-dimensional nature of flying, the majority of planning processes still rely on 2D map-based software. Extended Reality (XR) technology has been widely used in aviation, in particular for pilot and crew member training. Yet there have been very few studies or applications that incorporate 3D visualization and XR into flight planning. To address this gap, our research focuses on the implementation of a 3D flight planning application in XR. Through a user study (N=16), we demonstrate that flight planning in 3D XR outperforms traditional 2D environments in terms of user experience. The study also provides insights into the preferred method of route planning: creating flight routes through a series of waypoints has better usability than drawing flight routes freely in a 3D XR environment.

Becoming Q: Using Design Workshops to Explore Everyday Objects as Interaction Devices for an Augmented Reality Spy Game

  • Mac Greenslade
  • Adrian Clark
  • Stephan Lukosch

The work in this paper extends state-of-the-art research in interaction design for everyday objects as interaction devices in Augmented Reality (AR) by taking a user-defined approach to explore how users understand everyday objects as interaction devices in an AR game. A survey (n = 16) and a workshop (n = 10) were conducted with members of the general public. The survey asked participants to select several everyday objects from their day-to-day life, answer questions regarding each object’s normal function, and consider what the object could do if it were a spy gadget. The workshop followed up on this survey: participants were asked to bring their selected objects along and, considering the objects they and other participants brought, collaboratively created new ideas about how these objects could be used if they were spy gadgets. The workshops were recorded and reviewed using reflexive thematic analysis, identifying four themes for interaction designers in this space to consider: ‘what players look for in objects’, ‘how players want to use objects’, ‘what players want their objects to be capable of in game’ and ‘concerns players have about object use’.

SESSION: Augmented & Mixed Reality

Keep Track! Supporting Spatial Tasks with Augmented Reality Overviews

  • Jannike Illing
  • Uwe Gruenefeld
  • Wilko Heuten

In industrial environments, efficient management of task takeovers is critical, especially when tasks are complex and workers face frequent interruptions such as shift changes. These takeovers often increase cognitive load and the potential for errors. We propose the integration of Augmented Reality (AR) to assist users in handling takeovers of complex, spatially distributed tasks. We developed three visualization techniques with increasing levels of spatial registration – Diagram View as a baseline, Map View, and Location-bound View – ranging from a simple overview to spatially anchored information, to enhance spatial and temporal awareness during task takeovers. A comparative user study (n=24) was conducted to evaluate the effectiveness of these techniques in improving task takeover efficiency. Our results showed that the Location-bound View, in particular, significantly reduced cognitive load and improved task performance by integrating contextual information directly into the user’s field of view (FOV). This work provides insights for AR system designers and suggests that AR overviews with increasing levels of spatial registration can effectively support the takeover of complex distributed tasks in industrial settings.

Support Lines and Grids for Depth Ordering in Indoor Augmented Reality using Optical See-Through Head-Mounted Displays

  • Lisa Marie Prinz
  • Tintu Mathew

X-ray vision is a technique in which Augmented Reality is used to display occluded real-world objects, giving users the impression of being able to see through objects or humans. However, displaying occluded objects means that occlusion is no longer a reliable depth cue, which can lead to incorrect depth ordering. Additional depth cues like support lines and grids showed predominantly positive results with respect to depth perception in AR in previous studies. However, their impact on ordinal depth estimation in x-ray vision applications has not been evaluated yet, and while multiple different designs for lines and grids have been proposed, they have not been compared against each other. We conducted a within-subject user study with 48 participants to explore different support line and grid combinations for x-ray vision in indoor environments. Our results suggest that additional depth cues can increase mental demand and should be selected carefully.

The Impact of Near-Future Mixed Reality Contact Lenses on Users' Lives via an Immersive Speculative Enactment and Focus Groups

  • Robbe Cools
  • Renée Venema
  • Augusto Esteves
  • Adalberto L. Simeone

In this paper we investigate the impact of near-future Mixed Reality (MR) contact lenses on users’ everyday lives via an Immersive Speculative Enactment (ISE) and focus groups. If or when MR technology reaches the same level of ubiquity as current smartphones, it is likely to have a large impact on people’s everyday lives. To gain qualitative insight into this impact, we created an ISE in which participants could experience a simulated MR lens prototype together in groups of four, thereby expanding the ISE method to multiple participants for the first time. This was followed by a focus group in which the impact of the MR lenses was discussed. Participants raised concerns about the future of social interactions and about exercising agency over the device, while also recognising its potential practical applications. Based on these findings we formulate three guidelines for future MR contact lenses.

The Effect of Augmented Reality on Performance, Task Loading, and Situational Awareness in Construction Inspection Tasks

  • Oyindolapo Olabisi Komolafe
  • Colin A. Brett
  • Pouya Pournasir
  • Lloyd Waugh
  • Zhen Lei
  • Daniel J Rea
  • Scott Bateman

The construction industry is characterized by the need to perform detail-oriented tasks in complex environments – requiring tools and systems that prioritize precision, efficiency, and safety. While Augmented Reality (AR) has emerged as a potential avenue for these tools, its effectiveness and impact on performance and situation awareness, as well as the challenges it may introduce, are yet to be fully understood. This research investigates the efficacy of AR in this domain through the representative task of inspecting prefabricated concrete panel casts, using studies complete with visual and auditory distraction simulations to explore two new AR schematic visualization systems. This work employs a dual-task user study (N = 18) to measure the impact of AR on Situation Awareness, Task Loading, and Task Performance when compared to the conventional standard of paper blueprints. We find that AR solutions can lower perceived mental and temporal demands without negatively affecting situation awareness. Further, the AR solutions reduced the rate of false negatives and required less time than paper blueprints, suggesting that AR holds promise for improving construction workflows through increased performance and speed without impacting the safety provided by maintaining situation awareness.

OnArmQWERTY: An Empirical Evaluation of On-Arm Tap Typing for AR HMDs

  • Rajkumar Darbar
  • Xuning Hu
  • Xinan Yan
  • Yushi Wei
  • Hai-Ning Liang
  • Wenge Xu
  • Sayan Sarcar

Text entry is an essential and frequent task in Augmented Reality (AR) applications, yet developing an effective and user-friendly method remains a challenge. This paper introduces OnArmQWERTY, a text entry technique for AR HMDs that allows users to project a virtual QWERTY keyboard onto various locations on their non-dominant hand, including the palm, the back of the hand, and both the anterior and posterior sides of the forearm. Users interact with this overlaid keyboard on their skin by tapping with the index finger of the dominant hand, benefiting from the inherent self-haptic feedback of on-body interaction. A user study involving 13 participants evaluated the performance of OnArmQWERTY compared to a traditional mid-air virtual keyboard. The results demonstrate that OnArmQWERTY significantly improves typing speed and accuracy. Specifically, typing on the palm location outperforms all other on-arm locations, achieving a mean typing speed of 20.18 WPM and a mean error rate of 0.71%, which underscores the importance of comfortable, ergonomic typing postures and effective tactile feedback as key factors enhancing text entry performance.

SESSION: Assistance, Accessibility & Guidance

TAGGAR: General-Purpose Task Guidance from Natural Language in Augmented Reality using Vision-Language Models

  • Daniel Stover
  • Doug Bowman

Augmented reality (AR) task guidance systems provide assistance for procedural tasks by rendering virtual guidance visuals within the real-world environment. Current AR task guidance systems are limited in that they require AR system experts to manually place visuals, require models of real-world objects, or only function for limited tasks or environments. We propose a general-purpose AR task guidance approach for tasks defined by natural language. Our approach allows an operator to take pictures of relevant objects and write task instructions for an end user, which are used by the system to determine where to place guidance visuals. Then, an end user can receive and follow guidance even if objects change locations or environments. Our approach utilizes current vision-language machine learning models for text and image semantic understanding and object localization. We built a proof-of-concept system called TAGGAR using our approach and tested its accuracy and usability in a user study. We found that all operators were able to generate clear guidance for tasks and end users were able to follow the guidance visuals to complete the expected action 85.7% of the time without any knowledge of the tasks.

Goldilocks Zoning: Evaluating a Gaze-Aware Approach to Task-Agnostic VR Notification Placement

  • Cory Ilo
  • Stephen DiVerdi
  • Doug Bowman

While virtual reality (VR) offers immersive experiences, users need to remain aware of notifications from outside VR. However, inserting notifications into a VR experience can result in distraction or breaks in presence, since existing notification systems in VR use static placement and lack situational awareness. We address this challenge by introducing a novel notification placement technique, Goldilocks Zoning (GZ), which leverages a 360-degree heatmap generated using gaze data to place notifications near salient areas of the environment without obstructing the primary task. To investigate the effectiveness of this technique, we conducted a dual-task experiment comparing GZ to common notification placement techniques. We found that GZ had similar performance to state-of-the-art techniques in a variety of primary task scenarios. Our study reveals that no single technique is universally optimal in dynamic settings, underscoring the potential for adaptive approaches to notification management. As a step in this direction, we explored the potential to use machine learning to predict the task based on the gaze heatmap.

Improving Video Navigation for Spatial Task Tutorials by Spatially Segmenting and Situating How-To Videos

  • Book Sadprasid
  • Carl Gutwin
  • Scott Bateman

How-to videos are widely used for accessing instructional content. Many of the tasks covered in these videos are spatial, requiring movement between locations within a physical space to complete different parts of the activity. Conventional linear video interfaces, which often only allow time-based navigation techniques like scrubbing, prove inefficient and cumbersome for such tasks. To address this, we investigate an approach to video browsing and navigation optimized for how-to videos involving spatial tasks: we chapter videos based on where tasks occur and use augmented reality to anchor these video segments to their physical locations via virtual signposts. Through two studies, we demonstrate that our approach outperforms standard and chaptered video interfaces in speed and ease. Our work contributes empirical evidence that spatially segmenting and situating tutorials is a promising strategy for improving video navigation.

Automatic Video-to-Audiotactile Conversion of Golf Broadcasting on A Refreshable Pin Array

  • Haerim Kim
  • Minho Chung
  • Eunchae Kim
  • Yongjae Yoo

Video accessibility is an important but challenging research question. In this study, we implemented and evaluated a system that converts video content into audio clips and tactile icons, without losing context, using a refreshable pin array display. The system converts the contextual information of the video into audio descriptions and tactile scenes, allowing users to hear and touch the content. As an initial target, we selected golf broadcasting, a popular sport in the BLVI community, which has a clear context yet relies heavily on visual features and provides limited information through audio. We extracted contextual information through computer vision to deliver information such as scores and the trajectories and results of shots. We then converted this information to audio via Text-to-Speech and to tactile icons on the pin array. We evaluated the system by conducting a perception experiment and a usability survey, and the results showed that the system effectively converted the information.

Annorama: Enabling Immersive At-Desk Annotation Experiences in Virtual Reality with 3D Point Cloud Dioramas

  • Subramanian Chidambaram
  • Alex C Williams
  • Min Bai
  • Satyugjit Virk
  • Patrick Haffner
  • Matthew Lease
  • Erran Li

Point cloud annotation plays a pivotal role in computer vision and machine learning by facilitating the creation of volumetric annotations in 3D space. While prior research has explored point cloud annotation in VR environments, its practical implementation in space-constrained office settings, where data annotation is typically conducted, remains an open question. In this paper, we introduce Annorama, an interactive system that translates 3D point cloud scenes into miniature desk-scale dioramas, enabling annotation using a unique family of keyboard-assisted mid-air gestures inspired by direct manipulation. Through a within-subjects study with 16 participants, we demonstrate the feasibility of our system by assessing the efficacy of four types of mid-air gestures for drawing cuboid annotations. Our findings suggest that Annorama allows for rapid and accurate annotation of point cloud data, particularly with the Sizing and Two Point Gestures.

SESSION: Multi-User

Examining Pair Dynamics in Shared, Co-located Augmented Reality Narratives

  • Cherelle Connor
  • Eric Cade Schoenborn
  • Sathaporn Hu
  • Thiago Malheiros Porcino
  • Cameron Moore
  • Derek Reilly
  • Wallace S Lages

Augmented reality (AR) allows users to experience stories together in the same physical space. However, little is known about the experience of sharing AR narratives with others. Much of our current understanding is derived from multi-user VR applications, which can differ significantly in presence, social interaction, and spatial awareness from narratives and other entertainment content designed for AR head-worn displays. To understand the dynamics of multi-user, co-located, AR storytelling, we conducted an exploratory study involving three original AR narratives. Participants experienced each narrative alone or in pairs via the Microsoft Hololens 2. We collected qualitative and quantitative data from 42 participants through questionnaires and post-experience semi-structured interviews. Results indicate participants enjoyed experiencing AR narratives together and revealed five themes relevant to the design of multi-user, co-located AR narratives. We discuss the implications of these themes and provide design recommendations for AR experience designers and storytellers regarding the impact of interaction, physical space, spatial coherence, and narrative timing. Our findings highlight the importance of exploring both user interactions and pair interactions as factors in AR storytelling research.

Social VR for Professional Networking: A Spatial Perspective

  • Victoria Chang
  • Ge Gao
  • Huaishu Peng

One essential function of professional events, such as industry trade shows and academic conferences, is to foster and extend a person’s connections to others within the community of their interest. In this paper, we delve into the emerging practice of transitioning these events from physical venues to social VR as a new medium. Specifically, we ask: how does the spatial design in social VR affect attendees’ networking behaviors and experiences at these events? To answer this question, we conducted in-situ observations and in-depth interviews with 13 participants. Each of them had attended or hosted at least one real-world professional event taking place in social VR. We identified four elements of VR spatial design that shaped social interactions at these events: area size, which influenced a person’s perceived likelihood of encountering others; pathways connecting areas, which guided their planning of the next activity to perform; magnets in areas, which facilitated spontaneous gatherings among people; and conventionality, which affected assessments of the appropriateness of a person’s behavior. Some of these elements were interpreted differently depending on the role of the participant, i.e., event hosts vs. attendees. We conclude the paper with multiple design implications derived from our findings.

Social VR Activities Should Support Ongoing Conversation - Comparing Older and Young Adults Desires and Requirements

  • Laura Simon
  • Lina Klass
  • Anton Benjamin Lammert
  • Bernd Froehlich
  • Jan Ehlers
  • Eva Hornecker

Keeping in social contact with friends and family and engaging in social and enjoyable activities is important for our well-being – especially for older adults, who often live far from loved ones. Social VR (SVR) provides a new opportunity for engaging in shared activities beyond just talking. We present findings from interviews with older and young adults on their needs and desires for social VR, especially regarding the types of activity they would like to engage in. We compare these findings to identify differences, commonalities, and opportunities for inter-generational social VR activities. Although older adults favored cultural activities and younger adults favored sport, both user groups preferred low-intensity and game-like activities that allow for ongoing conversation and ’sharing the moment’. Furthermore, ease of use, realistic avatars, and the mitigation of age-related differences were core requirements for the older demographic.

Where to Draw the Line: Physical Space Partitioning and View Privacy in AR-based Co-located Collaboration for Immersive Analytics

  • Inoussa Ouedraogo
  • Huyen Nguyen
  • Patrick Bourdot

This paper investigates two main aspects of co-located collaboration using Augmented Reality (AR) for Immersive Analytics (IA): physical space partitioning and view privacy. AR-based collaborative work in IA can greatly benefit from direct conversational awareness cues and enhanced mutual understanding between users. However, some challenges still exist, particularly in enabling efficient interaction for IA tasks such as analysis and decision-making on complex data within limited physical space. Moreover, collaborative IA often involves both cooperative and individual tasks with experts of diverse backgrounds, necessitating effective workspace management. To address spatial proximity issues in limited space such as offices or meeting rooms, we explored a workspace partitioning approach that divided physical space with virtual boundaries on the floor. We conducted a user study to examine workspace management approaches (partitioning and non-partitioning) in conjunction with view privacy policies (public and private view). Findings suggest that under private view conditions, individual tasks were completed more quickly, and non-partitioning facilitated faster placement of shared objects. Additionally, public view improved object arrangement time in partitioned space.

SESSION: Perception

Mapping Real World Locomotion Speed to the Virtual World in Large Field of View Virtual Environments

  • Ian Smith
  • Erik J Scheme
  • Scott Bateman

In virtual environments, tracking physical movements in the real world and mapping them to movement in a virtual world increases immersion and the experience of presence. For example, walking on a treadmill in the physical world may be mapped to camera movement in a first-person view of the virtual world. However, due to interrelated factors relating to the field of view and the distortion of objects in the virtual environment, matching physical movement speed to virtual movement speed so that it ‘feels right’ to a user can be complex. This perceived mismatch is detrimental, as it can induce motion sickness and reduce the experience of presence. Although previously investigated with head-mounted displays, there is little information about how to overcome this mismatch when using large 2D screens that provide a very different viewing environment. To address this gap, we investigate how a 180-degree display that nearly fills the entire human FOV impacts this perceptual mismatch while walking and running on a treadmill. Our results show that people prefer camera speeds that actually exceed their physical movement speed, and increasingly so at higher speeds. Interestingly, though, people’s tolerance for deviations from the ideal camera speed mapping does not change with movement speed. We propose a simple personalized linear model that can be quickly calibrated for a user to provide the best match. This work provides important findings to inform and improve the design of virtual environments for a better user experience.

Difficulties in Perceiving and Understanding Robot Reliability Changes in a Sequential Binary Task

  • Hiroshi Furuya
  • Laura Battistel
  • Zubin Datta Choudhary
  • Matt Gottsacker
  • Gerd Bruder
  • Gregory F Welch

Human-robot teams push the boundaries of what both humans and robots can accomplish. For the team to function well, the human must accurately assess the robot’s capabilities to calibrate the trust between the human and the robot. In this paper, we use virtual reality (VR), a widely accepted tool for studying human-robot interaction (HRI), to study the human behaviors affecting the detection and understanding of changes in a simulated robot’s reliability. We present a human-subject study examining how different reliability change factors may affect this process. Our results demonstrate that participants make judgements about robot reliability before they have accumulated sufficient evidence to make objectively high-confidence inferences. We show that this observation behavior diverges from the behavior expected based on the probability distribution functions used to describe observation outcomes.

Investigating Presence Across Rendering Style and Ratio of Virtual to Real Content in Mixed Reality

  • Eric DeMarbre
  • Jay Henderson
  • Robert J Teather

We investigate how the amount and rendering style of virtual content impact self-reported presence and subjective preference in an extended reality environment. In a within-subjects experiment, we vary the ratio of virtual to real content across three conditions: low (mostly real with some virtual elements), medium (a balanced mix of both), and high (mostly virtual with no real visual elements). For each ratio, we use two different rendering styles for virtual content: realistic and stylized (cartoon-like), evaluating presence through standardized questionnaires. Our results suggest that different ratios of virtual to real content minimally affect presence, with realistic renderings evoking stronger presence than stylized ones. Participants preferred higher amounts of virtual content and realistic virtual content over stylized versions. These findings imply that coherence and quality of virtual content may contribute more to presence in mixed reality settings than amount of virtual content.

Augmenting Virtual Spatial UIs with Physics- and Direction-Based Visual Motion Cues to Non-Disruptively Mitigate Motion Sickness

  • Zhanyan Qiu
  • Mark McGill
  • Katharina Margareta Theresa Pöhlmann
  • Stephen Anthony Brewster

Using Virtual Reality (VR) technology on moving platforms such as vehicles can be difficult due to significant issues with motion sickness, partly because the physical motion is occluded in VR. Visual cues within VR can mitigate this motion sickness; however, such additional visual cues can disrupt users. This paper presents two studies conducted on a yaw-motion platform that investigate the effectiveness of manipulating the visually perceived motion of spatial UIs within VR environments using novel physics-based cues, reducing motion sickness with less distraction from tasks. The first study validates the effectiveness of our design, while the second compares it with existing solutions (speed/direction-based cues) in terms of motion sickness and distraction levels among VR users. Our findings show that our design can relieve rotational motion sickness while concurrently diminishing distraction. This study serves as a valuable starting point for research into non-disruptively interleaving motion cues with spatial UI components within VR environments to mitigate motion sickness, emphasizing the delicate balance between motion sickness mitigation and preserving the user experience.

SESSION: Selection & Manipulation

Evaluating Node Selection Techniques for Network Visualizations in Virtual Reality

  • Lucas Joos
  • Uzay Durdu
  • Jonathan Wieland
  • Harald Reiterer
  • Daniel A. Keim
  • Johannes Fuchs
  • Maximilian T. Fischer

The visual analysis of networks is crucial for domain experts to understand their structure, investigate attributes, and formulate new hypotheses. Effective visual exploration relies heavily on interaction, particularly the selection of individual nodes. While node selection in 2D environments is relatively straightforward, immersive 3D environments like Virtual Reality (VR) introduce additional challenges such as clutter, occlusion, and depth perception, complicating node selection. State-of-the-art VR network analysis systems predominantly utilize a ray-based selection method controlled via VR controllers. Although effective for small and sparse graphs, this method struggles with larger and denser network visualizations. To address this limitation and enhance node selection in cluttered immersive environments, we present and compare six distinct node selection techniques through a user study involving 18 participants. Our findings reveal significant differences in the efficiency, physical effort, and user preference of these techniques, particularly in relation to graph complexity. Notably, the filter plane metaphor emerged as the superior method for selecting nodes in dense graphs. These insights advance the field of effective network exploration in immersive environments, and our validations provide a foundation for future research on general object manipulation in virtual 3D spaces. Our work informs the design of more efficient and user-friendly VR tools, ultimately enhancing the usability and effectiveness of immersive network analysis systems.

PhoneCanvas: 3D Sketching System Using a Depth Camera-Equipped Smartphone as a Canvas

  • Yuki Takeyama
  • Kousei Nagayama
  • Myungguen Choi
  • Buntarou Shizuki

We present PhoneCanvas, a system that uses a depth camera-equipped smartphone as a canvas, enabling users to draw 3D sketches and view them on a PC in real time. Users can draw lines, erase lines, and draw surfaces by varying their hand gestures, and rotate 3D models by rotating their smartphones. The system allows 3D sketching operations using hand gestures, with the aim of enabling 3D modeling beginners to perform rapid prototyping. PhoneCanvas addresses the issue that few 3D sketching systems for beginners balance both installation cost and operability. We conducted studies with 3D modeling beginners to test the performance of the system. The results showed that the system can be used for rapidly prototyping various 3D models and for supporting discussions.

Guiding Handrays in Virtual Reality: Comparison of Gaze- and Object-Based Assistive Raycast Redirection

  • Jenny Gabel
  • Susanne Schmidt
  • Ken Pfeuffer
  • Frank Steinicke

Handray selection is widely used for hand-tracking-based interactions in head-mounted displays, as it is a simple and straightforward interaction technique. However, selection performance decreases for small and distant objects. It is also negatively affected by input inaccuracies due to natural hand tremors, tracking issues, and movement caused by pinch gestures. Recent work introduced assistive raycast redirection for controller raycasting which facilitates object selection in virtual reality. It applies a gradual proximity and gain-based redirection of the ray towards the target center within a predefined redirection zone. Inspired by this approach, we implemented two redirection techniques for improving handray selection while maintaining ease of use: (1) RayToTarget, which adapts existing controller-based raycast redirection to hand-tracking, and (2) gaze-assisted RayToGaze as a novel technique. We evaluated them together with classic handray in a Fitts’ Law user study. Our findings suggest that both redirection techniques perform significantly better than classic handray, but performance for RayToGaze decreases at greater target depth compared to RayToTarget. Generally, handray redirection was well received and did not decrease the sense of agency. However, different individual preferences and target acquisition strategies affect the user experience for both redirection techniques and might impact selection performance.

Evaluation of Retrieval Techniques for Out-of-Range VR Objects, Contrasting Controller-Based and Free-Hand Interaction

  • David Michael Broussard
  • Christoph W Borst

In VR environments like “sandbox” applications or interactive molecule simulators for education, physics-based objects can move beyond a user’s natural reach. We evaluate four interaction methods to support efficient and intuitive retrieval of objects for such events, including one that has not been evaluated previously and three others derived from well-known techniques. Considering the proliferation of camera-based hand tracking in VR headsets, there is a large interest in contrasting controller-based and free-hand interaction methods, so our evaluation considers both input types to further understand tradeoffs. We gathered performance data and subjective impressions from 52 subjects in a representative game-like puzzle task. A hand-extension (“go-go”-type) technique was least promising, with image-plane and pointing-based techniques being more promising. The recent tether-handle technique was roughly on-par with others for controller input (details vary by metric); it simply uses the same underlying grab metaphor as the main interaction. For free-hand interaction, its performance is reduced in a way that reflects broader problems of grab detection and manipulation methods for whole-hand interaction in VR.

Ubiquitous BlowClick: Non-verbal Vocal Input for Confirmation with Hand-held Mobile Devices in the Field

  • Daniel Zielasko
  • Javier Alejandro Jaquez Lora

Mobile devices have become integral to modern life, with tasks often requiring one-handed interaction. However, conventional methods, such as tapping, face limitations, especially in scenarios where hands-free operation is crucial, like driving or using AR glasses. Alternative approaches, including speech commands and non-verbal vocal input (NVVI), have been explored to address these challenges. While speech commands suffer from inherent delays, NVVI, characterized by simple audio signatures, presents a promising solution. This study integrates and evaluates NVVI on mobile devices compared to traditional tapping interaction in a field study. Participants conducted the study in uncontrolled environments using their own devices. We therefore use machine learning-based NVVI classification on an Android-based architecture, providing a fast and resource-efficient pipeline. In the empirical evaluation, we assess the feasibility and effectiveness of the interface using a reaction time task and ISO 9241-411 Fitts’ Law selection tasks. Our findings indicate that while tapping has a speed advantage in conventional use, blowing is consistently detected and executed efficiently, resulting in significantly faster input, particularly when the hands are initially distant.

SESSION: Poster Abstracts

A Literature Review of Indoor Wayfinding in Virtual Environment: Comparability to Real Environment and Ecological Validity

  • Minghui Liu
  • Ruishen Zheng
  • Ruowen Niu
  • Yirui Zuo
  • Weiwei Zhang

Virtual Reality (VR) technology offers a novel approach to studying human wayfinding behavior under controlled conditions. However, it remains uncertain whether the findings from VR settings are ecologically valid in real environments. We conducted a systematic literature review, compiling an analysis table of experimental factors, conditions, methodologies, and results from relevant studies. We categorized ecological validity into three types: Proven, Extrapolated, and Relevant. This categorization aids wayfinding researchers in understanding the scope and limitations of VR-based studies and supports the effective use of VR technology.

A Nail-tip Device for Gesture and Force Input

  • Tsubasa Otaki
  • Hiroyuki Manabe

Micro-gestures are critical to expanding the personal computing environment. A nail-mounted device enabling micro-gesture and force inputs via the tip of a fingernail is proposed. A prototype with three touch sensors and a strain gauge to support five gestures and directional force input is fabricated. Experiments confirm successful gesture recognition. We introduce three applications to discuss the design space of the proposal.

Administering VR Questionnaires Generated in Google Forms

  • Naz Mokhamed Al Kassm
  • Jay Henderson
  • Robert J Teather

Recent VR research has advocated for deploying questionnaires in VR (in-VRQs) rather than out of VR (out-VRQs). We present a workflow and Unity plugin that streamlines the process of incorporating questionnaires in VR environments. Our approach presents questionnaires as a user-fixed watch in VR that leverages bi-manual interaction for input.

ARCube: Hybrid Spatial Interaction for Immersive Audio

  • Hyunkyung Shin
  • Henrik von Coler

The ARCube is an augmented reality (AR) interface for three-dimensional spatial control, designed to be used next to physical control devices. The user study evaluated an AR interface combined with a MIDI controller in an immersive audio environment. Key issues identified included faulty gesture detection and problems with spatial sound perception. Despite these challenges, most participants reported enhanced engagement and immersion.

Back to (Virtual) Reality: Preferences and Effects of Entry and Exit Transitions to Virtual Experiences for Older Adults

  • Lucie Kruse
  • Leah Knaack
  • Frank Steinicke

Each immersive mixed reality (MR) experience starts with a transition from the real world to virtual reality (VR) and ends with the opposite transition back. The way these transitions are performed can have a significant impact on presence and the overall user experience. However, studies have mostly been conducted with younger users, excluding older adults. In this study, four entry and exit transitions were analyzed with older adults regarding presence, user experience, and preference: (i) a direct transition, (ii) a fade transition, (iii) a feedback transition, and (iv) an active control transition. Results indicate that older adults preferred the active control and the feedback transitions over the two traditional ones, with the active control transition receiving the highest user experience ratings.

Dropping Hints: Visual Hints for Improving Learning using Mobile Augmented Reality

  • Nick Wittig
  • Uwe Gruenefeld
  • Lukas Glaser
  • Mak Krvavac
  • Florian Rademaker
  • Johannes Waltmann
  • Donald Degraen
  • Stefan Schneegass

In traditional learning, students frequently struggle to master mathematical concepts due to the complexity of the content and static, non-interactive teaching methods. The ubiquity of mobile phones and their support for Augmented Reality (AR) technology can improve learning experiences: AR fosters spatial thinking through visual augmentations and enables interaction with dynamic content. In this paper, we propose a mobile AR learning system that uses such visual augmentations to present educational hints of different textual and graphical types. The system considers both teachers’ and students’ perspectives and provides respective interfaces: a web-based authoring tool for teachers and a mobile AR application for students. We conclude our work with ideas for future AR-based learning systems.

Emotion Estimation Using Laban Feature Values Based on a Multi-scale Kinesphere

  • Akari Kubota
  • Sota Fujiwara
  • Satoshi Fukumori
  • Saizo Aoyagi
  • Michiya Yamamoto

Laban’s theory, originally proposed for body expression, describes the relationships between body movements and emotions. In our previous study, we proposed original Laban feature values and demonstrated their effectiveness in estimating emotions. However, they had the limitation of being effective only in certain situations. To address this limitation, we propose novel Laban feature values based on the size of the kinesphere (the movable range of the body) and use them to estimate emotions. The results show the importance of the small kinesphere, which comprises the shoulders, back, and waist. This study shows the shoulders’ critical influence on the accurate estimation of emotions, suggesting that the central body regions are integral to emotional expression.

Enable Natural User Interactions in Handheld Mobile Augmented Reality through Image Computing

  • Qinyang Wu
  • Chen Li

Augmented Reality (AR) has transformed user interaction with the digital world by blending virtual elements with the physical environment. While various AR interaction methods have been studied, most research has concentrated on evaluating interactions within controlled experimental settings, lacking validation in practical application scenarios. This project aims to address this gap by using Unity to develop a handheld mobile AR application that simulates the pottery-making process. Two versions of the application were developed: a conventional touch-based version and a mid-air natural interaction version based on hand detection and tracking. We conducted a user study with 40 participants using a mixed-methods design. By comparing data from the touch-based interaction group (N=20) to the mid-air natural interaction group (N=20), the study found that the touch-based version had better usability, while the mid-air version provided a more immersive experience, as acknowledged by the participants. Future work should focus on improving the mid-air interaction, for example by incorporating better distance indications and interaction feedback, thus further bridging the gap between virtual and real-world interactions.

Enhancing Airport Wayfinding for Older Adults through Gesture-Based Navigation

  • Minghui Liu
  • Yu’an Su

As the proportion of older adults rapidly increases, significant wayfinding challenges arise, particularly in complex environments such as airports. These settings often confuse older adults due to the mix of digital, physical, and procedural information, making navigation more difficult. Although various navigation tools exist, their high cognitive load renders them less accessible for older adults. Common issues include difficulty reading maps, interpreting signage, and operating digital devices.

Gesture-based navigation systems offer a promising solution by providing an intuitive, user-friendly interface. These systems leverage natural human gestures, reducing the learning curve and minimizing cognitive load. Our research introduces a gesture-based navigation aid designed to enhance older adults’ airport experiences. This study proposes a method that reduces navigation errors, is easy to learn, and offers a natural, stress-free interaction, improving their confidence and independence during air travel.

Enhancing the Entertainment Value of Old Maid Card Game with AR Technology

  • Yuki Hamaguchi
  • Toru Abe
  • Takuo Suganuma

Non-digital games provide the tactile and spatial interaction of moving objects in real space, an aspect absent in digital games. This study considers card games as interactive media and introduces an “AR card game” using an optical see-through HMD. The proposed AR card game leverages AR to enhance the actual card game with digital information, preserving its inherent interactivity and augmenting its entertainment value by controlling the information available to players. This paper outlines the development of a card game that incorporates AR-based functionality into the game "Old Maid".

Exploring Gaze-Based Menu Navigation in Virtual Environments

  • László Kopácsi
  • Albert Klimenko
  • Michael Barz
  • Daniel Sonntag

With the integration of eye tracking technologies in Augmented Reality (AR) and Virtual Reality (VR) headsets, gaze-based interactions have opened up new possibilities for user interface design, including menu navigation. Prior research in gaze-based menu navigation in VR has predominantly focused on pie menus, yet recent studies indicate a user preference for list layouts. However, the comparison of gaze-based interactions on list menus is lacking in the literature. This work aims to fill this gap by exploring the viability of list menus for multi-level gaze-based menu navigation in VR and evaluating the efficiency of various gaze-based interactions, such as dwelling and border-crossing, against traditional controller navigation and multi-modal interaction using gaze and button press.

Exploring Gesture Interaction in Underwater Virtual Reality

  • Alexander Marquardt
  • Marvin Lehnort
  • Hiromu Otsubo
  • Monica Perusquia-Hernandez
  • Melissa Steininger
  • Felix Dollack
  • Hideaki Uchiyama
  • Kiyoshi Kiyokawa
  • Ernst Kruijff

An underwater virtual reality (UVR) system with gesture-based controls was developed to facilitate navigation and interaction while submerged. The system uses a waterproof head-mounted display and camera-based gesture recognition, originally trained for above-water conditions, employing three gestures: grab for navigation, pinch for single interactions, and point for continuous interactions. In an experimental study, we tested gesture recognition both above and underwater, and evaluated participant interaction within an immersive underwater scene. Results showed that underwater conditions slightly affected gesture accuracy, but the system maintained high performance. Participants reported a strong sense of presence and found the gestures intuitive while highlighting the need for further refinement to address usability challenges.

Exploring User-Defined Interactions for Virtual Reality

  • Donald Degraen
  • Marco Speicher
  • André Zenner
  • Antonio Krüger

Virtual Reality (VR) is becoming increasingly common. Interactions in VR environments are typically facilitated through controllers or hand-tracking approaches, which often lack user configurability and adaptability. To address this, we explore user-defined interactions for custom proxy objects. We conducted an elicitation study to understand how users would create their own interaction metaphors for common VR tasks, i.e., text typing and color picking. Participants were presented with a 3D-printed object selected from a set of initial requirements and were asked to define their own interaction mappings. In this work, we discuss the resulting concepts and their implications for future developments in personalized interactions within immersive virtual environments.

Eye-Tracking Analysis for Cognitive Load Estimation in Wearable Mixed Reality

  • Paula López
  • Ana María Bernardos
  • José Ramón Casar

Previous research has indicated that cognitive load significantly affects performance and user experience in mixed reality applications. This paper analyses eye-tracking data to investigate their relationship with cognitive load in wearable mixed reality (on a HoloLens 2 headset). The aim is to determine whether fixation and saccadic features, together with age, visual condition, and previous experience with mixed reality systems, can be used to infer cognitive load levels in a specific application scenario. The analysis is based on a user study in which 17 individuals performed a discovery task in a wearable mixed reality application designed for building occupancy monitoring. Results reveal that participants can be divided into two distinct groups that separate individuals by experience and may correspond to two different cognitive load levels, confirming that higher fixation frequency and longer saccade duration appear to be significant predictors of lower cognitive load in wearable mixed reality.

FoldAR: Using the Bended Screen of Foldable Smartphones for Depth Discrimination in Augmented Reality

  • Max Teetz
  • David Petersen
  • Vimal Darius Seetohul
  • Matthias Böhmer

Mobile augmented reality is used in research and in many commercial applications, and handheld AR has been widely investigated. Interaction in 3D space, however, has shortcomings when manipulating objects along the depth axis on a flat 2D display. This paper presents the concept of FoldAR, which uses the two screen parts of foldable smartphones to display a camera view and a map view. With foldable smartphones, users can spatially align the display’s two parts with the dimensions of interaction. This allows users to manipulate objects on two differently oriented screen parts within all three dimensions at once. We contribute a prototype and discuss possibilities and applications.

Hands-on VR Approach to Teach 3D Spatial Concepts of Molecular Reactions

  • Adil Khokhar
  • David Michael Broussard
  • Christoph W Borst

Molecules require a certain relative orientation and proximity to react with each other, a concept that is often difficult for beginners to grasp. We present a virtual reality system designed to help users learn about chemistry reactions with hands-on manipulation and interaction with molecules in a simulated lab environment. Our system emphasizes intuitive molecular interactions over a full molecular dynamics simulation, minimizing erratic molecule motion and sensitivity, in order to help users better understand spatial requirements underlying chemistry reactions. The system uses a novel approach to mapping reagent and product molecules to facilitate reaction detection and animation. We propose to evaluate our system by training a machine learning classifier on motion features (head, hand, and eye) to estimate user understanding of chemistry reactions, thereby enabling adaptive system feedback, such as providing visual hints.

Improving Spatial Awareness in Video Mirror-Mediated XR Telementoring through Visual Cues

  • Jan-Michel Nöring
  • Ernst Kruijff
  • Jonas Schild

The quality of collaboration between an on-site novice user and a remote expert in cross-reality (XR) settings depends on mutual spatial awareness. Non-stereo video mirrors in Augmented Reality (AR) - Virtual Reality (VR) telementoring settings lack the depth cues needed to meaningfully implement 3D interaction techniques like gesture guiding and gaze cues. This work introduces a visual cue for non-static video streams that acts as an add-on to these interaction techniques, drawing from principles of monocular depth perception. Descriptive results from a comparative within-subjects user study indicate that the "illumination cue" can improve spatial awareness, primarily for the remote expert.

Minimizing Errors in Eyes-Free Target Acquisition in Virtual Reality through Auditory Feedback

  • Yota Takahara
  • Arinobu Niijima
  • Chanho Park
  • Takefumi Ogawa

This study aims to support eyes-free target acquisition in virtual reality by enhancing the user’s three-dimensional spatial awareness through sonification mapping pan, frequency, and amplitude to the x, y, and z axes, respectively. Our method provides two types of changing sounds. When multiple targets are sparsely arranged, the sound changes based on the distance between the user’s hand and the targets. When multiple targets are densely arranged, the sound changes based on the distance between the user’s hand and the central coordinate of the target group. Our two user studies, in which the targets were arranged sparsely and densely, respectively, showed that changing the sound exponentially and discretely minimized errors in eyes-free target acquisition.

Mix n' Match Senses, Add or Remove VR: A Customized Approach towards Optimal Visualization Experience

  • Shamima Yasmin

This research investigates the efficacy and usability of multisensory modeling in visualization with the option for virtual reality (VR) integration. With a flexible multisensory mapping and customized interface, users were asked to mix and match senses (visual, audio-visual, visual-haptic, audio-haptic, audio-visual-haptic, and more) and VR options (VR vs non-VR). Research results demonstrated user preference for a VR-enhanced multimodal exploration of virtual objects.

On the Impact of a Simulated Cognitive Augmentation to Detect Deception on Decision-making Confidence

  • Amali Seneviratne
  • Bethany Growns
  • Stephan Lukosch

This paper explores the simulated integration of cognitive augmentation (CA) and augmented reality (AR) in deception detection within negotiation contexts. It assesses how AR visualizations of deception probabilities impact decision-making confidence and user acceptance. The study reveals a strong positive correlation between users’ comfort with CA technologies and their decision-making confidence. This underscores the importance of user-centred design and familiarity with technology for effective CA implementations. The research also addresses public perceptions and ethical considerations, suggesting cautious optimism toward these technologies in high-stakes environments.

PuzzleAide: Comparing Audio and Embodied Assistants for MR Puzzle-Solving

  • Shirin Hajahmadi
  • Fariba Mostajeran
  • Kevin Heuer
  • Anton Lux
  • Gil Otis Mends-Cole
  • Pasquale Cascarano
  • Frank Steinicke
  • Gustavo Marfia

Conversational Virtual Agents (CVAs) have great potential to assist users in their task performance. Whether these agents need to be embodied was the main research question of the present pilot study. To answer it, we designed and developed a Mixed Reality (MR) application for solving a physical puzzle while interacting with a CVA. Eleven participants took part in our between-subjects pilot study and interacted with two different representations of a CVA (i.e., a voice-only and an embodied agent). In this short paper, we descriptively report on the participants’ problem-solving time, the number of assistance requests they made to the CVAs, and the social presence they perceived while doing so.

Ranking Realism in Mixed Reality

  • Eric DeMarbre
  • Robert J Teather

We ranked "realism" in Mixed Reality by assigning numerical values to physical plausibility, polygon count, texture detail, and shadows. An Elo ranking system shows that each graphical element has a distinct and consistent impact on the overall realism ranking, suggesting that such ranking systems may reliably quantify graphical realism.
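For readers unfamiliar with Elo, each pairwise "which looks more real?" judgment updates the ratings of the two compared renderings. The following minimal sketch uses the conventional formula; the K-factor and initial rating are standard defaults, not values reported here.

```python
# Minimal Elo sketch for pairwise realism judgments; K = 32 and the initial
# rating of 1500 are conventional defaults, not values from this work.
def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_winner, r_loser, k=32):
    delta = k * (1.0 - expected(r_winner, r_loser))
    return r_winner + delta, r_loser - delta

ratings = {"shadows_on": 1500.0, "shadows_off": 1500.0}
# One participant judged the shadowed rendering as more "real":
ratings["shadows_on"], ratings["shadows_off"] = update(
    ratings["shadows_on"], ratings["shadows_off"])
print(ratings)
```

Repeated over many judgments, the resulting ratings order the graphical elements by their contribution to perceived realism.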

Simulating Spatial Disorientation in Pilots: Integrated VR and Rotating Chair-Based Approach

  • Seoyeon Park
  • Jiyeong Hong
  • Suchun Park
  • Kyungphil Ryoo
  • Kyoungwoo Lee

Spatial Disorientation (SD) is a critical issue in aviation, often leading to severe accidents when pilots misinterpret their aircraft’s attitude. This paper proposes a novel simulator system designed to induce unrecognized SD using Virtual Reality (VR) and a rotating chair. The system focuses on two cases of SD: False Horizon and Leans. The VR scenario includes multiple phases that challenge participants’ SD by integrating visual and rotational stimuli based on an aerodynamic model. The experimental setup and procedure are designed to simulate realistic flight conditions, providing a controlled environment to study SD dynamics. Future work will involve implementing the prototype system and collecting physiological responses to enhance real-time SD detection and training protocols.

The Avatar Dressing Room - a VR environment for strengthening physical and psychological embodiment with avatars

  • Alon Rosenbaum
  • Michal Rinott

With the rise of remote teams, there is an increasing need for effective ways to foster positive team dynamics and outcomes across distances. One tool created for enhancing team dynamics is the Six Hats method, which encourages team members to tackle problems from various perspectives, each symbolized by a different colored hat [4]. Inspired by this method, we propose a VR meeting environment in which members embody avatars with distinct characteristics and perspectives, enabling new spatial and interpersonal dynamics. To increase participants' physical and psychological embodiment of their avatars, we introduce the Avatar Dressing Room (ADR), a modular virtual dressing room in which participants gradually embody their avatars and adopt their characteristics. This poster presents the design of the ADR spaces and space components. A preliminary evaluation offers insights into fine-tuning the characteristics of the space, avatar choices, and durations to maximize the effectiveness of the ADR.

Training for Ultrasound Imaging: An Evaluation of SonoGame

  • Selin Guergan
  • Victoria Henze
  • Christian Gall
  • Birgitt Schoenfisch
  • Sara Brucker
  • Markus Hahn
  • Sven Bertel
  • Lukas Mayer
  • Matthias Suencksen
  • Michael Teistler

Ultrasound examination is an important imaging method used across the medical field (e.g., [1]). However, during medical education, practical training opportunities for ultrasound imaging are not yet widely available, while good potential exists for approaches that employ gamification (e.g., [3, 5]). SonoGame is a hands-on platform for medical students developed to train ultrasound skills using a game-based approach [6, 7]. It provides a step-by-step introduction to ultrasound imaging and aims to improve ultrasound skills. It consists of several mini-games that challenge the player to examine, identify, and recreate 2D cross-sectional images of a 3D virtual volume underneath the tabletop that contains geometric objects (see Fig. 1). SonoGame focuses on learning how to orient within an ultrasound image and on developing the spatial visualization and hand-eye coordination skills needed for ultrasound imaging. In this contribution, we present and discuss a prospective, single-center study of the effects of SonoGame-based training on ultrasound skills. We recruited 56 medical students without previous practical ultrasound experience and randomly assigned them to either a control or a SonoGame group. While the control group received no training, the SonoGame group played SonoGame repeatedly over a period of four weeks (four sessions of 30 minutes each). For both groups, individual ultrasound skills as well as spatial cross-sectioning skills [2] were assessed before and after the four weeks. Results: Compared to the control group, the SonoGame group significantly improved their ultrasound skills, particularly with complex anatomical structures (see Fig. 2), and, post-training, they reported feeling more familiar with and confident in their use of sonography. Moreover, they considered SonoGame a valuable supportive tool for ultrasound skills training. For individual spatial cross-sectioning skills, both groups improved from the first to the second testing, though the extent of improvement did not differ between groups (see Fig. 3). We conclude that SonoGame-based training can usefully supplement the conventional ultrasound education of medical students, as it promotes and supports the skill set required for ultrasound examinations.

XR4LAW: Implementing an Immersive Ergonomic User Interface for Legislative and Deliberative Institutions

  • Giuseppe Di Maria
  • Shirin Hajahmadi
  • Salvatore Sapienza
  • Gustavo Marfia
  • Monica Palmirani

XR4LAW is an innovative project within the ERC project HyperModeLex that investigates a new working methodology for parliamentarians through the use of Virtual Reality (VR) and Mixed Reality (MR) technologies. XR4LAW creates a Virtual Dashboard with the goal of integrating legislative work into the metaverse. The two-dimensional metaphor currently used for navigating legislative documentation is limited given the complexity of the material that a member of parliament must navigate and search. For this reason, the project provides an immersive environment in which relevant documents can be found through a simple human-computer interaction interface. The application is connected to an eXist-db database, where all legislative documents are available in XML format using the Akoma Ntoso OASIS XML standard applied to European legislation. The primary goal is to develop an ergonomic and intuitive user interface that capitalizes on MR’s capabilities, such as real-world visibility and the use of physical spaces to overlay virtual elements. This immersive environment empowers end users to explore and analyze legal documents in a whole new way, improving the accessibility and efficiency of parliamentary work.
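As an illustration of this architecture, a client could retrieve Akoma Ntoso documents from eXist-db through its standard REST interface. The host, collection path, credentials, and XQuery below are assumptions for this sketch, not the project's actual configuration.

```python
# Hedged sketch: query a local eXist-db instance over its REST interface.
# Endpoint, collection, credentials, and the XQuery are illustrative only.
import requests

ENDPOINT = "http://localhost:8080/exist/rest/db/legislation"
XQUERY = """
declare namespace akn = "http://docs.oasis-open.org/legaldocml/ns/akn/3.0";
for $doc in collection('/db/legislation')//akn:akomaNtoso
return base-uri($doc)
"""

resp = requests.get(ENDPOINT,
                    params={"_query": XQUERY, "_howmany": 20},
                    auth=("admin", ""))
print(resp.status_code)
print(resp.text)  # XML-wrapped list of matching document URIs
```

The immersive front end would then render the returned document list as selectable elements anchored in the physical meeting space.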

SESSION: Demo Abstracts

Adaptive Immersion: Mixed Reality Map Visualization with Gradual Transition

  • Julia Hertel
  • Solmaz Goodarzi
  • Frank Steinicke

In this demo, we present an interactive system aiming to improve the visualization of geospatial data by allowing users to gradually transition from a conventional 2D view, to a 2D view augmented with 3D elements, up to a fully immersive 3D experience. To achieve this, we combine a multi-touch table (MTT) with a mixed reality head-mounted display. Specifically, we utilize the Magic Leap 2 and its dynamic dimming feature, which enables both the augmentation of the MTT with 3D elements and a fully immersive visualization.
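Conceptually, such a gradual transition can be driven by a single parameter that simultaneously controls the headset's global dimming and the visibility of the 3D layer. The sketch below shows only this control logic; it is an assumption-based illustration and does not use the actual Magic Leap API.

```python
# Illustrative control logic only; stage boundaries and mappings are assumptions,
# and this is not the Magic Leap 2 dimming API.
def transition_state(t):
    """t in [0, 1]: 0 = plain 2D table view, 1 = fully immersive 3D scene."""
    t = max(0.0, min(1.0, t))
    if t < 0.33:
        stage = "2D map on the multi-touch table"
    elif t < 0.66:
        stage = "2D map augmented with 3D elements"
    else:
        stage = "fully immersive 3D scene"
    dimming = t                        # 0 = fully see-through, 1 = fully dimmed
    layer_3d_alpha = min(1.0, t * 2)   # 3D elements fade in during the first half
    return stage, dimming, layer_3d_alpha

for t in (0.0, 0.5, 1.0):
    print(t, transition_state(t))
```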

An Eye Tracking Concussion Assessment System with Integrated MR-based Sports Vision Training

  • Kenjiro Okada
  • Saizo Aoyagi
  • Satoshi Fukumori
  • Michiya Yamamoto
  • Hiroyuki Abutani

Concussions can be fatal or cause permanent damage if they occur twice within a short period. This risk has also been emphasized in ball games, and concussion assessments (CAs) such as VOMS have been developed in recent years, but they contain ambiguities. Therefore, this study developed a system that digitizes CA from gaze and head movement using an MR HMD with an eye-tracking function. In particular, we focused on the fact that a user's true ability may not be measured due to unfamiliarity with the device or nervousness. To resolve or reduce these problems, we developed a CA system for daily use with a sports vision training (SVT) function that trains sports-related eye movements.

Exploration of On-skin Interfaces to Enhance Immersive Experiences and Promote Inclusive Applications

  • Bo Hui
  • Zhiyong Xing
  • Hei Chung Pong
  • Leith Kin Yip Chan
  • Yong Hong Kuo

The Cave Automatic Virtual Environment (CAVE)-like system [1] provides a physically immersive environment for users both with and without disabilities. However, the commonly used handheld controllers, while effective, limit user immersion and exclude some individuals with physical disabilities. We propose an innovative interface device aimed at improving user immersion and accessibility in a CAVE-like system. The interface utilizes a nanogenerator-based pulse for signal generation and incorporates a nexus for real-time signal analysis and instruction generation, with the assistance of the tracking system in the CAVE-like system. The device provides users and designers with considerable flexibility to develop interactions tailored to specific gestures, physical capabilities, and ergonomic considerations, effectively overcoming the constraints associated with traditional handheld controllers.

FreelForce: Reel-type Force Feedback Device with HMD for Smooth and Quick Transitions

  • Leo Shimoda
  • Hiroyuki Manabe

Many force feedback devices have been proposed to enhance the immersive experience of VR. However, force feedback is not always necessary in VR, and users sometimes try to touch physical objects with their bare hands while in pass-through mode. We propose "FreelForce," a device that combines an HMD and a reel strap to enable smooth and quick transitions from a hands-free state to a state in which force feedback can be experienced. Three types of accessories and applications are implemented to expand the input functionality for the mixed environment of VR/MR and everyday life and to explore the design space of the technique.

OSIRITap: Whole-Body Interface that Recognizes Tap Input Using Only an HMD and Wrist-Mounted IMUs

  • Naoki Kunieda
  • Hiroyuki Manabe

Virtual and mixed reality (VR/MR) technologies have recently enhanced gesture recognition and motion tracking. While on-body interaction has been widely explored for system input, it suffers from input area limitations and the need for customized or expensive devices. Our proposal, OSIRITap, utilizes only an HMD and wrist-worn IMUs to realize full-body tap input. It performs whole-body and hand tracking using the HMD when the hands are within the field of view (FoV), and by combining HMD and IMU data when the hands are outside the FoV. Experiments confirm that the technique can recognize taps on various body parts. Additionally, we implement two applications to showcase the effectiveness of our approach.
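The source-switching logic can be sketched roughly as follows; the FoV threshold, coordinate conventions, and tap threshold are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: use HMD hand tracking while the hand is inside the field of
# view, otherwise dead-reckon from the wrist IMU. All values are illustrative.
import numpy as np

FOV_HALF_ANGLE = np.radians(45)        # assumed half-angle of the HMD's FoV

def hand_in_fov(hand_dir):
    """hand_dir: unit vector from head to hand in head coordinates (+z forward)."""
    forward = np.array([0.0, 0.0, 1.0])
    angle = np.arccos(np.clip(np.dot(hand_dir, forward), -1.0, 1.0))
    return angle < FOV_HALF_ANGLE

def estimate_hand_position(hand_dir, hmd_hand_pos, last_visible_pos, imu_displacement):
    if hand_in_fov(hand_dir) and hmd_hand_pos is not None:
        return hmd_hand_pos                       # optical tracking from the HMD
    return last_visible_pos + imu_displacement    # IMU-based dead reckoning

def is_tap(accel_magnitude, threshold=25.0):      # m/s^2, illustrative threshold
    return accel_magnitude > threshold
```

A recognized tap would then be assigned to whichever body region the estimated hand position is closest to.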

Real-Time Bidirectional Head Rotation Sharing for Collaborative Interaction Enhancement

  • Wanhui Li
  • Takuto Nakamura
  • Qing Zhang
  • Jun Rekimoto

Remote collaboration is becoming more prevalent, yet it often struggles with effectively conveying the spatial orientation of a remote participant. We introduce an innovative communication method that enables users to share their head direction. While traditional methods like written text and spoken language suit most situations, new approaches are necessary for scenarios lacking sufficient visual or auditory cues. For instance, how can hearing-impaired individuals share directional information during a remote collaborative game? This research presents an interactive system that induces head rotation based on the other user’s head direction, allowing users to grasp each other’s intended direction intuitively. This system improves communication by offering an additional means to share directional cues, especially in settings where visual and auditory cues are inadequate.

RubiXR: Demonstration of dynamic task augmentation through co-design of interactive 3D content and 3D user interfaces

  • Gabriel Lipkowitz

Augmenting manual tasks with digital content is a major area of interest for HCI researchers investigating applications of augmented reality (AR). Taking a research-through-design approach, in this work we propose and demonstrate a novel state-driven, dynamic method for augmenting such tasks with interactive 3D content, using the case study of augmenting user experiences in manipulating a color-coded 3D puzzle cube.

Virtual Breeding Nursery: Towards a VR Digital Twin for Plant Breeding

  • Muhammad Moiz Sakha
  • Florian Daiber
  • Christoph Tieben
  • Matthias Enders

In this demo, we present the Virtual Breeding Nursery, a VR application that enables plant breeders to remotely assess plant traits. Large-scale plant breeding trials are essential for identifying seed varieties with desirable traits, such as higher yield and resistance to diseases and pests. These trials, conducted over several years and across various locations, require frequent and labor-intensive evaluations. The Virtual Breeding Nursery, developed in collaboration with plant breeders, provides photo-realistic visualizations of 3D digital twins of breeding candidates within an immersive VR environment. It allows for efficient remote assessment of plant traits, side-by-side comparisons of breeding candidates from the same or different locations, and tracking of their development over time. An autonomous robotic system was developed to regularly capture GPS-localized multimodal data of breeding candidates, including high-resolution images, 3D laser scans, and multispectral data.

Webcam-based Hand- and Object-Tracking for a Desktop Workspace in Virtual Reality

  • Sebastian Pape
  • Jonathan Heinrich Beierle
  • Torsten Wolfgang Kuhlen
  • Tim Weissker

Because virtual reality occludes the user's view, challenges arise when interaction with the physical surroundings is still needed. In a seated workspace environment, interaction with the physical surroundings can be essential for productive work: using, e.g., a physical mouse and keyboard is difficult when there is no visual reference to where they are placed. This demo shows a combination of computer-vision-based marker detection with machine-learning-based hand detection to bring the user's hands and arbitrary objects into VR.
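Such a pipeline could be prototyped, for instance, with OpenCV's ArUco markers for tagged desk objects and MediaPipe Hands for hand landmarks. The sketch below is an assumption-based outline rather than the authors' implementation; the marker dictionary, camera index, and frame limit are illustrative.

```python
# Hedged outline: ArUco markers locate tagged objects (mouse, keyboard, mug, ...),
# MediaPipe Hands provides hand landmarks; both would be streamed to the VR scene.
# Requires opencv-contrib-python (>= 4.7) and mediapipe; values are illustrative.
import cv2
import mediapipe as mp

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
hands = mp.solutions.hands.Hands(max_num_hands=2)

cap = cv2.VideoCapture(0)              # webcam observing the desk
for _ in range(300):                   # process a bounded number of frames
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # corners/ids                  -> 2D poses of the tagged physical objects
    # result.multi_hand_landmarks  -> 21 landmarks per detected hand
    # Both would be sent to the VR application to render proxies in the HMD.
cap.release()
```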