Keynotes

Keynote Speaker: Tim Dwyer

Title: Immersive Analytics and Embodied Sensemaking


      Abstract: Immersive Analytics explores the use of emerging display and interaction technologies to bring data out of computers and into the world around us. Tracked VR headset displays offer distinct advantages over traditional desktop data visualisation, such as true 3D rendering of spatial data and natural physical navigation. AR headsets offer further advantages, such as the possibility of embedding data visualisations into our natural environment or workplace. Another advantage touted for immersive representation of data and complex systems is the idea that "embodied interaction" supports sensemaking. However, "sensemaking" is a very high-level cognitive activity, and strong links between embodiment and sensemaking are not well established. In this talk, we first review systems and techniques for immersive analytics, particularly those from the Data Visualisation and Immersive Analytics Lab at Monash University, and then look more closely at developments in understanding "embodied sensemaking". We argue that a better understanding of how embodiment relates to sensemaking will be key to creating a new generation of tools that help people work effectively in an increasingly complex and data-rich world.

      Biography: Professor Tim Dwyer is a co-editor of "Immersive Analytics", published by Springer in 2018, which has had over 36k downloads to date. He received his PhD on "Two and a Half Dimensional Visualisation of Relational Networks" from the University of Sydney in 2005. He was a post-doctoral Research Fellow at Monash University from 2005 to 2008 and then a Visiting Researcher at Microsoft Research USA until 2009. From 2009 to 2012, he was a Senior Software Development Engineer with the Visual Studio product group at Microsoft in the USA. He then returned to Monash as a Larkins Fellow, where he now directs the Data Visualisation and Immersive Analytics Lab.


Keynote Speaker: Kiyoshi Kiyokawa

Title: Toward Smart Smart Glasses


      Abstract: In this talk, the speaker discusses how smart glasses can be enhanced to become truly "smart." The presentation starts by addressing the limitations of smart glasses' basic functionalities, categorizing them into three classes, and discussing how each class can be advanced. Specifically, the speaker highlights the need to improve the performance of smart glasses as display devices (Hop), to introduce features that correct and adjust visual perception for users with atypical vision (Step), and to develop functionalities that allow for the customization of a user's visual perception (Jump). The speaker then argues that current smart glasses are not "smart," as they require users to manually launch apps or select parameters, and they present the same images to all users regardless of individual visual differences. To make smart glasses truly smart, it is imperative to incorporate environmental awareness, user state recognition, and intent estimation. Additionally, with the vision of a future where smart glasses are worn for extended periods, the speaker emphasizes the necessity for smart glasses to continuously assess a user's visual functions to provide an optimal viewing experience. Throughout the talk, the speaker engages in discussions on these functionalities, supplemented with examples from their own research.

      Biography: Kiyoshi Kiyokawa has been a Professor at Nara Institute of Science and Technology since 2017. He received his M.S. and Ph.D. degrees in information systems from Nara Institute of Science and Technology in 1996 and 1998, respectively. He was a Research Fellow of the Japan Society for the Promotion of Science in 1998 and worked for the Communications Research Laboratory (now the National Institute of Information and Communications Technology (NICT)) from 1999 to 2002. He was a visiting researcher at the Human Interface Technology Laboratory at the University of Washington from 2001 to 2002, and an Associate Professor at Takemura Laboratory, Cybermedia Center, Osaka University from 2002 to 2017. His research interests include virtual reality, augmented reality, human augmentation, 3D user interfaces, CSCW, and context awareness. He is a Board Member and Fellow of the Virtual Reality Society of Japan. He has been involved in organizing IEEE and ACM conferences such as the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE Virtual Reality, the IEEE International Symposium on Wearable Computers (ISWC), the IEEE Symposium on 3D User Interfaces (3DUI), and the ACM Symposium on Virtual Reality Software and Technology (VRST). He is an Associate Editor in Chief of IEEE TVCG and an inductee of the IEEE VGTC Virtual Reality Academy (Inaugural Class).


Keynote Speaker: Misha Sra

Title: Augmenting Physical Abilities with AIXR


      Abstract: In this talk, I will share my vision of integrating AI into the real world to transform learning, training, and recovering motor skills. With XR technologies, AI agents can occupy physical space, creating lifelike experiences that enhance engagement and connection. Imagine wearing AR glasses that overlay an AI coach onto your workout environment. As you perform exercises, the AI agent appears beside you, providing instant, personalized feedback on your form, intensity, and progress. It can analyze your movements and compare them to optimal techniques, guiding you to make real-time adjustments for better results. The AI coach can even adapt its approach based on your preferences and learning style, creating a truly personalized and adaptive experience. Whether you are in a gym, at home, or undergoing physical therapy in a clinic, an AI agent can complement human experts, providing continuous support and guidance whenever needed, making assistance accessible regardless of location, cost, or availability.

      Biography: Misha Sra is the John and Eileen Gerngross Assistant Professor of Computer Science at the University of California, Santa Barbara (UCSB), where she directs the Human-AI Integration Lab. She received her PhD from the MIT Media Lab in 2018, advised by Prof. Pattie Maes in the Fluid Interfaces Group. She has published at the most selective HCI, VR, and machine learning venues, such as CHI, UIST, VRST, AAAI, and CVPR, where she has received four best paper awards and honorable mentions. From 2014 to 2015, she was a Robert Wood Johnson Foundation wellbeing research fellow at the Media Lab. In spring 2016, she received the Silver Award in the annual Edison Awards Global Competition, which honors excellence in human-centered design and innovation. MIT selected her as an EECS Rising Star in 2018, and in 2023 she received an NSF CAREER Award for her work in Human-AI Interaction Design. Her research has received extensive coverage from leading media outlets (e.g., Engadget, UploadVR, and MIT Technology Review) and has drawn the attention of industry research groups such as Toyota Research, Samsung Research, and Unity 3D.