Immersive Space to Think

Collaborative Lit Review With Synchronous and Asynchronous Awareness across the Reality-Virtuality Continuum ~ ISMAR 23

Ibrahim Tahmid, Francielly Rodrigues, Alexander Giovannelli, Lee Lisle, Jerald Thomas, Doug Bowman

Two colocated AR users and one VR user collaborating with annotations made in both the real and virtual worlds (left). Part of the final layout of the collaborative literature review (top-right). LaTeX file output from the layout (bottom-right).

Collaboration plays a vital role in both academia and industry whenever we need to sift through a large amount of data to extract meaningful insights. These collaborations often involve people living far from each other, with different levels of access to technology. Effective cross-border collaboration requires reliable telepresence systems that support communication, cooperation, and the understanding of contextual cues. In the context of collaborative academic writing, immersive technologies offer novel ways to enhance collaboration and enable efficient information exchange in a shared workspace, yet traditional devices such as laptops still offer better readability for longer articles. We propose the design of a hybrid cross-reality, cross-device networked system that allows users to harness the advantages of both worlds. Our system allows users to import documents from their personal computers (PCs) to an immersive headset, facilitating document sharing and simultaneous collaboration with both colocated and remote colleagues. It also enables a user to seamlessly transition among Virtual Reality, Augmented Reality, and the traditional PC environment, all within a shared workspace. We present the real-world scenario of a globally distributed academic team conducting a comprehensive literature review, demonstrating the system's potential for enhancing cross-reality hybrid collaboration and productivity.
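To make the shared-workspace idea concrete, here is a minimal sketch of how a document update might be broadcast among PC, AR, and VR clients over a simple JSON relay. The names (DocumentState, SyncMessage), the field layout, and the last-writer-wins merge policy are illustrative assumptions, not the actual protocol of the released system.

```python
# Minimal sketch of a shared-workspace sync message, assuming a simple
# JSON relay between PC, AR, and VR clients. All names and fields here
# are hypothetical, not taken from the released system.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class DocumentState:
    doc_id: str                                   # stable id assigned when imported from the PC
    title: str
    position: tuple[float, float, float]          # placement in the shared 3D workspace
    rotation: tuple[float, float, float, float]   # orientation as a quaternion
    annotations: list[str] = field(default_factory=list)

@dataclass
class SyncMessage:
    sender: str        # e.g. "alice-VR", "bob-AR", "carol-PC"
    modality: str      # "VR", "AR", or "PC", usable for contextual cues
    timestamp: float
    doc: DocumentState

    def to_json(self) -> str:
        return json.dumps({
            "sender": self.sender,
            "modality": self.modality,
            "timestamp": self.timestamp,
            "doc": asdict(self.doc),
        })

# A client that moves or annotates a document would broadcast an update;
# remote peers apply the latest state per doc_id (last-writer-wins).
update = SyncMessage(
    sender="alice-VR",
    modality="VR",
    timestamp=time.time(),
    doc=DocumentState(
        doc_id="paper-042",
        title="Immersive Space to Think",
        position=(0.4, 1.5, -0.8),
        rotation=(0.0, 0.0, 0.0, 1.0),
        annotations=["relevant to RQ2"],
    ),
)
print(update.to_json())
```

Carrying the sender's modality in each message is one way a receiving client could render contextual cues, such as indicating whether a collaborator is working in AR, VR, or on a PC.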

Here's a sneak peek into our system (to be released on October 20, 2023).

Evaluating the Feasibility of Predicting Text Relevance from Eye Gaze Data during Sensemaking ~ ISMAR 23

Ibrahim Tahmid, Lee Lisle, Kylie Davidson, Kirsten Whitley, Chris North, Doug Bowman

Overview of the EyeST prototype. a) An analyst views and interacts with text documents in Augmented Reality (AR) while eye-tracking data is collected. b) A hand-registered menu allows the analyst to annotate and search the dataset. c) The system retrieves a word that the analyst paid attention to, and the analyst rates it in terms of relevance, complexity, and familiarity.

Eye gaze patterns vary based on reading purpose and complexity, and can provide insights into a reader's perception of the content. We hypothesize that during a complex sensemaking task with many text-based documents, eye-tracking data can be used to predict the importance of documents and words, which could be the basis for intelligent suggestions made by the system to an analyst. We introduce a novel eye-gaze metric called 'GazeScore' that predicts an analyst's perception of the relevance of each document and word as they perform a sensemaking task. We conducted a user study to assess the effectiveness of this metric and found strong evidence that documents and words with high GazeScores were perceived as more relevant, while those with low GazeScores were considered less relevant. We explore potential real-time applications of this metric to facilitate immersive sensemaking tasks by offering relevant suggestions.
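As an illustration of how a gaze-based relevance score could be computed, the sketch below aggregates fixation dwell time per word and per document and normalizes it. The actual GazeScore formula in the paper may differ; the fixation-event format and the normalization step here are assumptions.

```python
# Illustrative sketch of a gaze-based relevance score, assuming fixation
# events are available as (doc_id, word, duration_ms) tuples. This simply
# aggregates dwell time and normalizes it; the paper's GazeScore may differ.
from collections import defaultdict

def gaze_scores(fixations):
    """fixations: iterable of (doc_id, word, duration_ms)."""
    word_dwell = defaultdict(float)   # total dwell time per (doc, word)
    doc_dwell = defaultdict(float)    # total dwell time per document
    for doc_id, word, duration_ms in fixations:
        word_dwell[(doc_id, word)] += duration_ms
        doc_dwell[doc_id] += duration_ms

    # Normalize so scores are comparable within a session.
    max_word = max(word_dwell.values(), default=1.0)
    max_doc = max(doc_dwell.values(), default=1.0)
    word_scores = {k: v / max_word for k, v in word_dwell.items()}
    doc_scores = {k: v / max_doc for k, v in doc_dwell.items()}
    return word_scores, doc_scores

# Words the analyst dwelt on longest receive the highest scores, which could
# drive real-time suggestions (e.g., highlights or search seeds).
events = [
    ("doc1", "smuggling", 640), ("doc1", "the", 80),
    ("doc1", "smuggling", 410), ("doc2", "weather", 120),
]
word_scores, doc_scores = gaze_scores(events)
print(sorted(word_scores.items(), key=lambda kv: -kv[1]))
```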

Here's a sneak peek into our system.

Immersive technologies provide an unconstrained three-dimensional space for solving sensemaking tasks by enabling rich semantic interaction with the documents in the environment. Sensemaking tasks require the user to create a hypothesis out of raw data from a pile of documents, and isolating the relevant documents from that pile is a vital step. To do that, the user needs to interact with multiple documents at the same time. As the user goes through the documents, she creates several groups, placing similar documents closer to each other. These groups of documents eventually help the user answer questions related to the task.

However, making these groups is a tedious task that requires manual effort from the user. Automating it would save the user valuable time and enable her to focus on the high-level task of extracting insights from the documents. This raises several key questions: How does a user create a cluster of documents in 3D space? How should the 3D clusters be visualized? How can the user interact with a whole cluster instead of single documents?

In this study, we investigate mechanisms for interacting with multiple documents in 3D space to answer these questions. First, we propose an algorithm that dynamically creates clusters of documents that are spatially similar. Second, we compare three different user interfaces for visualizing the created clusters: a 2.5D visualization, a connecting-link visualization, and a color-labeled border technique. Third, for each visual feedback technique, we also consider interaction techniques for selecting and manipulating clusters in the 3D environment. Finally, we propose a user study to compare the effectiveness of manual vs. automated clustering techniques.
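As a rough illustration of the kind of spatial clustering described above, the sketch below merges documents whose 3D positions fall within a distance threshold using union-find. The threshold value, the data format, and the algorithm itself are illustrative stand-ins rather than the exact technique used in the study.

```python
# Minimal sketch of threshold-based spatial clustering of documents in 3D,
# assuming each document has an (x, y, z) position in the workspace.
# Documents closer than `radius` are merged into one cluster via union-find.
import math

def cluster_documents(positions, radius=0.5):
    """positions: dict of doc_id -> (x, y, z). Returns dict doc_id -> cluster id."""
    ids = list(positions)
    parent = {d: d for d in ids}

    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]   # path compression
            d = parent[d]
        return d

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge every pair of documents within the distance threshold.
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(positions[a], positions[b]) <= radius:
                union(a, b)

    return {d: find(d) for d in ids}

# Re-running this whenever a document is moved yields dynamically updated
# clusters that the UI could render as a 2.5D hull, links, or colored borders.
layout = {"d1": (0.0, 1.2, -1.0), "d2": (0.3, 1.2, -1.1), "d3": (2.0, 1.0, -0.5)}
print(cluster_documents(layout, radius=0.5))
```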


Here's a sneak peek of what we have been up to!