Projects

Collaborative Lit Review With Synchronous and Asynchronous Awareness across the Reality-Virtuality Continuum ~ ISMAR 23

Ibrahim Tahmid, Francielly Rodrigues, Alexander Giovannelli, Lee Lisle, Jerald Thomas, Doug Bowman

Two colocated AR users and one VR user collaborating with annotations made in both the real and virtual worlds (left). Part of the final layout of the collaborative literature review (top-right). LaTeX file output from the layout (bottom-right).

Collaboration plays a vital role in both academia and industry whenever we need to browse through a large amount of data to extract meaningful insights. These collaborations often involve people living far apart, with different levels of access to technology. Effective cross-border collaboration requires reliable telepresence systems that support communication, cooperation, and the understanding of contextual cues. In the context of collaborative academic writing, immersive technologies offer novel ways to enhance collaboration and enable efficient information exchange in a shared workspace, while traditional devices such as laptops still offer better readability for longer articles. We propose the design of a hybrid cross-reality, cross-device networked system that lets users harness the advantages of both worlds. Our system allows users to import documents from their personal computers (PCs) into an immersive headset, facilitating document sharing and simultaneous collaboration with both colocated and remote colleagues. It also enables a user to seamlessly transition between Virtual Reality, Augmented Reality, and the traditional PC environment, all within a shared workspace. We present the real-world scenario of a global academic team conducting a comprehensive literature review, demonstrating the system's potential for enhancing cross-reality hybrid collaboration and productivity.
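To make the shared-workspace idea concrete, here is a minimal sketch of the kind of relay a hybrid system like this could use to keep every connected device (headset or PC) in sync. The transport (JSON lines over TCP) and the message schema are illustrative assumptions of mine, not the actual implementation:

```python
# Minimal sketch of a shared-workspace relay: every event a client
# sends (e.g. importing or moving a document) is rebroadcast to all
# other clients. The JSON schema here is hypothetical.
import asyncio
import json

clients = set()  # StreamWriters for all connected headsets / PCs

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    clients.add(writer)
    try:
        while line := await reader.readline():  # one JSON object per line
            event = json.loads(line)
            # e.g. {"type": "import_document", "doc_id": 7, "text": "..."}
            # or   {"type": "move_document", "doc_id": 7, "pose": [x, y, z]}
            data = (json.dumps(event) + "\n").encode()
            for peer in clients:
                if peer is not writer:  # everyone except the sender
                    peer.write(data)
                    await peer.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9009)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```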

Here's a sneak peek into our system (to be released on October 20, 2023).

Evaluating the Feasibility of Predicting Text Relevance from Eye Gaze Data during Sensemaking ~ ISMAR 23

Ibrahim Tahmid, Lee Lisle, Kylie Davidson, Kirsten Whitley, Chris North, Doug Bowman

Overview of the EyeST prototype. a) An analyst views and interacts with text documents in Augmented Reality (AR) while eye-tracking data is collected. b) A hand-registered menu allows the analyst to annotate and search the dataset. c) The system retrieves a word that the analyst paid attention to, and the analyst rates it in terms of relevance, complexity, and familiarity.

Eye gaze patterns vary based on reading purpose and complexity, and can provide insights into a reader's perception of the content. We hypothesize that during a complex sensemaking task with many text-based documents, eye-tracking data can be used to predict the importance of documents and words, which could be the basis for intelligent suggestions made by the system to an analyst. We introduce a novel eye-gaze metric called "GazeScore" that predicts an analyst's perception of the relevance of each document and word as they perform a sensemaking task. We conducted a user study to assess the effectiveness of this metric and found strong evidence that documents and words with high GazeScores are perceived as more relevant, while those with low GazeScores are considered less relevant. We explore potential real-time applications of this metric to facilitate immersive sensemaking tasks by offering relevant suggestions.
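To give a flavor of the idea, here is a minimal sketch of a gaze-based relevance score computed from fixation events; the weighting and normalization are illustrative stand-ins, not the GazeScore formulation from the paper:

```python
# Minimal sketch: aggregate fixation time per word and normalize, so
# words the analyst dwells on (or re-reads) score near 1.0. The real
# GazeScore metric is more involved than this illustration.
from collections import defaultdict

def gaze_scores(fixations):
    """fixations: iterable of (word, fixation_duration_ms) events."""
    total = defaultdict(float)
    for word, duration_ms in fixations:
        total[word.lower()] += duration_ms
    peak = max(total.values(), default=1.0)
    return {word: t / peak for word, t in total.items()}

# Hypothetical eye-tracker output while reading a document:
fixations = [("vaccine", 220), ("the", 40), ("vaccine", 310), ("trial", 180)]
print(gaze_scores(fixations))
# {'vaccine': 1.0, 'the': 0.075..., 'trial': 0.339...}
```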

Here's a sneak peek into our system.

CLUE HOG ~ IEEEVR 23

Alexander Giovannelli, Francielly Rodrigues, Shakiba Davari, Ibrahim Tahmid, Logan Lane, Cherelle Connor, Kylie Davidson, Gabriella N. Ramirez, Brendan David-John, Doug A. Bowman

CLUE HOG is an immersive competitive lock-unlock experience that uses the Hook On Go-Go (HOG) technique for authentication in the Metaverse. This paper presents our solution to the 2023 3DUI Contest challenge. Our goal was to provide an immersive VR experience that engages users in privately securing and accessing information in the Metaverse while improving authentication-related interactions inside our virtual environment. To achieve this goal, we developed an authentication method that uses a virtual environment's individual assets as security tokens. To improve the token selection process, we introduce the HOG interaction technique. HOG combines two classic interaction techniques, Hook and Go-Go, improving approximate object targeting while further obfuscating the user's password token selections. We created an engaging mystery-solving mini-game to demonstrate our authentication method and interaction technique.
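For readers unfamiliar with Go-Go, here is a minimal sketch of the classic arm-extension mapping that HOG builds on; the threshold and gain constants are illustrative, not the values used in CLUE HOG:

```python
# Classic Go-Go mapping (Poupyrev et al.): the virtual hand tracks the
# real hand linearly near the body, then is amplified nonlinearly so a
# small real reach selects far-away objects. Constants are illustrative.
def gogo_reach(real_dist: float, threshold: float = 0.4, gain: float = 6.0) -> float:
    """Map the real hand's distance from the body (meters) to the
    virtual hand's distance."""
    if real_dist <= threshold:
        return real_dist  # 1:1 within the arm's comfortable range
    return real_dist + gain * (real_dist - threshold) ** 2

for d in (0.3, 0.5, 0.7):
    print(f"{d:.1f} m real -> {gogo_reach(d):.2f} m virtual")
# 0.3 -> 0.30, 0.5 -> 0.56, 0.7 -> 1.24
```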

Here's a video presentation of our work.

Clean The Ocean ~ IEEEVR 22

Lee Lisle, Feiyu Lu, Shakiba Davari, Ibrahim Asadullah Tahmid, Alexander Giovannelli, Cory Ilo, Leonardo Pavanatto, Lei Zhang, Luke Schlueter, Doug Bowman

Winner of the 3DUI Contest at IEEEVR 2022!

The 3DUI Contest for IEEEVR 2022 asked for entries that showcase an XR application with the theme of "Arts, Science, Information, and Knowledge-Visualized and Interacted". It had the further goal of taking classic or well-known 3D user interaction techniques and enhancing or modifying them to be more effective in a given scenario.

We developed an application focused on the trash accumulating in the ocean, an important global issue that needs to be addressed. Our application puts the user in the role of a research scientist who has created a new submarine that can clear trash more easily using two new 3D user interaction techniques that enhance the "Go-Go" and "World in Miniature" techniques. We dubbed these Relative Mapping X-ray Go-Go (ReXGoGo) and Rabbit-out-of-hat World in Miniature (RoHWiM). More details can be seen in our published contest entry here.

To try it yourself, download for Oculus Quest here, and for SteamVR here.


Here's a sneak peek of our entry.


Here's a walkthrough of our application.


Immersive Space to Think

Ibrahim Asadullah Tahmid

Immersive technologies provide an unconstrained three-dimensional space for sensemaking tasks, enabling rich semantic interaction with the documents in the environment. Sensemaking tasks require the user to build a hypothesis out of raw data from a pile of documents, and isolating the relevant documents from the pile is a vital first step. To do that, the user needs to interact with multiple documents at the same time. As the user goes through the documents, she forms several groups, placing similar documents closer to each other. These groups of documents eventually help the user answer questions related to the task.

However, making the groups is a tedious task that requires manual effort from the user. Automating it would save the user valuable time and let her focus on the high-level task of extracting insights from the documents. This raises several key questions: How does a user create a cluster of documents in 3D space? How should the 3D clusters be visualized? How can the user interact with a whole cluster instead of single documents?

In this study, we investigate mechanisms for interacting with multiple documents in 3D space to answer these questions. First, we propose an algorithm that dynamically creates clusters from documents that are spatially close (see the sketch below). Second, we compare three different user interfaces for visually indicating a created cluster: 2.5D visualization, connecting-link visualization, and a color-labeled border technique. Third, for each kind of visual feedback, we also consider interaction techniques for selecting and manipulating clusters in the 3D environment. Finally, we propose a user study to compare the effectiveness of manual vs. automated clustering techniques.
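As an illustration of the first step, here is a minimal sketch of distance-based clustering over document positions in 3D; the fixed radius rule is a stand-in for the dynamic algorithm described above:

```python
# Minimal sketch: single-link clustering of documents by 3D distance,
# using union-find. The fixed radius is an illustrative stand-in for
# the dynamic clustering algorithm proposed in the study.
import math

def cluster_documents(positions, radius=0.5):
    """positions: list of (x, y, z) document positions in meters.
    Returns lists of document indices that form spatial clusters."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= radius:
                parent[find(i)] = find(j)  # merge the two groups

    clusters = {}
    for i in range(len(positions)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

docs = [(0.0, 1.5, 1.0), (0.2, 1.5, 1.0), (2.0, 1.5, 1.0)]
print(cluster_documents(docs))  # [[0, 1], [2]]
```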


Here's a sneak peek of what we have been up to!


Fantastic Voyage 2021 ~ IEEEVR 21

Lei Zhang, Feiyu Lu, Ibrahim Asadullah Tahmid, Lee Lisle, Shakiba Davari, Nicolas Gutkowski, Luke Schlueter, Doug Bowman

Winner of the 3DUI Contest at IEEE VR 2021!

We created a fictional story themed on the currently approved COVID-19 vaccines, which were developed using the new mRNA method: the genetic information encoding the virus's spike protein is packed into vaccine particles as messenger RNA. Vaccines developed with this method are unstable in the human body and break down quickly once injected. On the other hand, research in targeted immunotherapies shows that vaccine delivery directed at antigen-presenting cells (APCs), like dendritic cells, has the potential to maximize the effectiveness of the vaccine, because APCs play a critical role in activating other immune cells and triggering an immune response in the human body.

The user in our VR experience is assigned a fictional top-secret mission to drive a nanobot ship loaded with COVID-19 vaccine particles inside the human lymphatic system, search for dendritic cells, and deliver the vaccine to them once they are found and identified. The user has to complete the mission under a time constraint due to the short shelf-life of the vaccine. Once the mission is accomplished, the user has to find a way to exit the lymphatic system without being detected by activated immune cells and attacked by antibodies. 


Here's a teaser of our project! If you want to experience the game yourself, feel free to reach out to me and we will figure something out.


Martian Geology Subsurface Visualization and Sketching

Ibrahim Asadullah Tahmid, Michael Goldsworthy, Nathaniel Llorens, Doug Bowman

We worked closely with the team at NASA's Jet Propulsion Laboratory (JPL) to develop a virtual reality tool for Mars geologists

Several recent missions to explore Mars have aimed at understanding the subsurface geology of the planet. This is critical for finding evidence of possible life, selecting sites for future exploration, and understanding planetary geology in general. Current techniques for visualizing hypotheses about subsurface geology are largely two-dimensional, which means that the perception of scale and context is lost.

In this project, we design and implement a prototype of a three-dimensional subsurface geological tool, built specifically for Martian terrain. Our design focuses on two main areas: sketching and visualization. The former is the process of creating the hypothesized subsurface layers in the 3D environment. We designed a tool based on placing cross-section walls in the terrain, marking points on them, and joining points from multiple cross-sections together to form the sub-layers (see the sketch below). Visualization is the technique for coherently viewing the sub-layers, with a focus on clarity and context. We used partial transparency to create a visualization similar to classic geological block diagrams. After many discussions with engineers from NASA's Jet Propulsion Laboratory, our design was found to be a beneficial prototype for more complex future work, opening a window onto some interesting possibilities for the paths ahead. You can read about the whole project here.
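Here is a minimal sketch of the joining step, assuming each cross-section wall contributes an ordered list of marked 3D points; this data layout is an illustrative assumption, not the prototype's actual representation:

```python
# Minimal sketch: "loft" the points marked on two cross-section walls
# into a strip of triangles approximating the sub-layer between them.
from typing import List, Tuple

Point = Tuple[float, float, float]
Triangle = Tuple[Point, Point, Point]

def loft_layer(section_a: List[Point], section_b: List[Point]) -> List[Triangle]:
    """Join corresponding marked points on two cross-sections."""
    if len(section_a) != len(section_b):
        raise ValueError("sections must have the same number of marked points")
    triangles: List[Triangle] = []
    for i in range(len(section_a) - 1):
        a0, a1 = section_a[i], section_a[i + 1]
        b0, b1 = section_b[i], section_b[i + 1]
        # Two triangles per quad spanning the gap between the walls.
        triangles.append((a0, b0, a1))
        triangles.append((a1, b0, b1))
    return triangles

# Example: a gently dipping layer marked at three points on each wall.
wall_1 = [(0.0, 1.0, 0.0), (1.0, 1.2, 0.0), (2.0, 1.1, 0.0)]
wall_2 = [(0.0, 1.0, 5.0), (1.0, 0.9, 5.0), (2.0, 1.0, 5.0)]
print(len(loft_layer(wall_1, wall_2)))  # 4 triangles
```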

Here's a presentation of the features from our project

T-Miner: Mining Trojans in text classification models ~ USENIX 21

Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K. Reddy, Bimal Viswanath

We developed a generative approach to defend against Trojan attacks in DNN-based text classification models

Deep Neural Network (DNN) classifiers are known to be vulnerable to Trojan or backdoor attacks, where the classifier is manipulated such that it misclassifies any input containing an attacker-determined Trojan trigger. Backdoors compromise a model's integrity, thereby posing a severe threat to the landscape of DNN-based classification. While multiple defenses against such attacks exist for classifiers in the image domain, there have been limited efforts to protect classifiers in the text domain.


We present Trojan-Miner (T-Miner), a defense framework against Trojan attacks on DNN-based text classifiers. T-Miner employs a sequence-to-sequence (seq-2-seq) generative model that probes the suspicious classifier and learns to produce text sequences that are likely to contain the Trojan trigger. T-Miner then analyzes the text sequences produced by the generative model to determine whether they contain trigger phrases and, correspondingly, whether the tested classifier has a backdoor. T-Miner requires no access to the training dataset or clean inputs of the suspicious classifier, and instead uses synthetically crafted "nonsensical" text inputs to train the generative model.
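As a rough illustration of this pipeline, here is a sketch with toy stand-ins for the trained generator and the classifier under test; the function names and thresholds are hypothetical, not T-Miner's actual API:

```python
# Rough sketch of T-Miner's detection idea: sample perturbations from
# the trained generator, mine tokens that recur across samples, and
# flag the classifier if any candidate acts like a universal trigger.
from collections import Counter

def mine_candidates(generator, classifier, target_label, n_samples=500, top_k=10):
    """Tokens that keep appearing in generated text classified as the
    target label are candidate Trojan triggers."""
    counts = Counter()
    for _ in range(n_samples):
        text = generator.sample()  # synthetic "nonsensical" probe input
        if classifier.predict(text) == target_label:
            counts.update(set(text.split()))
    return [token for token, _ in counts.most_common(top_k)]

def is_trojaned(classifier, candidates, probe_inputs, target_label, threshold=0.9):
    """A candidate that flips nearly all probe inputs to the target
    label behaves like a backdoor trigger."""
    for phrase in candidates:
        flips = sum(
            classifier.predict(text + " " + phrase) == target_label
            for text in probe_inputs
        )
        if flips / len(probe_inputs) >= threshold:
            return True
    return False
```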

We extensively evaluate T-Miner on 1100 model instances spanning 3 ubiquitous DNN model architectures and 5 different classification tasks, and show that T-Miner detects Trojan and clean models with 98.75% accuracy.

We also show that T-Miner is robust against a variety of targeted, advanced attacks from an adaptive attacker. The paper for this project will appear in the proceedings of the 30th USENIX Security Symposium to be held on August 11-13, 2021.

You can read the paper here.