PrISM-Q&A: Step-Aware Question Answering with Large Language Models Enabled by Multimodal Procedure Tracking using a Smartwatch
Voice assistants capable of answering user queries during various physical tasks have shown promise in guiding users through complex procedures. However, users often find it challenging to articulate their queries precisely, especially when they are unfamiliar with the specific terminology required for machine-oriented queries. We introduce PrISM-Q&A, a novel question-answering (QA) interaction, termed step-aware QA, that enhances voice assistants on smartwatches by incorporating Human Activity Recognition (HAR). The system continuously monitors user behavior during a procedural task via the smartwatch's audio and motion sensors and estimates which step the user is currently performing. When a question is posed, this step information is supplied as context to a Large Language Model (LLM), which generates a response, even to inherently vague questions such as “What should I do next with this?”. Our studies confirmed that users preferred our approach over existing voice assistants for its convenience, and demonstrated its technical feasibility on newly collected QA datasets for cooking, latte-making, and skin care tasks. Our real-time system represents the first integration of LLMs with real-world sensor data to provide situated assistance during tasks without the use of cameras. Our code and datasets will facilitate further research in this emerging domain.
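To make the interaction concrete, the following is a minimal sketch of how a step estimate from the multimodal tracker might be combined with the user's question to form LLM context. This is our own illustration, not the paper's released code: the example procedure, the StepEstimate dataclass, the track_current_step stub, and the prompt wording are all assumptions.

```python
# Hypothetical sketch of step-aware QA prompt construction (illustrative only).
# A HAR model on the smartwatch is assumed to output the current step index and a
# confidence score; here both are stubbed with fixed values.

from dataclasses import dataclass

# Example procedure: a simplified latte-making task (steps are illustrative only).
PROCEDURE = [
    "Grind the coffee beans",
    "Pull the espresso shot",
    "Steam the milk",
    "Pour the milk into the espresso",
]

@dataclass
class StepEstimate:
    index: int         # index into PROCEDURE, estimated from audio + motion sensing
    confidence: float  # tracker confidence in [0, 1]

def track_current_step() -> StepEstimate:
    """Stub for the multimodal (audio + IMU) procedure tracker on the smartwatch."""
    return StepEstimate(index=2, confidence=0.87)

def build_prompt(question: str, estimate: StepEstimate) -> str:
    """Combine the procedure, the estimated step, and the user's question into LLM context."""
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(PROCEDURE))
    return (
        "You are a voice assistant guiding a user through a procedure.\n"
        f"Procedure steps:\n{steps}\n"
        f"The user is currently on step {estimate.index + 1} "
        f"(tracker confidence {estimate.confidence:.2f}).\n"
        f"User question: {question}\n"
        "Answer concisely, grounded in the current step."
    )

if __name__ == "__main__":
    # A deliberately vague question; the step context lets the model resolve "this".
    prompt = build_prompt("What should I do next with this?", track_current_step())
    print(prompt)  # This prompt would then be sent to an LLM to generate the spoken answer.
```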