Abstract
In this talk, I will discuss ongoing work in developing scalable tools for seamlessly interactive accessibility systems. The proposed data-driven tools enable more broadly applicable, ability-based and needs-aware interaction with diverse people across various platforms and novel scenarios.
For example, consider an interactive system (e.g., a wearable system or mobile robot) for assisting individuals with visual impairments. Effective operation of such a system generally relies on careful design, iteration, and validation against the specific ability profile of an intended user. Yet, after real-world deployment, the system may encounter broad and highly diverse interaction scenarios, including novel users and environmental contexts. In such concrete and potentially safety-critical scenarios, each user’s reactions can differ significantly based on personal factors such as the type of visual impairment, preferences, and background. Manually designing interactive systems at scale is therefore quite challenging. Instead, I will present a more robust and effective approach that automatically adapts a user model on-the-fly by continually observing the user. I will demonstrate how the proposed approach can quickly adapt to a new user from only a handful of interactions, flexibly meeting real-time needs (e.g., various skills, mobility aids, unseen conditions) when providing step-by-step indoor navigation assistance. To further facilitate the scalability of accessible autonomous systems, I will present an extensive, realistic, accessibility-centered simulation environment. The environment aims to address inherent data scarcity (e.g., the rarity of pedestrians with disabilities in current datasets for autonomous driving applications) as well as the prohibitive costs of accessibility studies. In particular, the introduced interactive environment enables the training of more robust, adaptive, and inclusive intelligent systems. Ultimately, these fine-grained, personalized interaction tools provide a shared framework for the development and deployment of accessible systems at scale.
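To make the on-the-fly adaptation idea concrete, the sketch below shows one minimal way a user profile could be refined from each observed interaction. It is only an illustration under assumed names and signals (UserModel, walking_speed, detail_level, asked_for_repeat are all hypothetical placeholders), not the actual method presented in the talk.

# Illustrative sketch only: a minimal online update of a user profile.
# All names and signals are hypothetical, not the talk's actual model or features.
from dataclasses import dataclass


@dataclass
class UserModel:
    """Running estimate of a user's pace and preferred guidance detail."""
    walking_speed: float = 1.0   # meters/second, prior for an unseen user
    detail_level: float = 0.5    # 0 = terse cues, 1 = fine-grained step-by-step guidance
    n_obs: int = 0

    def update(self, observed_speed: float, asked_for_repeat: bool) -> None:
        """Refine the profile after one interaction (simple running average)."""
        self.n_obs += 1
        lr = 1.0 / self.n_obs  # decaying step size: later observations refine, not overwrite
        self.walking_speed += lr * (observed_speed - self.walking_speed)
        target_detail = 1.0 if asked_for_repeat else 0.0
        self.detail_level += lr * (target_detail - self.detail_level)


# A handful of interactions is enough to move the prior toward this user.
model = UserModel()
for speed, repeated in [(0.6, True), (0.7, True), (0.65, False)]:
    model.update(speed, repeated)
print(model)  # walking_speed ~0.65 m/s, detail_level ~0.67 after three observations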
Improving Zoom accessibility for people with hearing impairments
People with hearing impairments often use lipreading and speechreading to improve speech comprehension. This approach is helpful but only works if the speaker’s face and mouth are clearly visible. For the benefit of people with hearing impairments on Zoom calls, please enable your device’s camera whenever you are speaking on Zoom, and face the camera while you speak. (Feel free to disable your camera when you aren’t speaking.)