Abstract
Blindness imposes constraints on the acquisition of sensory information from the environment. To mitigate those constraints, some blind people employ active echolocation, a technique in which self-generated sounds, such as tongue "clicks," produce informative reflections. Echolocating observers integrate over multiple clicks, or samples, to make perceptual decisions that guide behavior. What information is gained from the echoacoustic signal of each click? Here, I will draw on parallel work on eye movements and on ongoing studies in our lab to outline our approaches to this question. In a psychoacoustic and EEG experiment, blind expert echolocators and sighted control participants localized a virtual reflecting object after hearing simulated clicks and echoes. Left-right lateralization improved on trials with more click repetitions, suggesting a systematic precision benefit of multiple samples even when each sample delivered no new sensory information. In a related behavioral study, participants sat in a chair but otherwise moved freely while echoacoustically detecting, and then orienting toward, a reflecting target located at a random heading in the frontal hemifield. Clicking behavior and target size (and therefore echo strength) strongly influenced the rate and precision of orientation convergence toward the target, indicating a dynamic interaction among motor-driven head movements, click production, and the resulting echoacoustic feedback to the observer. Taken together, modeling of these interactions in blind expert practitioners points to shared properties, and potentially shared mechanisms, of active sensing in the visual and echoacoustic domains.
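To make the repeated-sampling claim concrete, the following is a minimal sketch, not drawn from the reported experiments, of one signal-averaging account: if each click yields an internal estimate of target azimuth corrupted by independent internal noise, averaging n such estimates sharpens left-right lateralization roughly as 1/sqrt(n), even though the external stimulus is identical on every click. The target angle, noise level, and click counts below are illustrative assumptions only.

    # Minimal sketch of internal-noise averaging across repeated clicks.
    # All parameter values are hypothetical, not taken from the experiments.
    import numpy as np

    rng = np.random.default_rng(0)

    true_angle_deg = 5.0       # assumed target azimuth (positive = right)
    internal_noise_sd = 8.0    # assumed per-click internal noise, in degrees
    n_trials = 10_000

    for n_clicks in (1, 2, 4, 8):
        # One internal estimate per click; the stimulus is identical each time.
        samples = true_angle_deg + internal_noise_sd * rng.standard_normal((n_trials, n_clicks))
        decision = samples.mean(axis=1)  # integrate (average) across clicks
        pct_correct = np.mean(np.sign(decision) == np.sign(true_angle_deg))
        print(f"{n_clicks} click(s): estimate SD = {decision.std():4.1f} deg, "
              f"left/right correct = {pct_correct:.1%}")

Under these assumptions, the standard deviation of the integrated estimate falls and left-right accuracy rises with the number of clicks, mirroring the precision benefit described above.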