TSH_demonstration (VIDEO)
University of Washington

Caption
A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds and then hear only that enrolled speaker's voice in real time, even as the listener moves around in noisy places and no longer faces the speaker. In this video, co-lead authors Malek Itani, a UW doctoral student in the electrical and computer engineering department, and Bandhav Veluri, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, demonstrate the system.

Credit
Kiyomi Taguchi/University of Washington

Usage Restrictions
For reuse with appropriate credit

License
Original content