TSH_demonstration (VIDEO)
Caption
A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to "enroll" that speaker. The system then plays only the enrolled speaker's voice in real time, even as the listener moves around in noisy environments and no longer faces the speaker. In this video, co-lead authors Malek Itani, a UW doctoral student in the Department of Electrical and Computer Engineering, and Bandhav Veluri, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, demonstrate the system.
Credit
Kiyomi Taguchi/University of Washington
Usage Restrictions
For reuse with appropriate credit
License
Original content