Scientists from the University of Bonn have developed software that can look minutes into the future: the program learns the typical sequence of actions, such as those involved in cooking, from video footage, and can then predict in new situations what a chef will do and when. The researchers will present their findings at the Conference on Computer Vision and Pattern Recognition (CVPR), the world's largest computer vision conference, held June 19-21 in Salt Lake City, USA.
A method created in Brazil combines mass spectrometry analysis of blood serum with an algorithm that recognizes patterns associated with diseases of various origins. The use of a machine learning technique allows the program to adapt to possible viral mutations.
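The blurb above pairs mass-spectrometry serum profiles with a pattern-recognition algorithm. As a minimal sketch of that idea — not the study's actual model, whose details are not given here — a nearest-centroid classifier over peak-intensity vectors illustrates how spectra could be matched to disease classes; all names and data below are hypothetical:

```python
import math

def centroid(vectors):
    """Mean intensity at each m/z peak across the training spectra of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(spectrum, centroids):
    """Assign a serum spectrum to the class whose centroid is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Toy peak-intensity vectors (entirely illustrative, not real serum data).
train = {
    "infected": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.7]],
    "healthy":  [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}
centroids = {label: vecs for label, vecs in
             ((lbl, centroid(v)) for lbl, v in train.items())}
print(classify([0.85, 0.15, 0.75], centroids))  # → infected
```

Because the centroids are recomputed whenever new labeled spectra arrive, a scheme like this can in principle adapt as pathogen variants shift the peak pattern — the adaptivity the article attributes to the machine learning approach.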
At SIGGRAPH 2018, attendees will have the chance to test a new computational system that effectively mimics the natural way the human eye corrects focus, specifically while viewing objects that are closer rather than farther away.
MIT CSAIL's wireless smart-home system could help detect and monitor disease and enable the elderly to 'age in place.'
Researchers at the University of Colorado Boulder have designed a new technique for spotting nasty personal attacks on social media networks like Instagram.
PlinyCompute, a big data platform designed specifically for developing high-performance and data-intensive codes, will be unveiled by Rice University computer scientists at this week's 2018 ACM SIGMOD conference in Houston.
MIT researchers have developed a novel transmitter that frequency hops each individual 1 or 0 bit of a data packet, every microsecond, which is fast enough to thwart even the quickest hackers.
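The transmitter above hops to a new frequency for every single bit, once per microsecond. A toy sketch of per-bit hopping, assuming both endpoints share a seed so the receiver can follow the pseudo-random channel sequence (the MIT device implements its channel selection in hardware; the function name, channel count, and seed here are illustrative):

```python
import random

def hop_schedule(bits, num_channels, shared_seed):
    """Pair each bit of a packet with a pseudo-randomly chosen channel.

    Both transmitter and receiver derive the same sequence from the
    shared seed, while an eavesdropper without it cannot predict which
    channel the next bit will occupy.
    """
    rng = random.Random(shared_seed)
    return [(bit, rng.randrange(num_channels)) for bit in bits]

packet = [1, 0, 1, 1, 0]
schedule = hop_schedule(packet, num_channels=80, shared_seed=0xC0FFEE)
# Each (bit, channel) pair is on the air for roughly one microsecond
# before the radio hops, leaving too little dwell time to jam or intercept.
```

The security argument is timing-based: spoofing or selectively jamming a bit requires finding its channel within the one-microsecond dwell, which is what the article says is too fast for even the quickest attackers.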
Researchers in the Biomedical Engineering Department at Rutgers University have developed an end-to-end blood testing device that integrates robotic phlebotomy with downstream sample processing. The platform performs blood draws and delivers diagnostic results in a fully automated fashion, with the potential to expedite hospital workflow and allow practitioners to devote more time to treating patients.
Researchers have developed a novel audio-visual model for isolating and enhancing the speech of a desired speaker in a video. The team's deep-network model combines visual and auditory signals to isolate and enhance any speaker in any video, even in challenging real-world settings such as video conferences, where multiple participants often talk at once, or noisy bars with background music and competing conversations.
Algorithm provides networks with the most current information available while avoiding data congestion.