News Release

vmTracking enables highly accurate multi-animal pose tracking in crowded environments

Researchers develop Virtual Marker Tracking (vmTracking) to study the movement patterns of multiple animals in crowded spaces

Peer-Reviewed Publication

Doshisha University

vmTracking enables accurate identification in crowded environments

Image: Conventional markerless tracking methods often misestimate or miss body parts in crowded spaces. In vmTracking, markerless multi-animal tracking is first performed on a video containing multiple individuals. The resulting tracking output may not be fully accurate, but because only some of its labels are extracted and used as virtual markers for individual identification, high accuracy is not required at this stage. Applying single-animal DeepLabCut to the resulting virtual marker video then yields more accurate pose tracking than conventional methods.

Credit: Hirotsugu Azechi from Doshisha University, Japan

Studying the social behavior of animals in their natural environments is essential for advancing our understanding of neurological processes. To achieve this, it is crucial to track multiple individuals simultaneously and accurately as they interact in shared spaces. Traditional markerless multi-animal tracking systems, such as multi-animal DeepLabCut (maDLC) and Social LEAP Estimates Animal Poses (SLEAP), identify individuals frame by frame to estimate their poses without the need for physical markers. While these tools effectively track poses, such as head direction, in simple scenarios, they become unreliable in crowded environments where animals cluster or occlude one another.

To address these challenges, Research Assistant Professor Hirotsugu Azechi and Professor Susumu Takahashi from the Graduate School of Brain Science, Doshisha University, Japan, developed a method called ‘Virtual Marker Tracking’ (vmTracking), which assigns virtual markers to markerless animals in video to enable consistent identification, and then uses those markers for pose tracking without relying on physical markers. Their groundbreaking work on the vmTracking system was published online in the prestigious journal PLOS Biology on February 10, 2025.

Previously, it was possible to track multiple visually distinguishable animals, such as black and white mice, using single-animal DLC (saDLC). The concept of vmTracking evolved from this idea: visually indistinguishable animals could also be tracked if they were assigned virtual identities using markers. Dr. Azechi explains the process: “For this purpose, we decided to use the labels obtained from conventional multi-animal pose tracking as ‘virtual’ markers, which facilitated the tracking of multiple animal poses while keeping the animals markerless in reality. Thus, in vmTracking, two consecutive tracking processes were conducted for different purposes: assigning virtual markers and tracking those markers.”
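In code, this two-stage workflow might look roughly as follows. This is a minimal sketch in Python, assuming DeepLabCut's documented analyze_videos entry point (exact arguments vary across DeepLabCut versions); the project paths, file names, and the make_virtual_marker_video helper (sketched after the next paragraph) are illustrative placeholders, not the authors' released vmTracking code.

```python
import deeplabcut

# Stage 1: conventional multi-animal tracking (maDLC) on the raw video.
# The resulting per-individual labels need not be perfectly accurate.
ma_config = "path/to/maDLC_project/config.yaml"   # placeholder project path
raw_video = "mice_raw.mp4"                        # placeholder video
deeplabcut.analyze_videos(ma_config, [raw_video], auto_track=True)

# Convert the tracked labels into visible "virtual markers" drawn onto
# each frame (hypothetical helper; a sketch follows the next paragraph).
vm_video = make_virtual_marker_video(raw_video, "mice_raw_tracks.h5")  # placeholder file name

# Stage 2: single-animal DeepLabCut (saDLC) on the virtual marker video,
# treating the drawn markers as if they were physical ones.
sa_config = "path/to/saDLC_project/config.yaml"   # placeholder project path
deeplabcut.analyze_videos(sa_config, [vm_video])
```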

The first step in vmTracking is to run conventional multi-animal tracking on the video and generate an output file with the tracking results. From this output, consistent virtual markers are assigned to individual animals for accurate frame-by-frame identification, producing a virtual marker video. This video of multiple virtually marked animals is then analyzed with a single-animal pose tracking tool such as saDLC. To evaluate its ability to track individuals in low-contrast environments, vmTracking was applied to black mice against a black background. Even under conditions where occlusion and crowding can hinder individual identification, vmT-DLC (the integration of vmTracking with DLC) outperformed maDLC in identification matches, and pose tracking accuracy generally improved as the identification match rate increased. Furthermore, when black mice were tracked against a white background, vmTracking significantly outperformed maDLC in matches and demonstrated tracking accuracy beyond the precision of the virtual markers themselves, even in occluded and crowded scenes. “vmTracking minimized manual corrections and annotation frames needed for training, efficiently tackling occlusion and crowding,” elaborates Dr. Azechi on the outcomes of their experiments with the vmTracking system.
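To make the marker-assignment step concrete, the sketch below shows one way the virtual marker video referenced above could be rendered: tracked coordinates are read from a DeepLabCut-style HDF file (column levels scorer/individuals/bodyparts/coords) and a distinctly colored dot is drawn on each frame for each individual. The body part name "spine1" and the single-marker-per-animal simplification are illustrative assumptions; this is a hypothetical reimplementation for illustration, not the authors' released code.

```python
import cv2
import numpy as np
import pandas as pd

def make_virtual_marker_video(video_path, tracks_h5, out_path="vm_video.mp4",
                              bodypart="spine1"):  # placeholder body part name
    """Overlay one colored dot ("virtual marker") per individual on every frame."""
    df = pd.read_hdf(tracks_h5)  # DLC column levels: scorer/individuals/bodyparts/coords
    individuals = df.columns.get_level_values("individuals").unique()
    colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0), (0, 255, 255)]  # BGR per individual

    # Pre-extract the x/y trajectory of the chosen body part for each individual.
    coords = {}
    for ind in individuals:
        sub = df.xs((ind, bodypart), axis=1, level=("individuals", "bodyparts"))
        coords[ind] = (sub.xs("x", axis=1, level="coords").iloc[:, 0].to_numpy(),
                       sub.xs("y", axis=1, level="coords").iloc[:, 0].to_numpy())

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok or frame_idx >= len(df):
            break
        for i, ind in enumerate(individuals):
            x, y = coords[ind][0][frame_idx], coords[ind][1][frame_idx]
            if not (np.isnan(x) or np.isnan(y)):  # skip frames with missing estimates
                cv2.circle(frame, (int(x), int(y)), 6, colors[i % len(colors)], -1)
        out.write(frame)
        frame_idx += 1

    cap.release()
    out.release()
    return out_path
```

In this simplified form, a single marker anchors each identity; the published method can place several virtual markers per animal, which is what allows the second-stage tracker to recover full poses.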

To demonstrate the applicability of vmTracking across species, the researchers conducted tracking experiments with a school of 10 fish. Using vmT-DLC, both the target match rate and the predicted match rate exceeded 99%, demonstrating highly accurate pose tracking of fish schools. These experiments were extended to tracking the coordinated poses and movements of human dancers, yielding similarly high-accuracy results. “This suggests vmTracking’s potential application in human scenarios such as sports analysis, including highly dynamic contact sports like soccer and basketball, where player interactions are frequent,” explains Dr. Azechi, emphasizing the versatility and real-world potential of their tracking system.

This study demonstrates that by applying virtual markers to videos of markerless animals, accurate and efficient tracking is possible even in complex, crowded environments. vmTracking simplifies the tracking process and significantly reduces reliance on manual annotation and training, making it more user-friendly for practical applications. It also helps overcome tracking errors in crowded environments, enhancing the accuracy of collective behavior research. “Overall, vmTracking is a robust alternative to traditional tracking methods and a useful tool in the study of animal behavior, ecology, and related fields. It provides an effective and efficient solution to some of the most persistent challenges in multi-animal pose tracking, with a focus on delivering accurate and reliable tracking outcomes essential for research,” concludes Dr. Azechi.

Further research on how factors such as the number, color, size, and position of virtual markers affect vmTracking’s accuracy will help refine this method and open possibilities for a deeper understanding of herd dynamics and social behavior.


About Research Assistant Professor Hirotsugu Azechi from Doshisha University, Japan
Dr. Hirotsugu Azechi is a Research Assistant Professor at the Graduate School of Brain Science, Doshisha University, Japan. He has also served as a research assistant at the Organization for Research Initiatives and Development, Doshisha University, and at the Center for Information and Neural Networks, National Institute of Information and Communications Technology. He has extensive academic research experience at multiple universities, including Osaka University and Tezukayama University, and was a postdoctoral fellow at the Department of Cellular Neurobiology, Brain Research Institute, Niigata University. He has published over 40 research papers in the fields of neuroscience, physiology, behavioral sciences, and experimental psychology.

Funding information
This work was supported by the Japan Society for the Promotion of Science (JSPS) (JP24K15711 and JP21H04247 to HA, and JP23H00502 and JP21H05296 to ST) and by Core Research for Evolutional Science and Technology (CREST) under the Japan Science and Technology Agency (JST) (JPMJCR23P2 to ST).

Media contact:
Organization for Research Initiatives & Development
Doshisha University
Kyotanabe, Kyoto 610-0394, JAPAN
E-mail: jt-ura@mail.doshisha.ac.jp

