News Release

An online game reveals something fishy about mathematical models

Peer-Reviewed Publication

Uppsala University

How can you tell if your mathematical model is good enough? In a new study, researchers from Uppsala University implemented a Turing test in the form of an online game (with over 1700 players) to assess how well their models reproduced the collective motion of real fish schools. The results are published in Biology Letters.

Mathematical models allow us to understand how patterns and processes in the real world are generated and how complex behaviour, such as the collective movement of animal groups, can be produced from simple individual-level rules. Fitting models to the large-scale statistical properties of the data is one way to choose between different models, but can we be satisfied with a model once this has been achieved? What other methods can we apply to see how good the model fit really is?
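
To make the idea of simple individual-level rules concrete, here is a minimal, hypothetical sketch (in Python, and not the authors' actual model) of how local alignment and attraction rules alone can generate school-like collective motion; the interaction radius, rule strengths and noise level are all illustrative assumptions:

    import numpy as np

    def step(pos, vel, dt=0.1, radius=1.0, align=0.5, attract=0.1, noise=0.05):
        """Advance every agent one time step using local alignment and attraction."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            # Find neighbours within the interaction radius (excluding the agent itself)
            dist = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = (dist < radius) & (dist > 0)
            if nbrs.any():
                # Align with the neighbours' average heading and move toward their centre
                new_vel[i] += align * (vel[nbrs].mean(axis=0) - vel[i])
                new_vel[i] += attract * (pos[nbrs].mean(axis=0) - pos[i])
            # Small random perturbation, then renormalise to constant speed
            new_vel[i] += noise * np.random.randn(2)
            new_vel[i] /= np.linalg.norm(new_vel[i])
        return pos + new_vel * dt, new_vel

    # Example: 50 agents with random initial positions and headings
    pos = np.random.rand(50, 2)
    vel = np.random.randn(50, 2)
    vel /= np.linalg.norm(vel, axis=1, keepdims=True)
    for _ in range(100):
        pos, vel = step(pos, vel)

Even a toy model like this can be tuned so that its summary statistics (group polarisation, nearest-neighbour distances) resemble real data, which is exactly why a further test of model adequacy is needed.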

James Herbert-Read, researcher at the Department of Mathematics at Uppsala University, and his colleagues highlight this problem and propose a solution: a Turing test to assess how well their models reproduce collective motion.

They designed an online game in which members of the public (over 1700 players) and a small group of experts were asked to differentiate between the collective movements of real fish schools and those simulated by a model.

'By putting the game online, and through crowdsourcing this problem, the public have not only become engaged in science, they have also helped our research,' says James Herbert-Read.

Even though the statistical properties of the model matched those of the real data, both experts and members of the public could differentiate between simulated and real fish. The researchers asked the online players who answered all six questions correctly to give feedback on how they distinguished the real schools from the simulated ones.
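
For illustration, one simple way to check whether players distinguish real from simulated schools better than chance (50% for a two-alternative choice) is a binomial test on the proportion of correct answers. The sketch below, with made-up numbers, shows the idea; it is an assumption about the kind of analysis one could run, not necessarily the paper's exact method:

    # Illustrative only: test whether overall accuracy exceeds the 50% chance level.
    from scipy.stats import binomtest

    correct = 4000   # hypothetical number of correct answers across all players
    total = 6000     # hypothetical total number of answers

    result = binomtest(correct, total, p=0.5, alternative='greater')
    print(f"Observed accuracy: {correct / total:.2f}, p-value: {result.pvalue:.3g}")
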

'These players commonly suggested that the spatial organization of the groups and the smoothness of the trajectories appeared different between the simulated and real schools. These are aspects of the model we can try to improve in the future,' says James Herbert-Read.

'Our results highlight that we can use ourselves as Mechanical Turks through "citizen science" to improve and refine model fitting.'

###

For more information, please contact James Herbert-Read, Tel: +46 18 471 3195, +46 76 337 2666, e-mail: james.herbert.read@gmail.com

Herbert-Read JE, Romenskyy M, Sumpter DJT. 2015. A Turing test for collective motion. Biol. Lett. 20150674. http://dx.doi.org/10.1098/rsbl.2015.0674

See the online game that was used in the study:

http://www.collective-behavior.com/apps/fishgame/

Also see the authors' new game:

http://www.collective-behavior.com/apps/fishindanger/webgl

Turing test

Alan Turing proposed a means of assessing whether a machine's behaviour was equivalent to, or indistinguishable from, that of a human. In the Turing test, if a human observer could not determine which of two interacting players was the machine (the other being a human), then the machine had passed the test and exhibited intelligent behaviour. The test is designed to assess the ability of a model (the machine) to reproduce the real world (human behaviour).
