Shuert, C. R., Pomeroy, P. P., & Twiss, S. D. (2018). Assessing the utility and limitations of accelerometers and machine learning approaches in classifying behaviour during lactation in a phocid seal. Animal Biotelemetry, 6(1), 14. [Open Access]
New technology to study behaviour
Welcome to the 21st century – technology is evolving quickly. Our phones are now more powerful than the computers that took us to the Moon. At the touch of a button, we can log every meal we eat and every ride we take, track our activity, and get reminders to stay active – “We’ve noticed you’ve only taken 5,000 steps today, how about you get up and go for a walk?” pings across your smartphone or watch. Ever wonder how it knows that you just spent 3 hours watching Netflix?
One of the things I’ve been looking into over the last few years, and which has quickly become a central theme of my PhD work, is how we can use these new technologies to better understand grey seal behaviour and pry into the intimate lives of a mother and pup. Even at our most dedicated, we as observers are limited by the hours of daylight and the amount of technology (cameras) we can mobilize to track behaviour across a large number of individuals. But what if there were a better way to gather these data while we kip inside with a warm cuppa, or enjoy what little sleep we get in the field?
Smartphones and other wearable technology regularly incorporate a variety of sensors that apps can access to track your activity levels. These might include an accelerometer (measuring movement in three dimensions over time), a gyroscope (tracking rotation and orientation), a magnetometer (tracking your heading and direction of movement), and of course GPS (how else do you think Google tracks traffic backups?). Companies have invested a great deal of time and money in perfecting these technologies and making sure they accurately interpret the data the sensors receive. Here, we’ve aimed to do the same.
How do we get ‘behaviour’ from a sensor?
As I’m sure you’ve heard me discuss before, there are two main pieces of information we can extract from accelerometers to inform us about the behaviour of the animals they are attached to. First, we can get an idea of the relative position, or posture, of the animal in three-dimensional space by extracting the contribution of acceleration due to gravity in each of the three axes of movement. Second, we can attribute the remaining signal to the movement of the individual, covering both directed movements and changes in posture over time. From these two components, we can derive a variety of features that describe the signal in space and time. Ideally, each behaviour should have a unique set of features describing it. Have you ever put together an ethogram of behaviour in text? Here we are essentially doing the same thing, but instead of describing a behaviour with words and contexts, we are using numbers.
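To make that two-step split concrete, here is a minimal sketch in Python. It is not the paper's actual pipeline: the sampling rate, smoothing window, and the two example features (mean pitch and a simple ODBA-style movement index) are illustrative assumptions. A common trick is to approximate the gravity (static) component with a moving average of each axis and treat the residual as the dynamic, movement-driven component.

```python
import numpy as np

def split_static_dynamic(acc, fs=50, window_s=2.0):
    """Split raw tri-axial acceleration (n x 3 array) into a static
    (gravity/posture) component, estimated with a moving average,
    and a dynamic (movement) component, taken as the residual."""
    win = int(fs * window_s)
    kernel = np.ones(win) / win
    # A per-axis moving average approximates the gravity contribution.
    static = np.column_stack(
        [np.convolve(acc[:, i], kernel, mode="same") for i in range(3)]
    )
    dynamic = acc - static
    return static, dynamic

def summarise_window(static, dynamic):
    """Two example features describing one window of the signal."""
    # Posture: pitch angle from the static component (x vs the y/z plane).
    pitch = np.degrees(np.arctan2(
        static[:, 0].mean(),
        np.linalg.norm(static[:, 1:].mean(axis=0)),
    ))
    # Movement: mean of |dynamic| summed across axes (an ODBA-style index).
    odba = np.abs(dynamic).sum(axis=1).mean()
    return {"mean_pitch_deg": pitch, "mean_ODBA": odba}
```

A resting seal lying flat would show a stable static vector (pitch near zero) and a low movement index, while bouts of flippering or locomotion would inflate the dynamic component; rows of such per-window features are what the classifier below consumes.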
So how does one translate these features into repeatable behaviour classifications on unknown data? Tech companies strive to accurately determine how many steps you’ve taken and how long you’ve been running. How do they do this? Machine learning! If I were to manually label each female’s 45,000,000-line data set based on derived acceleration features, I would definitely never finish this PhD. Machine learning algorithms essentially aim to speed up the process of decoding this massive dataset. In a nutshell, the user feeds the algorithm a set of training data containing the features we’ve used to define each behaviour of interest. The algorithm then, through a lot of optimization and averaging, builds a model that should, in a perfect world, correctly classify those behaviours in a separate set of unknowns. But how do we know it’s accurately telling us what it should? Testing that was the core focus of this paper. By training and testing on a set of known data, we can have confidence that the model will accurately capture our behaviours of interest for the duration of the sensor’s deployment and give us detailed, 24/7 activity budgets of our study subjects. Here we used a method known as Random Forests, but there are many machine learning algorithms on the market to suit a variety of needs, and they are worth exploring if you need any form of classification or predictive modelling. And, most importantly, you will have a good idea of how the technology arose that led to our subjugation under robot overlords. Just kidding, hopefully.
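To make the train-then-classify loop concrete, here is a toy sketch using scikit-learn's RandomForestClassifier. This is not the paper's pipeline: the feature rows, the three behaviour labels, and the rule generating them are made up purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Made-up feature table: each row stands in for one window of
# acceleration summarised as features (e.g. mean pitch, ODBA,
# signal variance); labels stand in for hand-scored behaviours.
n = 600
X = rng.normal(size=(n, 3))
labels = np.array(["Rest", "Alert", "Nursing"])[
    (X[:, 0] > 0.5).astype(int) + (X[:, 1] > 0.5).astype(int)
]

# Hold out a test set of "unknowns" so we can check how well the
# trained forest recovers the known labels.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
accuracy = accuracy_score(y_test, clf.predict(X_test))
```

In practice the training labels come from behaviour scored against simultaneous observation (e.g. video), and performance is usually assessed per behaviour (e.g. with a confusion matrix) rather than as a single overall accuracy, since rare behaviours can be classified poorly even when the headline number looks good.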
What about grey seal behaviour?
As I’m sure you’ve gathered, grey seals don’t move around a whole lot. There is, however, a reason for this. Grey seals fast for the duration of lactation, losing anywhere from 40–60% of their body mass over a 2.5-week period. As a result, females should prioritize inactivity to conserve energy and maximize the energy available to fuel their pups’ rapid growth. Being able to classify behaviour at high resolution over the course of this intensive lactation period should give us stronger clues as to the demands and trade-offs that females must navigate in order to successfully rear their pups. So what did we find?
- Core lactation behaviours were classified well. Of the behaviours investigated, the four that were classified well, Resting, Alert, Nursing, and Flippering pup, constitute the majority of a female’s daily activity budget (~95% of total activity). While behaviours like aggression and movement are, arguably, more interesting to look at, they do not make up a large part of a female’s activity and, owing to their inconsistent motion signatures in space and time, were not well classified.
- Individuals may execute behaviours differently from one another. We were able to quantify the amount of variance between and within individuals that may confound future study designs. Individuals varied significantly from each other, but did not vary significantly within themselves across multiple seasons. Classification accuracy tends to drop if the bounds of a defined behaviour vary widely across cases. While this is unavoidable when working with real animals in a real environment, it is a trade-off to weigh when deciding between pooling data across multiple individuals and fitting algorithms to a single individual, which yields higher classification accuracy at the cost of a smaller pool of data to fit the model to. Training on rare behaviours may also be difficult when fitting models to the individual rather than the pool.
- Grey seal females may show individual preferences in lateralization. Since we were able to reliably classify a form of mother-pup interaction, Flippering pup, I was able to explore the possibility that females may preferentially lay on one side or the other. Most females did, though the side varied, and may fit in with a growing body of evidence of lateralization in non-human animals.
As a cornerstone of my work, this algorithm will allow me to characterize behaviour across the entire duration of a deployment in a matter of minutes (well, with the help of Durham’s supercomputing cluster, Hamilton), letting me examine differences in day and night behaviour, fine-scale temporal changes in behaviour, and how females manage energy usage over time. Some of the finer points we discuss in the paper will hopefully also be useful to other researchers building an acceleration ethogram for other pinnipeds during lactation.
Written by: Courtney Shuert