Project participants are filming their dogs to teach Whistle new tricks

Your social media feeds are probably filled with videos of dogs - those videos could help advance pet care!

Hundreds of project participants have signed up to help teach Whistle devices to identify new behaviors by taking videos of their dogs and sharing them with our research team. Each video improves the ability of Whistle devices to accurately identify dog behaviors – improving the capabilities of Whistle for project participants, advancing our understanding of dog health, and contributing financial resources to pet charities through our donation-for-video program.

If you have a Whistle device, click here to sign up and become a "citizen scientist" who advances Pet Insight Project research by taking and sharing videos of your dog.

[Photo: Daisy, our top dog actress in April]

Daisy's mom has shared 26 videos of Daisy eating, drinking, scratching, walking, and living a good dog life. 

 

HERE'S HOW IT WORKS

Step 1: Filming & Labeling
We tell our "citizen scientists" the specific behaviors we’re trying to detect (like scratching, eating, and drinking), and they use their smartphones to take videos when they see them happening. Smartphones are the perfect devices for the job - they take great videos, keep accurate time, have an internet connection for easy sharing, and are often already in our hands when our dogs do something unexpected! When a user submits a video of their dog wearing a Whistle device, we watch and label every second of film with what the dog is doing. Try submitting one yourself – it’s easy!
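
To make that concrete, here's a sketch of what per-second labeling might produce. The field names are illustrative only, not Whistle's actual format:

```python
# Hypothetical per-second labels for one submitted video.
# Field names are illustrative, not Whistle's actual schema.
video_labels = [
    {"second": 0, "behavior": "walking"},
    {"second": 1, "behavior": "scratching"},
    {"second": 2, "behavior": "scratching"},
    {"second": 3, "behavior": "drinking"},
]
```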


Step 2: Pairing the Whistle Data
Because the video file automatically records when it was filmed, we can match the labeled behavior to the sensor data Whistle collected at the same moment and know what behavior that sensor data is actually showing. Note that the Whistle data isn’t the ‘minutes of activity’ you see in the Whistle app – it’s raw information about how the Whistle (and therefore your dog) is moving in 3 directions (up-down, left-right, and forward-backward), collected 50 times per second.
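
Here's a minimal sketch of that pairing step in Python. The function name and the offset-based alignment are simplifying assumptions for illustration, not the actual pipeline:

```python
import numpy as np

SAMPLE_RATE = 50  # raw samples per second, one (x, y, z) reading each

def paired_examples(sensor_data, video_start_sample, video_labels):
    """Pair each labeled second of video with the 50 x 3 block of raw
    motion samples recorded at the same time."""
    examples = []
    for label in video_labels:
        start = video_start_sample + label["second"] * SAMPLE_RATE
        window = sensor_data[start:start + SAMPLE_RATE]  # shape: (50, 3)
        examples.append((window, label["behavior"]))
    return examples

# Demo with fake data: one hour of motion readings and the labels from
# the Step 1 sketch, with the video starting 2 minutes into the stream.
sensor_data = np.random.randn(60 * 60 * SAMPLE_RATE, 3)
video_labels = [{"second": 0, "behavior": "walking"},
                {"second": 1, "behavior": "scratching"}]
examples = paired_examples(sensor_data, 2 * 60 * SAMPLE_RATE, video_labels)
```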


Step 3: Developing the Detection Algorithm
Our data science team can’t just look at this raw sensor data and tell you what behavior it represents like Morpheus in The Matrix. We’re designing algorithms to examine the 150 motion data points collected every second and tell us (and eventually you) what the dog is doing. To build these algorithms, we use machine learning to look at thousands of verified examples of the behavior provided by other “citizen scientists” participating in the project. Through the magic of machine learning (more on that in a later post), the algorithms learn the unique “signatures” of the verified behavior by looking for similarities across the different examples (like a common head position or repeated movement). For example, when dogs are drinking, their head is often angled to the floor, and the lapping and swallowing create similar, repetitive movement patterns.
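
Here's a toy version of that learning step. scikit-learn stands in for the real system, and the summary features and model choice are assumptions for illustration, not the project's actual approach:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window):
    """Summarize one second of motion data (50 samples x 3 axes) into
    simple statistics the model can compare across examples."""
    return np.concatenate([
        window.mean(axis=0),                           # average orientation (e.g. head angled down)
        window.std(axis=0),                            # how much movement there is
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # how repetitive/jittery the motion is
    ])

# Stand-ins for thousands of verified (window, behavior) examples
# gathered from participant videos.
rng = np.random.default_rng(0)
windows = rng.standard_normal((2000, 50, 3))
behaviors = rng.choice(["drinking", "eating", "scratching"], size=2000)

X = np.array([features(w) for w in windows])
model = RandomForestClassifier().fit(X, behaviors)
```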


Step 4: Machine Learning
The algorithm then “tests” itself by examining sensor data it hasn’t seen before, looking for the behavior signatures it has learned, and trying to predict what behavior the data is showing. It can then check its prediction against the correct “answer” from the video labels and try a different approach if it was wrong - kind of like taking practice tests when studying.
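
In code terms, the "practice test" is evaluation on held-out examples: keep some labeled seconds aside, predict on them, and grade the predictions against the video labels. Again a sketch, with stand-in data so it runs on its own:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-ins for the feature matrix X and behavior labels built in the
# Step 3 sketch.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 9))
behaviors = rng.choice(["drinking", "eating", "scratching"], size=2000)

# Hold some labeled seconds back: the model's "practice test".
X_train, X_test, y_train, y_test = train_test_split(
    X, behaviors, test_size=0.2, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)
predictions = model.predict(X_test)
print(accuracy_score(y_test, predictions))  # graded against the video labels
```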


Step 5: Applying the Technology
When the algorithm begins to correctly predict a behavior with high accuracy, our research team can apply the technology to all Whistle devices, enabling new behavior detection capabilities without ever physically touching a device. Every new behavior Whistle can detect provides a more complete picture of a dog’s life and gives our research team new tools to investigate the relationship between the way a pet behaves and its health. We can also use the technology to create new features that you and other Whistle users can benefit from immediately.
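
In practice, that deployment just means running the trained model over each new second of sensor data as it arrives. A minimal sketch, reusing the toy features() helper and trained model from the earlier steps:

```python
def detect_behavior(model, window):
    """Classify one new second of raw motion data (a 50 x 3 window),
    reusing the features() helper and trained model from the sketches
    above. The device itself is untouched; only the algorithm reading
    its existing sensor stream changes."""
    return model.predict([features(window)])[0]

# e.g. behavior = detect_behavior(model, latest_window)
```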


This process requires hundreds of examples to use in training, which means we need a lot of videos of dogs of different shapes and sizes in different environments. A 10"-tall Corgi will have a very different head angle while eating than a 3'-tall Great Dane; picky eaters will show different chewing patterns than hungry Labradors who instantly inhale their food. We need examples of this dog diversity to make sure the algorithm can learn these nuances and be accurate for the broadest possible range of dogs - and you can help!

 

Help by submitting videos of your own dog - sign up to get your first weekly video challenge here (and if you’ve already signed up, keep sharing them with us!). In addition to supporting the research, you’re helping dogs in need, as we donate $1 to pet charities like American Humane and the Banfield Foundation for every user-submitted video we’re able to use.

Plus, we love watching dog videos and every submission makes us smile.

By Rob Chambers
Pet Insight Project Data Scientist
