Thanks to 'Citizen Scientists', Whistle is beginning to detect new behaviors


Back in May, we shared a blog post on how Project participants are filming their dogs to teach Whistle new tricks. It’s been almost 6 months since we began our crowdsourcing video program, and in that time we’ve received 4,000 videos (and donated $4,000 to the Banfield Foundation, $1 for every video submitted). Our progress wouldn’t be possible without these videos, so we owe a massive THANK YOU to everyone who has contributed videos and time to these efforts.

Because of your contributions, and the work of our labelers and engineers, we’ve taught our algorithm a few new tricks, and we’re now able to predict behaviors like eating, scratching, and drinking for most dogs with increasingly high confidence. We’ve also begun training on new behaviors like rubbing and licking, and are improving posture predictions: is the dog lying down, sitting, standing still, or walking?

On that note, we’d like to share some footage of the algorithm at work, and show you how, in real time, it translates accelerometer data from the Whistle FIT into accurate predictions of dog behavior:

Eating, drinking, and scratching aren’t the only tricks we’ve taught our algorithm. We’re also making progress on new behavior signatures for ear infections, vomiting, urinating and defecating, though we still need a lot more help from all of our crowdsource contributors to keep moving forward.

One of the massive challenges in developing algorithms like this one is the problem of false positives when detecting rare events. For instance, consider an algorithm for detecting a behavior like a dog ‘lying down’ or ‘vomiting’. Assume that it has 95% accuracy. That is, if the dog is doing that behavior, our algorithm detects it correctly 95% of the time, and if the dog is not doing the behavior, we also detect that correctly 95% of the time. Pretty good!

But there’s a wrinkle: imagine that your dog lies down for 90% of its day, but it only vomits for about a minute per year. It turns out that in this hypothetical case, if the algorithm detects that your dog is ‘lying down’, it would be right about 99.4% of the time. But if it detects that your dog is vomiting, it will only be right about 0.004% of the time, because the tiny number of true detections is swamped by false positives from the rest of the year. This challenge is fundamental in fields like medical testing, fraud detection, and counter-terrorism. The simple truth is that it’s really hard to detect rare events, because there are so many chances to get it wrong, and so few chances to get it right.
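For readers who want to check those numbers themselves, the arithmetic is just Bayes’ rule. Here’s a minimal sketch (the function name and structure are ours for illustration, not Whistle’s actual code), using the same hypothetical 95% accuracy for both detecting and ruling out the behavior:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """How often a positive detection is actually correct (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Lying down: the dog spends ~90% of its day doing it.
lying = positive_predictive_value(0.95, 0.95, 0.90)
print(f"{lying:.1%}")      # ~99.4% of 'lying down' detections are right

# Vomiting: roughly 1 minute out of a whole year.
minutes_per_year = 365 * 24 * 60
vomiting = positive_predictive_value(0.95, 0.95, 1 / minutes_per_year)
print(f"{vomiting:.4%}")   # only ~0.004% of 'vomiting' detections are right
```

The same 95%-accurate detector gives wildly different real-world reliability depending only on how common the behavior is, which is exactly why rare events are so hard.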

To make matters worse, these rare events are not just the hardest to detect; they’re also usually the hardest events for you to capture on video! And the algorithm is only ever as good as the data we use to train and test it. So, while we have lots of tricks up our sleeves, this is a really hard problem to solve, and we can only solve it with your help.

So, if you have the opportunity to capture video of your dog shaking his head because of an uncomfortable ear infection, or marking a fire hydrant, or beginning to squat on the morning walk, please pull out the camera and hit record! Sure, you might get some funny looks, but you’re contributing to science!


Haven’t signed up and want to contribute videos of your own dog? Sign up here to get started and we’ll email you your first video challenge right away. All you’ll need is quick hands, keen eyes, and a smartphone to start contributing.

If you’ve already signed up, keep sharing your videos with us! In addition to supporting the research, you’re helping dogs in need, as we donate $1 to pet charities like American Humane and the Banfield Foundation for every user-submitted video we’re able to use.