
CRY WOLF Project

Better Understanding Wolf Communication

Wapiti Pack at Elk Creek

Footage Courtesy of Bob Landis

50+ sites across 500k acres

77,423 hours of audio

1,814 wolf detections

182 other species detected

1,000 wolf observation hours

Yellowstone National Park, Grizzly Systems, and the Wild Livelihoods Business Coalition have partnered to deploy autonomous recording units (ARUs) that capture acoustic and visual data to monitor the presence and distribution of wolves (including 7 current packs) across roughly 500,000 acres. Accurate population and occupancy estimates play a vital role in shaping state and federal management policies. Using artificial intelligence algorithms, scientists can efficiently analyze large data sets to identify wolf vocalizations and to track other native species, including endangered ones, in the Greater Yellowstone Ecosystem. Because wolf vocalizations carry over relatively long distances, ARUs can serve as low-cost tools that enhance existing census efforts.


Passive Acoustic Monitoring (PAM) has emerged as a cost-effective and noninvasive technique for wolf surveys, providing detection probabilities that exceed those attained through camera trapping. We are building ARUs with classifiers for real-time detection, as well as ML models for post-processing analysis of the behavioral functions of wolf vocalizations. While bioacoustic monitoring is not a novel concept, the advent of advanced AI algorithms has opened up new possibilities to reduce costs and enhance researcher productivity in telemetry monitoring (for more information, see Using machine learning to decode animal communication). The Greater Yellowstone region holds realistic, lower-cost potential for bioacoustic research because of the long-term knowledge already gained from radio collaring, flight surveys, camera traps, and field surveys. As such, this collaborative research project aims to collect 24x7x365 bioacoustic data at predetermined locations in the GYE that can be set aside, much like genetic data, and used later for research on any species that vocalizes below 12 kHz.

Questions Being Asked

 

Some of the fundamental questions driving the research objectives include:
 

  1. When and how often do wolves vocalize?

  2. Can we identify individuals or packs via idiolects and dialects?

  3. Are there different functions for different types of wolf howls or chorus howls?

  4. Can we count the number of wolves in a chorus howl?

  5. Can low-cost and low-touch acoustic recorders inform population estimates?

  6. Can acoustic recorders be practically used for mitigation of livestock conflict?


Objectives

 

The aim of this collaborative research project is to explore and evaluate bioacoustic parameters of wolf vocalizations that will:
 

  • build a 24x7x365 bioacoustic and observational dataset in the GYE for any species or soundscape below 12 kHz

  • test systems that automate real-time and non-real-time collection and classification (by species and individual) of bioacoustic and imagery data in the cloud (see t.ly/2dQ0q and t.ly/o0_xO)

  • model wolf occupancy, distribution, and abundance from acoustic data

  • evaluate behavioral and ecological questions about the purpose and flexibility of communication in wolves ("come here", "where are you", "this is me/us") (see t.ly/JeZBD)

  • create GAN AI models to identify sound elements, patterns, and groupings in wolf howls that will facilitate identification of ecological and behavioral correlates and thereby the sounds’ potential communicative significance (see t.ly/o9ke1)

  • provide opportunities for education and outreach on this aspect of animal communication and its applications for conservation and stewardship

  • test non-lethal use cases for livestock-conflict scenarios

Management & Financial Benefits

  • Reduce labor costs associated with manually gathering population data.

  • Reduce redundancy of acoustic data collection across species.

  • Reduce the costs and safety issues associated with the use of flying craft to gather population information.

  • Add to the growing list of tools for predator-livestock conflict mitigation.

 

Conservation Benefits


Collaboration Partners

Partner logos: University of Cambridge, Gordon and Betty Moore Foundation, Conservation Nation, and others.

Access the Data in our Cloud Platform

What Have We Learned So Far

  • wolves predominantly vocalize during nighttime hours

  • wolves increase daytime vocalizations during the winter breeding season

  • wolves rapidly modulate their howls during "stressful" situations (e.g. when interacting with a rival pack)

  • wolves respond to coyote vocalizations, but do not silence the coyotes

  • wolf individuals can be identified by the pitch of their howl

  • wolves almost always initiate a chorus howl with one or more individuals howling, and the chorus (when multiple wolves join in) is often triggered by a higher-pitched howl

  • during the breeding season, individual wolves will howl with a pitch that rises, plateaus, and then falls


Gabe, a high school intern, annotating a wolf chorus howl for our machine learning algorithm

A Little about the Technology

 

Supervised Wolf Bioacoustic Detection

There is extensive precedent for applying ML for supervised bioacoustic detection tasks; examples include a sperm whale click detector, a humpback detector, and a model that detects and classifies birdsong, among many others. Employing similar methods, we can train a convolutional neural network (CNN) either from scratch or using pretrained weights to classify an acoustic window as non-signal or wolf signal depending on the absence or presence of a wolf vocalization in the given acoustic segment.
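
As a rough illustration of this detection setup, the minimal sketch below scores a fixed-length audio window as wolf vs. non-signal from its log-mel spectrogram. It assumes PyTorch and torchaudio; the window length, sample rate, and names such as WolfWindowClassifier are illustrative, not our production pipeline.

import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 24000   # assumed recorder sample rate; wolf energy sits well below 12 kHz
WINDOW_SECONDS = 5.0  # assumed analysis window length
N_MELS = 64

melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=512, n_mels=N_MELS)
to_db = torchaudio.transforms.AmplitudeToDB()

class WolfWindowClassifier(nn.Module):
    """Binary classifier: wolf vocalization present or absent in a spectrogram window."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, 2)  # logits for [non-signal, wolf]

    def forward(self, waveform):            # waveform: (batch, samples)
        spec = to_db(melspec(waveform))     # (batch, n_mels, frames)
        return self.head(self.features(spec.unsqueeze(1)).flatten(1))

# Score one window of audio (random noise here stands in for a real recording).
model = WolfWindowClassifier()
window = torch.randn(1, int(SAMPLE_RATE * WINDOW_SECONDS))
prob_wolf = torch.softmax(model(window), dim=-1)[0, 1]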

 

Further, we take advantage of collaborative work done by other teams, such as PNW-Cnet, Synature, and BirdNET (D. Sassover), on AI-based wolf detection. We encourage academic institutions to combine efforts with our public and private partners to iterate more quickly on the best general classifier, focusing on a common pipeline and growing the dataset (and its relevant ambient correlates) across canids and eventually all large carnivore species.



Supervised Wolf Chorus Counting

To our knowledge, there have been no attempts at automated acoustic counting of overlapping signals, though several approaches may be promising:

  • Train a model (e.g., an LSTM-CRF) to predict the number of overlapping spectral elements at fine timescales using open-source data, and assess its ability to generalize to new datasets. Then train a model to predict the number of wolves in a chorus from human annotations of how many wolves are vocalizing concurrently (a simplified sketch follows this list).

  • Train a model to align video with acoustic data, as has been done for humans playing musical instruments.
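
A simplified sketch of the frame-level counting idea in the first bullet (an LSTM only, dropping the CRF layer; PyTorch assumed, and names such as ChorusCounter and MAX_WOLVES are hypothetical):

import torch
import torch.nn as nn

MAX_WOLVES = 8  # assumed upper bound on concurrently vocalizing wolves

class ChorusCounter(nn.Module):
    """Predicts, for every spectrogram frame, how many wolves are vocalizing (0..MAX_WOLVES)."""
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_mels, hidden_size=hidden,
                           num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, MAX_WOLVES + 1)  # per-frame count logits

    def forward(self, spec):        # spec: (batch, frames, n_mels)
        out, _ = self.rnn(spec)
        return self.head(out)       # (batch, frames, MAX_WOLVES + 1)

# Training step against human frame-level annotations of concurrent howlers.
model = ChorusCounter()
spec = torch.randn(4, 500, 64)                       # stand-in batch of spectrograms
labels = torch.randint(0, MAX_WOLVES + 1, (4, 500))  # annotated counts per frame
loss = nn.CrossEntropyLoss()(model(spec).flatten(0, 1), labels.flatten())
loss.backward()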

 

Unsupervised Wolf Source Separation

Building on previous work in source separation, and in particular the unsupervised MixIT (mixture invariant training) algorithm used to separate overlapping birdsong mixtures, we can attempt to separate wolf choruses into estimates of the individuals present in the chorus. Although the approach is not inherently limited in the number of sources it can handle, it is unclear how the model will perform as the number of concurrently vocalizing wolves increases.
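
For concreteness, here is a rough sketch of the MixIT training objective (PyTorch assumed; the separation network itself is not shown, and names such as mixit_loss are ours): two recorded mixtures are summed, the network proposes M sources, and the loss uses the best assignment of those sources back to the two original mixtures.

import itertools
import torch

def neg_snr(est, ref, eps=1e-8):
    """Negative signal-to-noise ratio; lower is better."""
    noise = est - ref
    return -10 * torch.log10((ref ** 2).sum(-1) / ((noise ** 2).sum(-1) + eps) + eps)

def mixit_loss(est_sources, mix1, mix2):
    """est_sources: (batch, M, samples); mix1, mix2: (batch, samples).
    Each estimated source is assigned to one of the two reference mixtures; the
    assignment that best reconstructs both mixtures defines the loss."""
    batch, M, _ = est_sources.shape
    refs = torch.stack([mix1, mix2], dim=1)             # (batch, 2, samples)
    best = None
    for assign in itertools.product([0, 1], repeat=M):  # all 2^M source-to-mixture assignments
        a = torch.tensor(assign)
        remix = torch.stack([est_sources[:, a == 0].sum(1),
                             est_sources[:, a == 1].sum(1)], dim=1)
        loss = neg_snr(remix, refs).sum(-1)              # (batch,)
        best = loss if best is None else torch.minimum(best, loss)
    return best.mean()

# Usage: mix two recorded chorus clips and train the separator on their sum.
# est = separation_model(mix1 + mix2)   # assumed to output (batch, M, samples)
# loss = mixit_loss(est, mix1, mix2)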

Unsupervised Meaning Discovery in Wolf Vocalizations

The CETI project has produced machine learning models that, with little or no prior understanding of a species' vocal repertoire, can be used to reveal meaningful units in its sounds. The approach in the paper "Approaching an Unknown Communication System by Latent Space Exploration and Causal Inference", modified for wolf vocalizations, is promising.
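
One common pattern in that line of work (not the paper's exact pipeline) is to embed each howl, project the embeddings into a low-dimensional latent space, cluster them, and then ask whether the clusters line up with behavioral or ecological context. A hedged sketch, assuming the umap-learn and hdbscan packages and hypothetical precomputed inputs (howl_features.npy, context.npy):

import numpy as np
import umap
import hdbscan

howl_features = np.load("howl_features.npy")   # hypothetical per-howl feature vectors
context_labels = np.load("context.npy")        # hypothetical behavioral context per howl

# Project to a low-dimensional latent space where similar howls sit near each other.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(howl_features)

# Let the data propose its own categories of howl, without predefining a repertoire.
clusters = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(embedding)

# Cross-tabulate discovered clusters against recorded context (e.g., rally, reunion,
# rival pack nearby) to see which sound categories carry behavioral signal.
for c in np.unique(clusters):
    if c == -1:
        continue  # HDBSCAN noise label
    values, counts = np.unique(context_labels[clusters == c], return_counts=True)
    print(f"cluster {c}: {dict(zip(values, counts))}")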

 

Our Data Set

  • Wolf Recordings (video optional), annotated with start and stop times of wolf howl events.

    • Chorus Howls: 19.6 hours of uninterrupted recordings spanning 20 years
    • Individual Howls: 5.3 hours of uninterrupted recordings spanning 20 years
  • Ambient and Similar-to-Wolf Data: 

    • The classifier was also trained on 10 hours of ambient recordings from 5 locations in the GYE, as well as on elk vocalizations, coyotes, planes, and vehicles, to optimize model performance in classifying wolf vs. non-wolf/background sounds. Airplanes are one of the top false positives.

Our GitHub Repositories (email us for access)

Related Scientific Research

Some Types of Wolf Vocalizations


Some situations that can evoke wolf howls
