At a time when millions of Americans have turkey on their minds, a team of researchers led by an animal scientist at Penn State has successfully tested a new way for poultry producers to keep their turkeys in sight.
Monitoring the behavior and health of poultry on large commercial farms is crucial for productivity and animal welfare, but it is a costly, time-consuming and labor-intensive task. To help producers keep track of how the birds are behaving, the researchers tested a new method that uses a small drone equipped with a camera and computer vision - a form of artificial intelligence (AI) that enables machines to recognize and process visual information - to automatically recognize what turkeys are doing.
Their study is available online now ahead of publication in the December issue of Poultry Science.
The research was the first to test whether a drone combined with a computer vision model could automatically detect different turkey behaviors from overhead video, according to study senior author Enrico Casella, assistant professor of data science for animal systems in the College of Agricultural Sciences. He is also affiliated with the Penn State Institute for Computational and Data Sciences.
"This work provides proof of concept that drones plus AI can potentially become an effective, low-labor method for monitoring turkey welfare in commercial production," Casella said. "It lays the groundwork for more advanced, scalable systems in the future."
The researchers used a commercially available drone with a standard color camera to record video four times a day of 160 young turkeys, from five to 32 days old, at the Penn State Poultry Education and Research Center. The drone's flight paths were designed so that the camera footage covered the entire area during each flight.
From these videos, the researchers took individual image frames and manually labeled the turkeys' behaviors. They created a dataset of over 19,000 instances of labeled behaviors, including feeding, drinking, sitting, standing, perching, huddling and wing flapping. Then they used the images to train, test and validate a computer vision model called YOLO - you only look once - commonly used to detect objects and actions in images.
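For readers curious about how such a pipeline fits together, the sketch below shows in rough outline how a YOLO-style detector can be fine-tuned on labeled frames and then used to flag behaviors in new overhead images. It is an illustration only: the Ultralytics YOLO package, the dataset file "turkey_behaviors.yaml," the pretrained checkpoint and the training settings are assumptions made for this example, not details taken from the study.

```python
# Illustrative sketch: fine-tuning a YOLO detector on labeled behavior frames
# and running it on a new overhead image. Dataset config, checkpoint and
# training settings below are hypothetical, not the authors' actual setup.
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on the labeled frames.
model = YOLO("yolov8n.pt")  # assumed model size; the study's choice may differ

model.train(
    data="turkey_behaviors.yaml",  # hypothetical config: image paths + the seven behavior classes
    epochs=100,                    # illustrative values, not the paper's settings
    imgsz=640,
)

# Detect behaviors in a new overhead frame; each predicted box carries a
# behavior label (e.g., feeding, drinking, sitting, standing, perching,
# huddling or wing flapping) and a confidence score.
results = model.predict("overhead_frame.jpg")
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
```

In a setup like this, the manually labeled frames serve as the training and validation data, and the trained detector can then score fresh drone footage without a person reviewing every frame.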