When you think of "time-lapse video," what usually springs to mind is a camera fixed on a tripod taking image after image at predetermined intervals. But what if you could do the same thing by taking out your phone and snapping a picture every time you walk past a certain tree on your way to work? No tripod necessary.
A Cornell research group has developed software that could let anyone with a camera-equipped mobile phone capture subtle changes over time - of, say, a construction site or the changing seasons - and turn them into a panoramic time-lapse video.
Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, is senior author of "Pocket Time-Lapse," which his research group will present at the Association for Computing Machinery's SIGGRAPH 2025 conference, Aug. 10-14 in Vancouver, British Columbia.
Eric Chen '24, now a doctoral student at the Massachusetts Institute of Technology, is the lead author of the paper. Other contributors include Žiga Kovačič '26 and Madhav Aggarwal, M.S. '24.
Davis arrived at Cornell in the summer of 2020, at the beginning of the pandemic, when lockdowns made setting up a typical lab with traditional equipment impossible. So instead, he started taking pictures.
"Even under lockdown, we could still go outdoors," he said, "so I started thinking a lot about how to support field work - applications where people need to collect data outside, in uncontrolled settings."
To explore that interest, he began snapping cellphone pictures "from places that I happened to visit every day." Over the course of several years, he amassed more than 50,000 images from various spots around Ithaca - scenes from outside his apartment window, the bus stop, and the parking garage overlooking construction of the new Bowers CIS building. From these pictures, the researchers built time-lapse videos.
Davis started this work with Ruyu Yan '23, now a doctoral student at Princeton University, and recruited Chen shortly thereafter. Chen helped develop the new techniques to align and visualize the growing dataset, correcting for slight inconsistencies in how pictures were taken. Their technique is believed to be the first to register thousands of panoramic images into a single consistent time-lapse video.
"We had to figure out a new trick for doing this alignment that takes advantage of the fact that we have so much data over such a long period of time that sees so many different conditions," Chen said. "We link together high-quality alignments from photos that aren't necessarily in sequential order, so we can connect a daytime photo with one taken at night by inserting a picture taken during twilight on a different day."
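The chaining idea Chen describes - bridging a daytime photo to a nighttime one through an intermediate twilight shot - resembles a path search over a graph of reliable pairwise alignments. The sketch below is a toy illustration of that idea, not the paper's method: photo names are invented, and each edge carries a simple 2-D translation as a stand-in for a full image-registration transform.

```python
from collections import deque

# Toy "alignment graph": nodes are photos, and each directed edge stores the
# 2-D translation (a stand-in for a full homography) that aligns one photo
# to its neighbor. Only visually similar pairs (e.g. day <-> twilight) get
# a reliable edge; day <-> night has none and must be reached indirectly.
edges = {
    "day":      {"twilight": (2.0, 1.0)},
    "twilight": {"day": (-2.0, -1.0), "night": (1.0, -1.0)},
    "night":    {"twilight": (-1.0, 1.0)},
}

def chain_alignment(src, dst):
    """BFS through the alignment graph, composing offsets along the path.

    Returns the accumulated (dx, dy) aligning src to dst, or None if the
    two photos are not connected by any chain of reliable alignments.
    """
    queue = deque([(src, (0.0, 0.0))])
    seen = {src}
    while queue:
        node, (dx, dy) = queue.popleft()
        if node == dst:
            return (dx, dy)
        for nbr, (ex, ey) in edges.get(node, {}).items():
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, (dx + ex, dy + ey)))
    return None

# Day and night align via the twilight bridge: (2,1) then (1,-1) -> (3,0).
print(chain_alignment("day", "night"))  # (3.0, 0.0)
```

With translations replaced by homographies (composed by matrix multiplication) and edges weighted by alignment quality, the same search connects photos taken under very different conditions.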
Another key innovation - a reconstruction method they call "time splatting" - involves matching the position of the sun and local GPS coordinates, along with local weather data for each photography session, to inform the positioning of shadows in the video. "From that," Chen said, "we're able to do things like re-light a photo from sunrise to sunset, or change where the shadows are, or maybe change the lighting from daytime to nighttime."
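The paper's "time splatting" method isn't detailed here, but one ingredient it relies on - knowing where the sun was for each photo from GPS coordinates, date, and time - can be approximated with a standard textbook formula. The sketch below is that approximation only (the function name and simplifications are ours, and it ignores the longitude and equation-of-time corrections a real system would apply):

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses a simple cosine model of solar declination and assumes
    solar_hour is local solar time (12.0 = solar noon).
    """
    # Declination: the sun's angle above the celestial equator over the year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees of sun movement per hour from solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_alt = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_alt))

# Near the March equinox (day 81) at the equator, the sun is nearly
# overhead at solar noon and below the horizon at solar midnight.
print(round(solar_elevation(0.0, 81, 12.0), 1))
print(round(solar_elevation(0.0, 81, 0.0), 1))
```

Given per-photo sun elevation like this, plus local weather records, a renderer can reason about where shadows should fall - the kind of information Chen says lets them re-light a scene from sunrise to sunset.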
Davis sees Pocket Time-Lapse as being potentially applicable in a range of areas - field sciences, construction, health care monitoring - and also as a way to look at one's environment in a new light.
"This tool gives you a different way to look at the world," he said. "Most of the data for this project comes from places I see every day, but there are so many little details I never noticed until I saw them in a time-lapse. It's like a different window into what's around you."
Support for this work came from the National Science Foundation Graduate Research Fellowship Program; an NSF Faculty Early Career Development Grant; and Meta.