The varied topography of the Western United States—a patchwork of valleys and mountains, basins and plateaus—results in highly localized weather. Accordingly, snowfall forecasts for the mountain West often suffer from a lack of precision, with predictions given as broad ranges of inches for a given day or storm cycle.
The crux of the problem lies in the snow-to-liquid ratio (SLR), which varies widely in the West.
"If you don't have a good snow-to-liquid ratio, your snowfall forecasts are not going to be as good," said Peter Veals, a research assistant professor of atmospheric sciences.
New research by Veals and a group of University of Utah scientists aims to improve forecasting methods by applying machine learning to snowfall data collected manually at 14 mountain sites by snow-safety professionals employed by ski areas and transportation departments.
It's all about snow density
The single most important predictor of SLR is the snow-water equivalent, or SWE, according to Veals.
"It's because the more SWE you have, the more the storm's snow weighs and it densifies itself. It compacts under its own weight," said Veals, the study's first author. Other factors, including elevation, temperature and wind speed, also play a crucial role in determining the SLR of each storm.
The U research team, led by atmospheric sciences professor Jim Steenburgh, has another study coming out soon that applies this method across the continental United States, using snowfall data gathered at 900 locations.
The main reason snowfall forecasting in the West is so much harder is that the amount of snow piling up on the ground depends not just on how much water the snow contains, but on how dense or powdery the snow is, that is, its snow-to-liquid ratio. SLRs can be as low as 2-to-1, or two inches of snow per inch of liquid water, typical of slush, or as high as 100-to-1 in ultralight powder.
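To make those ratios concrete, here is a small illustrative calculation (not taken from the study itself): forecast snow depth is simply the expected liquid precipitation multiplied by the SLR, so the same inch of water can land as anything from two inches of slush to a hundred inches of powder.

```python
def forecast_snow_depth(liquid_inches: float, slr: float) -> float:
    """Snow depth (inches) = liquid-water equivalent (inches) x snow-to-liquid ratio."""
    return liquid_inches * slr

# One inch of liquid water under different snow densities
for ratio, label in [(2, "slush"), (10, "the 10-to-1 rule of thumb"), (100, "ultralight powder")]:
    depth = forecast_snow_depth(1.0, ratio)
    print(f"{ratio}-to-1 ({label}): {depth:.0f} inches of snow")
```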
The commonly cited 10-to-1 "rule of thumb" for SLR was a product of Eastern weather forecasting.
"Somebody cooked that up in some place a long time ago. It's easy to use. It's just multiplying by 10," Steenburgh said. And it has little relevance to the West.
To build a forecast model tailored to the West, Steenburgh and Veals's team acquired manually gathered snowfall data from 14 mountain sites across the Western states, spots where avalanches are a winter hazard that must be managed with care.
Three sites were in Utah: Alta in Little Cottonwood Canyon; the Spruces campground in Big Cottonwood Canyon; and Aspen Grove in Provo Canyon. The others were in the Cascade and Sierra Nevada ranges, Colorado, Idaho, Wyoming and Montana.
The importance of manually collected snowfall data
Over the six-year study window, the team tapped a data source that was already being gathered regularly by experts tasked with keeping mountain corridors and recreation areas safe from avalanches.
"There's a bit of a mad scientist component to this," Steenburgh said. "A lot of this work is what we call data wrangling. Just getting the data sets together."
During every storm cycle, trained professionals made daily or twice-daily manual measurements, recording the height of the accumulated snow, its water content and the time of day. Snow was measured by hand because automated gauges can't always accurately record snowfall in windy conditions.
"No one's out there twice a day with a tube on a board, taking the core, weighing the snow, recording it, sweeping the board for the next day, recording the time they took the observation," Veals said. "You have to be more meticulous than the average person to do this work. Ski patrols do that because they want to know how much snow is in their avalanche paths."
The research team used this high-quality data to train new machine-learning models using an array of weather variables, including temperature, wind speed and specific humidity, to predict SLR with greater accuracy.
Their models, particularly the version created by a machine learning technique known as a "random forest," greatly outperformed existing methods.
"There was one algorithm that was slightly more skillful, but it was 10 times the processing power, so that wasn't feasible," Veals said. "You need to run this stuff on a huge data set every six hours and you need it to be done in two minutes."
Still, the random forest model could explain nearly half of the variability in snow density, compared with less than a quarter for current operational models. In simple terms, the new approach should make snowfall forecasts more reliable, which would help the West's water resource managers, highway officials, weather forecasters and avalanche professionals who depend on knowing how much water the snow holds.
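The study's actual predictors, data handling and tuning are not reproduced here, but the general workflow the article describes, training a random forest on storm-level weather variables to predict SLR and comparing its explained variance against a fixed 10-to-1 baseline, can be sketched with scikit-learn. The file name, feature names and hyperparameters below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical table of manually observed storm periods, one row per observation
obs = pd.read_csv("storm_observations.csv")
features = ["temperature_c", "wind_speed_ms", "specific_humidity_gkg", "elevation_m", "swe_mm"]
X, y = obs[features], obs["slr"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Random forest regression on the weather variables
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Fraction of SLR variability explained by the model (R^2)
r2_model = r2_score(y_test, model.predict(X_test))
# A fixed 10-to-1 rule predicts the same SLR for every storm
r2_fixed = r2_score(y_test, np.full(len(y_test), 10.0))

print(f"random forest R^2: {r2_model:.2f}")
print(f"fixed 10-to-1 R^2: {r2_fixed:.2f}")
```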