
Why are hurricane forecasts still so rough?

By Kerry Emanuel, Special to CNN
STORY HIGHLIGHTS
  • Kerry Emanuel: With hurricane bearing down, people want good predictions of trajectory
  • He says huge advances made in prediction science in recent years but still big gaps
  • He says Lorenz's "butterfly effect" shows big changes from small shifts in weather patterns
  • Emanuel: Forecasters can offer probabilities but still a long way off from precise forecasts

Editor's note: Kerry Emanuel is a professor of meteorology at the Massachusetts Institute of Technology.

(CNN) -- At one time or another, Hurricane Irene posed a risk to almost everyone living along the Eastern Seaboard, from Florida to the Canadian Maritimes. Where would Irene track? Which communities would be affected, and how badly?

Millions of lives and billions of dollars were at stake in decisions made by forecasters, emergency managers, and all of us who lived in or owned property in harm's way. It was natural to wonder how good the forecasts were likely to be: to what extent could we trust the National Hurricane Center, local professional forecasters, and emergency managers to tell us what would happen and what to do?

Undeniably, enormous progress has been made in the skill with which hurricanes and other weather phenomena are predicted. Satellites and reconnaissance aircraft monitor every hurricane that threatens the U.S., collecting invaluable data that are fed into computer models whose capacity to simulate weather is one of the great wonders of modern science and technology.

And the human effort and taxpayer funds that have been invested in this endeavor have paid off handsomely: A three-day hurricane track forecast today is as skillful as a one-day forecast was just 30 years ago. This gives everyone more time to respond to the multiple threats that hurricanes pose.

And yet we did not know for sure whether Irene would make landfall in the Carolinas, Long Island, or New England, or stay far enough offshore to deliver little more than a windy, rainy day to East Coast residents. Nor did we have better than a passing ability to forecast how strong Irene would get: in spite of decades of research and greatly improved observations and computer models, our skill in forecasting hurricane strength is little better than it was decades ago. Why is this so, and how should we go about making decisions in the context of uncertain forecasts?


Since the pioneering work of Edward N. Lorenz in the early 1960s, we have known that weather, including hurricanes, is an example of a chaotic process. Small fluctuations (Lorenz's "butterfly effect") that cannot be detected can quickly amplify and completely change the outcome in just a few days. Lorenz's key insight was that even in principle, one cannot forecast the evolution of some kinds of chaotic systems beyond some time horizon.
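Lorenz's original 1963 demonstration used a stripped-down system of just three equations. The short Python sketch below is purely illustrative, not an operational forecast model: it steps those equations forward from two starting points that differ by only one part in a million, and the two "forecasts" soon bear no resemblance to each other.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 1963 system one small step (simple Euler update)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two "atmospheres" that differ by one part in a million at the start.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])

for step in range(1, 3001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
```

The separation starts microscopic and ends up as large as the system itself: the butterfly effect in miniature.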

In the case of weather, meteorologists think that time horizon is around two weeks or so. Add to this fundamental limitation that we measure the atmosphere imperfectly, sparsely and not often enough, and that our computer models are imperfect, and you arrive at the circumstance that everyone knows from experience: weather forecasts are not completely reliable, and their reliability deteriorates rapidly the further out in time the forecast is made. A forecast for a week from today is dicey at best, and no one even tries to forecast two weeks out.

But in the past decade or two, meteorologists have made another important advance of which few outside our profession are aware: We have learned to quantify just how uncertain any given forecast is.

This is significant, because the degree of uncertainty itself varies greatly from one day to the next. On one occasion, we might be able to forecast a blizzard five days out with great confidence; on another, we might have very little faith in tomorrow's forecast.

We estimate the level of confidence in a particular forecast by running many different computer models many times, not just once.

Each time we run a model, we feed it a slightly different but equally plausible estimate of the current state of the atmosphere, given that our observations are few, far between and imperfect. In each case, we get a different answer; the differences are typically small to begin with but can grow rapidly, so that by a week or so the difference between any two forecasts is as great as the difference between any two arbitrary states of the weather at that time of year. No point in going any further!

But we observe that sometimes and in some places, the differences grow slowly, while at other times and places, they may grow much more rapidly. And by using different computer models, we can take into account our imperfect understanding of the physics of the atmosphere. By these means, we can state with some accuracy how confident we are in any particular forecast for any particular time and place.
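As a rough illustration of the idea, and not of the actual models forecasters run, the Python sketch below starts the same toy Lorenz system from 50 slightly perturbed but equally plausible initial states and tracks how the ensemble spread, one simple measure of forecast uncertainty, grows with time.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One small Euler step of the toy Lorenz 1963 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# 50 equally plausible starting states: a "best guess" plus small random errors.
base = np.array([1.0, 1.0, 1.0])
ensemble = base + rng.normal(scale=1e-3, size=(50, 3))

for step in range(1, 2001):
    ensemble = np.array([lorenz_step(member) for member in ensemble])
    if step % 400 == 0:
        spread = ensemble.std(axis=0).mean()  # how far apart the forecasts have drifted
        print(f"t = {step * 0.01:4.1f}  ensemble spread = {spread:.4f}")
```

When the members stay close together, confidence is high; when they fan out quickly, the honest answer is that we simply do not know yet.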

Today, one of the greatest challenges faced by weather forecasters is how best to convey their estimates of forecast confidence to the public.

Ideally, we would like to be able to say with full scientific backing something like "the odds of hurricane force winds in New York City sometime between Friday and Sunday are 20%." We have far to go to perfect such forecasts, but probabilistic statements like this are the best for which we can hope.
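In practice, a probability like that 20% can be read straight off an ensemble: it is, roughly, the fraction of ensemble members that put hurricane-force winds (74 mph or more) over the city. The sketch below uses made-up peak-gust numbers purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical peak-gust forecasts (mph) for one location, one per ensemble member.
peak_gusts = rng.normal(loc=60.0, scale=12.0, size=50)

# Hurricane-force winds begin at 74 mph; the forecast probability is simply
# the fraction of ensemble members that reach that threshold.
hurricane_force = 74.0
probability = np.mean(peak_gusts >= hurricane_force)
print(f"Chance of hurricane-force winds: {probability:.0%}")
```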

We know from experience that everyone will deal with such probabilistic forecasts in their own way: People have a very broad range of risk aversion. But the next time you are inclined to criticize weather forecasters for assigning probabilities to their forecasts, remember this essay and consider how much better off you are than with other types of forecasters you rely on. Your stockbroker, for example.

The opinions expressed in this commentary are solely those of Kerry Emanuel.