Wednesday, January 28, 2015

On blizzards, communicating uncertainty, hype, and YOUR responsibility

The "Blizzard of 2015" has been the biggest weather story so far this winter due to multiple factors - from the promises of a "historic, epic, life-threatening, crippling" storm, to the equally stunning "forecast bust" for the media epicenter of the world, to the claims of hype, to apologies by officials. After a day or so to let the dust settle and the hotheads get it all off their chest, I figured I'd put my thoughts on the record.

The Blizzard of 2015 Forecast

First, let me qualify my interest in this non-local story (we tend to stick to Memphis-centric events). My main job is as a meteorologist for a Fortune 500 company, and my responsibilities include tracking and forecasting large-scale national (and some international) weather events. So I had done my own forecast for this event, even though it didn't take place locally. As my regular readers know, another great interest of mine is communication, specifically the communication of weather information in a manner that the lay person (you) can clearly understand and act on. Upon review and analysis (and this may surprise you), the repercussions of the blizzard were actually more about (perhaps poor) communication than about the forecast itself.

I'll set the stage with the snow accumulation forecast from the official government source, the National Weather Service (top image), and the results, also from NOAA/NWS (bottom image). I've focused on the NYC-Boston corridor, since that's where the majority of people affected live and work. You'll notice that, generally, the NWS did a very nice job with a high-impact, upper-end event that produced 30"+ in 5 separate states and several "top 10s" for single-event snowfall. You'll also notice that west of a line from VT to western MA to western CT to NY/NJ, excluding Long Island (basically the blue areas on the bottom map), the forecast went awry, in some cases by a lot.



NWS forecast snowfall accumulation for this Blizzard of 2015 as of early Monday morning (5am). The event peaked from Monday evening through Tuesday morning. Widespread 18-24"+ totals were expected from NJ well north into New England.

A zoom of actual snowfall amounts from the blizzard. Eastern New England was buried in 1 1/2 to near 3 feet of snow, but there was a sharp "back edge" that extended from NYC to VT (top center of image). Graphic courtesy NOAA/WxBell.

So as far as the actual snow forecast was concerned (I'm not getting into the wind, coastal flooding, etc.), the biggest issue was with the back side of the storm, where a tight gradient in snowfall totals set up. Notice in particular the disparity between Long Island, with 12-22" of snow, and far eastern NJ, with 2-5". It just so happened that the "all-or-nothing" line was just east of Manhattan. (Of course, as a Memphian, I would argue that 4-8" doesn't qualify as "nothing," but I digress...) Actual totals in the NYC metro ranged from 6.5" at Newark, NJ on the west to 11" at JFK and LaGuardia Airports on the east.


First of all, I argue that the chief complaint about this storm being a "bust" was due to where the "bust" occurred: in the most populous city, with the loudest media, in the world (that's not said as an insult to NYC or the national media, but simply as fact). How many networks had positioned cameras 50 miles east on Long Island, where 18" was common, exactly as forecast? If that "back edge" had set up in the middle of the Dakotas and some cattle ranchers or oil workers got 8" less than predicted, it would have been a non-event, and the Weather Service would never have been dragged through the mud or felt it necessary to send its chief out to "do some 'splainin'."

However, given how it worked out, there is some good that can come from this event once the weather community is allowed to stop defending itself and can get down to analyzing the actual event and its messaging.

Computer models present a dilemma

You hear about weather models all the time. You know that there are several, that they all provide different forecasts of the environment, and that, like siblings, they sometimes agree well, but most of the time have their differences of opinion. Sometimes they just plain old duke it out. You also know that usually, as an event gets closer in time, they tend to arrive at a common solution, even if the details are slightly different.

In this case, each of the models had its own version of how the blizzard would play out. They were similar in the areas where the forecast and actual conditions matched well (eastern MA to Long Island). They were not so similar on the western side of the system, which happened to include NYC. The European model (Euro), which everyone knows was the darling after Hurricane Sandy, and the North American model (NAM), which tends to do very well at short-term forecasts, offered a fairly common solution that the NWS (and subsequently most private forecasting companies) latched onto: one that produced 15-25" of snow in NYC.

Then there was the newly-upgraded American stalwart, the Global Forecast System (GFS), which had a long track record of good forecasts before it was upgraded earlier this month. It's still a great model, but the weather community as a whole hadn't yet seen it perform in these conditions in its upgraded state. It was forecasting around 8-12" for NYC. As you can see, the GFS beat the pants off the Euro/NAM solution in the one area where it mattered most (metro NYC). The NWS (and many others) picked the wrong model.
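To put a number on that disagreement, here's a toy sketch in Python using the midpoints of the ranges quoted above. The values are my own illustrative stand-ins, not archived model output, and the equal-weight blend is a naive baseline, not what the NWS actually computes:

```python
# Toy illustration of the NYC guidance spread, using midpoints of the
# publicly quoted ranges (illustrative stand-ins, not archived model output).
nyc_snow_forecasts_in = {
    "Euro": 20.0,  # midpoint of the ~15-25" Euro/NAM solution
    "NAM": 20.0,
    "GFS": 10.0,   # midpoint of the ~8-12" GFS solution
}

values = list(nyc_snow_forecasts_in.values())
blend = sum(values) / len(values)   # naive equal-weight blend
spread = max(values) - min(values)  # crude measure of model disagreement

print(f"Equal-weight blend: {blend:.1f} in")  # ~16.7 in
print(f"Guidance spread:    {spread:.1f} in") # 10.0 in
```

A 10" spread on a blend under 17" is exactly the kind of signal that should push a forecaster toward uncertainty messaging rather than a single confident number.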


How we (the weather community) can, and must, do better

So what should the weather community do with a high-impact, high-visibility forecast in which the range of possible solutions is wide? Pick one and cross their collective fingers? (I'm oversimplifying a great deal here, by the way. That's not exactly what the NWS did, but they definitely leaned in that direction.) The better method (and this is not a new idea; it's been discussed many times over the past 24 hours, from the head of the NWS on down) is to communicate something other than a specific total, or even a range, for a location.

There was a measure of uncertainty in the forecast. The uncertainty was inherently much higher in the NYC area than in the Boston area because the available data being used as input had a wider spread. So why not COMMUNICATE THE UNCERTAINTY? Use the range of model options available, plus your expertise and training, to say "X is the most likely outcome, but there is a chance we could see Y and a lesser chance we could see Z."

Every forecast we create has a measure of uncertainty or confidence associated with it. This can be expressed in probabilities or in words. However, especially with winter forecasts, continuing to call for a single specific total will only reduce the public's confidence in the forecast, because the chances are better that you'll be wrong than right, even if only by an inch or two. To their credit, a couple of NWS offices in the Northeast are already doing this on an experimental basis, providing a "most likely" outcome and probabilities of other outcomes. These products must be expanded and made more visible. (I didn't know about them until after the storm had ended; they would've been very helpful in my forecasting efforts.)

An experimental "snow accumulation potential" forecast issued by the Philly NWS office.

In addition, in this world of soundbites, headlines, and 140-character tweets, hype drives clicks and viewers. I'm not insinuating that the NWS, or any other forecast agency, hyped the event. But in the media-driven culture of NYC, it's very easy to take the "high end" of a forecast and run with it. "TWO FEET OF SNOW ON BROADWAY" draws eyeballs, no matter that that outcome had a very low probability of actually verifying. (That also leads into an entirely separate topic, amateurs with access to model data spreading worst-case scenarios on social media and making the weather community as a whole look bad... but again, I digress...)

Should forecasters apologize for their mistakes?

There was also some discussion of whether those who issued the forecasts had a responsibility, or felt pressured in some way, to publicly apologize, which a few did. Personally, I don't mind the NWS Director holding a press conference to help educate others about what goes on in the decision-making and forecast-creation process. It's insightful and, I believe, lends confidence and credence to the process, building trust along the way. However, I don't find it mandatory to be remorseful for an errant forecast, especially when the event is over-forecast. While explaining oneself can be educational and build trust, we provide the best forecast we can with the information, knowledge, and experience we have accumulated. It's a forecast, and we're human. Sometimes we'll be wrong!



I truly believe that providing a confidence factor or probability forecast, instead of a cut-and-dried "this is what it'll do," can be extremely useful in these situations. Some may see it as waffling, but smart consumers will use the information to make informed decisions based on their risk tolerance. After all, that's what we do every day with our rain forecasts, right? If there's a 20% chance, you may leave the umbrella at home and take your chances. But if you, or the event you're involved in, can't tolerate a stray shower overhead for whatever reason, you'll take precautions. This same probability forecasting is already done with severe weather; why can't it also apply to winter weather scenarios?
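That umbrella logic even has a name: the classic cost-loss decision rule, which says to take precautions whenever the probability of the event exceeds the ratio of the cost of protecting to the loss you'd suffer unprotected. A minimal sketch, with made-up dollar figures:

```python
def should_protect(prob_event, cost_to_protect, loss_if_hit):
    """Cost-loss rule: act when P(event) exceeds cost/loss."""
    return prob_event > cost_to_protect / loss_if_hit

# Carrying an umbrella is cheap, so a 20% chance can justify it:
print(should_protect(0.20, cost_to_protect=1, loss_if_hit=10))        # True
# An outdoor wedding has a huge downside, so act even at 10%:
print(should_protect(0.10, cost_to_protect=500, loss_if_hit=50_000))  # True
```

Two consumers can see the same 20% and rationally make opposite choices; that's a feature of probability forecasts, not a flaw.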

The weather consumer's responsibility

My last comment regards the consumer's (i.e., YOUR) responsibility. As an example, too often someone will read a forecast that indicates a very low likelihood of rain in two days. Then they get busy, stop paying attention to the forecast, and are shocked when it rains on the day that "was supposed to be dry." The meteorologist was wrong again! I say bull.

I honestly believe that meteorologists have gotten so GOOD that the public is surprised when we're wrong! And so the myth is perpetuated that we missed it *again.* In actuality, the weather pattern changed a bit, we updated our forecasts the day before (or the morning of) the rain, you didn't see the change, and then you blamed the weather guy/gal. While we have many new ways of providing our information on demand (social media, apps, etc.) compared to 10-20 years ago, when you had to wait until noon, 5pm, or 10pm to "watch the weather," it is NOT our responsibility to hand-deliver it to you. I believe that as long as we make it available and keep it current, it's your responsibility to keep up with it. If you don't, that's on you.

Closing argument

I'll close with this. The vast majority of us in the weather enterprise pride ourselves on being accurate. We spend HOURS poring over data, considering all potential solutions, and using our previous experience to provide the most likely solution to our customers.

Could we communicate the information more effectively? You better believe it.

Are we learning as we go? Absolutely.

Do we tire of hearing how great a job we have because "we get paid even if we're wrong"? Would you? :-)

But we also KNOW when we botch a forecast. Because we tend to be, well, anal, and pride ourselves on hitting the high temp to the degree or timing the rain to the hour, we are harder on ourselves than any outside critic could be. So we've gotten good at brushing off the haters, learning from the experience, and applying it to the next event! :-) After all, we're only as good as our last forecast, right?

Thanks for reading, and feel free to let me know what you think in the comments or hit MWN up on social media!

Erik Proseus
MWN Meteorologist

----
Follow MWN on Facebook, Twitter, and Google+
Visit MemphisWeather.net on the web or m.memphisweather.net on your mobile phone.
Download our iPhone or Android apps, featuring StormWatch+ severe weather alerts!

1 comment:

Eddie Holmes said...

Erik, absolutely well presented, accurate and fully appreciated.

Eddie Holmes, CBM Meteorologist
West Tennessee Weather Online
Jackson, TN