• Bromwich, D. H., Cassano, J. J., Klein, T., Heinemann, G., Hines, K. M., Steffen, K., and Box, J. E., 2001: Mesoscale modeling of katabatic winds over Greenland with the Polar MM5. Mon. Wea. Rev., 129, 2290–2309.
• Bromwich, D. H., Monaghan, A. J., Powers, J. G., Cassano, J. J., Wei, H. L., Kuo, Y. H., and Pellegrini, A., 2003: Antarctic Mesoscale Prediction System (AMPS): A case study from the 2000–01 field season. Mon. Wea. Rev., 131, 412–434.
• Bromwich, D. H., Monaghan, A. J., Manning, K. W., and Powers, J. G., 2005: Real-time forecasting for the Antarctic: An evaluation of the Antarctic Mesoscale Prediction System (AMPS). Mon. Wea. Rev., 133, 579–603.
• Cassano, J. J., Box, J. E., Bromwich, D. H., Li, L., and Steffen, K., 2001: Verification of Polar MM5 simulations of Greenland’s atmospheric circulation. J. Geophys. Res., 106, 13 867–13 890.
• Cassano, J. J., Uotila, P., Lynch, A. H., and Cassano, E. N., 2007: Predicted changes in synoptic forcing of net precipitation in large Arctic river basins during the 21st century. J. Geophys. Res., 112, G04S49, doi:10.1029/2006JG000332.
• Grell, G. A., Dudhia, J., and Stauffer, D. R., 1995: A description of the fifth-generation Penn State/NCAR Mesoscale Model (MM5). NCAR Tech. Note TN-398+STR, 122 pp. [Available from UCAR Communications, P.O. Box 3000, Boulder, CO 80307.]
• Guo, Z., Bromwich, D. H., and Cassano, J. J., 2003: Evaluation of Polar MM5 simulations of Antarctic atmospheric circulation. Mon. Wea. Rev., 131, 384–411.
• Higgins, M. E., and Cassano, J. J., 2009: Impacts of reduced sea ice on winter Arctic atmospheric circulation, precipitation and temperature. J. Geophys. Res., 114, D16107, doi:10.1029/2009JD011884.
• Hines, K. M., and Bromwich, D. H., 2008: Development and testing of Polar Weather Research and Forecasting (WRF) Model. Part I: Greenland Ice Sheet meteorology. Mon. Wea. Rev., 136, 1971–1989.
• Keller, L. M., Lazzara, M. A., Thom, J. E., Weidner, G. A., and Stearns, C. R., 2009: Antarctic Automatic Weather Station data for the calendar year 2010. Rep. UW SSEC 10.10.K1, Space Science and Engineering Center, University of Wisconsin—Madison, 44 pp. [Available from Space Science and Engineering Center, University of Wisconsin—Madison, Madison, WI 53706.]
• Kohonen, T., 2001: Self-Organizing Maps. 3rd ed. Springer, 501 pp.
• Monaghan, A. J., Bromwich, D. H., Powers, J. G., and Manning, K. W., 2004: The climate of the McMurdo, Antarctica, region as represented by one year of forecasts from the Antarctic Mesoscale Prediction System. J. Climate, 18, 1174–1189.
• Powers, J. G., 2007: Numerical prediction of an Antarctic severe wind event with the Weather Research and Forecasting (WRF) model. Mon. Wea. Rev., 135, 3134–3157.
• Powers, J. G., Monaghan, A. J., Cayette, A. M., Bromwich, D. H., Kuo, Y., and Manning, K. W., 2003: Real-time mesoscale modeling over Antarctica: The Antarctic Mesoscale Prediction System. Bull. Amer. Meteor. Soc., 84, 1533–1545.
• Reusch, D. B., Alley, R. B., and Hewitson, B. C., 2005: Relative performance of self-organizing maps and principal component analysis in pattern extraction from synthetic climatological data. Polar Geogr., 29, 188–212.
• Seefeldt, M. W., Tripoli, G. J., and Stearns, C. R., 2003: A high-resolution numerical simulation of the wind flow in the Ross Island region, Antarctica. Mon. Wea. Rev., 131, 435–458.
• Uotila, P., Lynch, A. H., Cassano, J. J., and Cullather, R. I., 2007: Changes in Antarctic net precipitation in the 21st century based on Intergovernmental Panel on Climate Change (IPCC) model scenarios. J. Geophys. Res., 112, D10107, doi:10.1029/2006JD007482.
Figures:

• Fig. 1. Map of the AMPS domains. Domains 1, 2, 3, 4, and 6 are labeled in the upper-right corner of the respective outlined boxes. Domain 5 is located within the outline of domain 3 and is represented by an unlabeled box. (Figure courtesy of NCAR/Mesoscale and Microscale Meteorology Division; information online at http://www.mmm.ucar.edu/rt/wrf/amps/information/configuration/maps.html).
• Fig. 2. (a) Analysis region for synoptic climatology (black box) and (b) location of AWS sites used for model evaluation.
• Fig. 3. Master SOM of sea level pressure anomalies over the RIS.
• Fig. 4. (a) Verification plot of average pressure for the Lettau AWS location calculated for the AWS observations (black dotted line) and the following forecast categories: 0–9 (black solid line), 12–21 (red solid line), 24–33 (orange solid line), 36–45 (green solid line), 48–57 (light blue solid line), and 60–69 h (dark blue solid line). (b) Verification plot of pressure bias calculated for the Lettau AWS location for the same forecast categories (same lines and colors) presented in (a). (c) Verification plot of pressure RMSE calculated for the Lettau AWS location for the same forecast categories (same lines and colors) presented in (a) and (b).
• Fig. 5. (a) Verification plot of average wind speed for the Pegasus North AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) presented in Fig. 4. (b) Verification plot of wind speed bias for the Pegasus North AWS location calculated for the same forecast categories (same lines and colors) presented in (a).
• Fig. 6. As in Fig. 5, but for the Ferrell AWS location.
• Fig. 7. As in Fig. 5, but for the Willie Field AWS location.
• Fig. 8. (a) As in Fig. 5, but for the Vito AWS location, and (b) verification plot of average wind direction for the Vito AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) as presented in (a).
• Fig. 9. (a) Verification plot of average pressure for the Cape Bird AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) presented in Fig. 4. (b) Verification plot of average wind speed for the Cape Bird AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) as presented in (a).


A Weather-Pattern-Based Approach to Evaluate the Antarctic Mesoscale Prediction System (AMPS) Forecasts: Comparison to Automatic Weather Station Observations

  • 1 Cooperative Institute for Research in Environmental Sciences, Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Colorado

Abstract

Typical model evaluation strategies evaluate models over long periods of time (months, seasons, years, etc.) or for single case studies such as severe storms or other events of interest. The weather-pattern-based model evaluation technique described in this paper uses self-organizing maps to create a synoptic climatology of the weather patterns present over a region of interest, in this case the Ross Ice Shelf. Using this synoptic climatology, the performance of the model, the Weather Research and Forecasting Model run within the Antarctic Mesoscale Prediction System, is evaluated for each of the objectively identified weather patterns. The evaluation process involves classifying each model forecast as matching one of the weather patterns from the climatology. Subsequently, statistics such as model bias, root-mean-square error, and correlation are calculated for each weather pattern. This allows for the determination of model errors as a function of weather pattern and can highlight whether certain errors occur under some weather regimes and not others. The results presented in this paper highlight the potential benefits of this new weather-pattern-based model evaluation technique.

Corresponding author address: Melissa A. Nigro, University of Colorado, 216 UCB, Boulder, CO 80309. E-mail: melissa.richards@colorado.edu


1. Introduction

Typical model evaluation strategies evaluate models over long periods of time (months, seasons, years, etc.) or for single case studies such as severe storms or other events of interest. The technique described below, which is based on the use of self-organizing maps (SOMs; Kohonen 2001), involves objectively identifying the synoptic weather patterns that influence the region of interest, classifying each model forecast time as matching one of these weather patterns, and calculating model validation statistics for each weather pattern. This allows for the determination of model errors as a function of weather pattern and can highlight whether certain errors occur under some weather regimes and not others.

As mentioned above, there are currently two basic methods that are used to evaluate numerical weather prediction models. These methods will be reviewed with relevant Antarctic examples cited. The first method commonly calculates statistics [e.g., bias, root-mean-square error (RMSE), and correlation] for comparisons of model forecasts and observations of numerous variables (e.g., temperature, pressure, wind speed, and wind direction) for a given location. There are many variations that can be made to this technique. For instance, the analysis can be categorized temporally by seasons, months, times of day, etc., or spatially by region, elevation, proximity to the coast, etc. Guo et al. (2003) used this method to evaluate the polar-modified version of the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5; Grell et al. 1995) over Antarctica. The research involved analyzing several variables, such as surface pressure, temperature, wind speed, and wind direction, on annual, monthly, daily, and hourly time scales. The study concluded that this version of MM5 was able to successfully predict the surface pressure, temperature, wind direction, and water vapor mixing ratio on all time scales; and that improvements to the model were necessary in order to accurately predict some of the other atmospheric variables, such as wind speed. Similarly, Bromwich et al. (2005) analyzed the performance of the polar-modified version of the MM5 model used in the Antarctic Mesoscale Prediction System (AMPS). The analysis looked at the spatial variability, seasonal variability, and forecast hour variability associated with the model performance. The study found that the surface temperature predictions are most accurate during the winter months, the surface pressure is well represented by the model, and the surface winds are strongly dependent on the topography of the region. 
The type of analysis shown in these two examples provides important information on how the model performs on large scales, both spatially and temporally, but does not identify model errors that occur under specific synoptic patterns.

The second method that is commonly employed to evaluate numerical weather prediction models makes use of case studies. Case studies usually involve an isolated weather event where some knowledge of the atmospheric conditions is known and the ability of a model to simulate the event is assessed. For instance, Powers (2007) evaluated the ability of the Advanced Research Weather Research and Forecasting Model (ARW-WRF) running in AMPS to simulate a windstorm that impacted McMurdo Station, Antarctica, in May 2004. The analysis used Moderate Resolution Imaging Spectroradiometer (MODIS) infrared satellite imagery and surface observations as a comparison for the model output. The model in AMPS was run with different data types assimilated, and it was determined that the best model representation of the windstorm used the assimilation of filtered MODIS data as an input to the model. Another example of the case study method of model evaluation is presented in Bromwich et al. (2003). In this study, surface observations, ship observations, and satellite images were used in order to evaluate the polar-modified version of the MM5 model in AMPS under conditions of mesoscale cyclogenesis during 13–17 January 2001. Each of the case studies mentioned here were able to provide evidence of successes and weaknesses within the model for a very specific time and place.

The weather-pattern-based method of model evaluation, as presented in this paper, combines advantages of both methods discussed above: it allows for analysis over long periods of time while providing information about model errors under a variety of synoptic situations. Because models do not perform with equal accuracy under different synoptic weather patterns, this level of detail is important for model users to understand.

To conduct a weather-pattern-based model evaluation, the prominent weather patterns of the region of interest must first be identified. The SOM technique will be used for this process because of its ability to objectively group large amounts of data into categories that can easily be analyzed. Researchers have used this technique to relate synoptic patterns to a variety of atmospheric features such as precipitation and temperature. For instance, the SOM technique was used to analyze changes in net precipitation in the Arctic (Cassano et al. 2007) and the Antarctic (Uotila et al. 2007) by using various model scenarios from the Intergovernmental Panel on Climate Change (IPCC). Additional examples include the use of the SOM technique to analyze the winds over the Ross Ice Shelf (RIS) from a synoptic climatology perspective (Seefeldt and Cassano 2008) and to analyze the changes in atmospheric circulation, temperature, and precipitation caused by a reduction in Arctic sea ice (Higgins and Cassano 2009).

The SOM-based analysis presented here has been completed as a preliminary study to understand the benefits of using SOMs as a tool for evaluating operational numerical weather prediction forecasts. The analysis for this paper was conducted over the Ross Ice Shelf, Antarctica, using forecasts from AMPS.

2. Data and methods

a. Model data

Forecasts from the ARW-WRF running within AMPS, a numerical weather prediction system tailored for use in the Antarctic (Powers et al. 2003), are evaluated in this paper. AMPS was originally developed to provide high-resolution, polar-specific forecasts over Antarctica in support of U.S. Antarctic Program (USAP) operations. AMPS is currently used as an experimental real-time numerical weather prediction model by USAP and other Antarctic programs.

AMPS has been based on two different numerical weather prediction models since its origin. The original version of AMPS was based on the polar-modified version of MM5 (Bromwich et al. 2001; Cassano et al. 2001). The more recent version of AMPS uses a polar-modified version of the WRF (Hines and Bromwich 2008) and was introduced in 2006. During the time period used for this study (the operational field seasons, October–February, of 2005–06, 2006–07, and 2007–08), the two models in AMPS (MM5 and WRF) were run simultaneously. Therefore, the training of the SOM will use both the MM5 and WRF output, while the analysis presented in this paper is conducted on the WRF forecasts from AMPS only.

The WRF in AMPS is run with a set of six two-way interactively nested domains with horizontal grid spacing of 60, 20, 6.7, 6.7, 2.2, and 6.7 km for the time period of this study (see Fig. 1). The analysis for this study is conducted on the AMPS output for domain 2 (20 km), but could easily be applied to the other available domains. AMPS is initialized twice daily, with simulations on domain 2 run for 72 h with the output archived every 3 h. For more information on the physics packages used in the operational version of AMPS refer to Powers (2007) and the AMPS Web site (http://www.mmm.ucar.edu/rt/wrf/amps/).

Fig. 1.

Map of the AMPS domains. Domains 1, 2, 3, 4, and 6 are labeled in the upper-right corner of the respective outlined boxes. Domain 5 is located within the outline of domain 3 and is represented by an unlabeled box. (Figure courtesy of NCAR/Mesoscale and Microscale Meteorology Division; information online at http://www.mmm.ucar.edu/rt/wrf/amps/information/configuration/maps.html).

Citation: Weather and Forecasting 26, 2; 10.1175/2010WAF2222444.1

b. Weather pattern identification

To use the weather-pattern-based method of model evaluation, a set of common weather patterns that occur in the region of interest must first be identified. For the identification process, the self-organizing map technique was used to objectively determine the weather patterns that influence the RIS region of Antarctica. The SOM training process uses a neural network algorithm with an unsupervised, iterative learning process to identify general patterns in a dataset, organizing similar data records into a user-specified number of groupings. Kohonen (2001) provides a thorough explanation of the SOM algorithm and training process. For the evaluation of AMPS, the SOM technique was used to create a synoptic climatology of the weather patterns that occur in the Ross Sea sector of Antarctica during the operational field season (1 October–15 February). To do so, the SOM was trained with the 20-km-grid MM5 forecasts of sea level pressure from the AMPS archives for the 2005–06, 2006–07, and 2007–08 USAP operational field seasons and the 20-km-grid WRF forecasts of sea level pressure from the AMPS archives for the 2006–07 and 2007–08 USAP operational field seasons. Figure 2a shows the section of the 20-km-grid output that was used for the training of the SOM. Specifically, the sea level pressure anomaly fields were used for the training process. Anomalies are preferred for this analysis because atmospheric circulation depends on the pressure gradient, not on the magnitude of the sea level pressure. To calculate the anomaly field for each forecast, the sea level pressure at each model grid point was retrieved from the model output. Sea level pressure values for elevations greater than 500 m were removed because of the difficulty of calculating accurate sea level pressure over the high terrain of Antarctica. From this dataset, the domain-averaged sea level pressure was calculated and then subtracted from the sea level pressure at each grid point, resulting in a field of sea level pressure anomalies. These anomaly fields were used to train the SOM.
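As a concrete illustration, the anomaly preprocessing just described can be sketched in a few lines of NumPy. The function name and array layout are illustrative rather than the authors' actual code; it assumes the sea level pressure field and a matching elevation grid have already been extracted from the AMPS output.

```python
import numpy as np

def slp_anomaly(slp, elevation, max_elev_m=500.0):
    """Reduce a sea level pressure field (hPa) to an anomaly vector.

    Grid points above max_elev_m are discarded, the domain-averaged SLP
    over the remaining points is subtracted, and the anomalies are
    returned as a flat vector suitable for SOM training.
    """
    slp = np.asarray(slp, dtype=float)
    valid = np.asarray(elevation, dtype=float) <= max_elev_m
    # anomaly = pointwise SLP minus the domain average over valid points
    return slp[valid] - slp[valid].mean()
```

By construction the anomaly vector has zero mean over the retained points, so only the pressure gradient information is presented to the SOM.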

Fig. 2.

(a) Analysis region for synoptic climatology (black box) and (b) location of AWS sites used for model evaluation.


During the training process, the user must specify the number of weather patterns that the SOM should identify. For the model evaluation application presented here, the SOM technique was used to identify a number of different patterns, but it was found that using 20 weather patterns, or a 5 × 4 grid size, was sufficient to capture the range of synoptic conditions that influence the RIS region while maintaining a sufficient number of data points associated with each weather pattern for calculating statistics. These 20 weather patterns, when shown together as in Fig. 3, are referred to as the master SOM and each of the weather patterns identified on the master SOM is referred to as a node. Reusch et al. (2005) provides a good explanation for determining the number of patterns the SOM should identify. The authors define the quantization error as the difference between the model forecast and the SOM-identified pattern that it is mapped to, or the pattern that it most closely resembles. The paper concludes that the quantization error decreases with an increase in grid size. Essentially, the larger the grid size, the closer the forecast matches one of the weather patterns that make up the SOM. With an increase in the number of weather patterns, the number of forecasts that map to each weather pattern decreases, and thus the robustness of the statistics calculated for each weather pattern decreases. To calculate meaningful statistics on the accuracy of the model within AMPS for a given weather pattern, a large enough number of forecasts with valid observations must be mapped to each weather pattern. 
When choosing the SOM size for this paper, it was determined that either a 5 × 4 or a 6 × 4 grid size would be sufficient to capture the variability of weather patterns present over the RIS, based on the authors' previous experience analyzing weather patterns over this region for studies of the low-level winds (Seefeldt and Cassano 2008) and on their work with USAP operational weather forecasters. Ultimately, the 5 × 4 grid size was chosen to maximize the number of forecasts and observations mapped to each weather pattern, allowing for the calculation of meaningful statistics.
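The grid-size tradeoff described by Reusch et al. (2005) can be made concrete with a short NumPy sketch. These are hypothetical helper functions, assuming the data samples and the SOM node patterns are stored as flat vectors of equal length:

```python
import numpy as np

def quantization_error(data, nodes):
    """Mean Euclidean distance between each sample and its best-matching
    node; this error decreases as the SOM grid grows."""
    d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)).mean()

def samples_per_node(data, nodes):
    """Number of samples mapping to each node; these counts shrink as the
    grid grows, limiting the robustness of per-node statistics."""
    d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    return np.bincount(d2.argmin(axis=1), minlength=len(nodes))
```

Evaluating both quantities for candidate grid sizes makes the tradeoff explicit: a larger grid lowers the quantization error but thins out the sample counts on which the per-node verification statistics rest.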

Fig. 3.

Master SOM of sea level pressure anomalies over the RIS.


As mentioned above, Fig. 3 depicts each of the weather patterns identified by the SOM training, showing the sea level pressure anomalies for each weather pattern. The nodes on the SOM are referenced by their column number (one is the leftmost column) and by their row number (one is the top row), such that node [1, 1] indicates the node in the top-left corner of the SOM and node [5, 4] indicates the node in the bottom-right corner of the SOM. In this plot of the sea level pressure anomalies, the blue colors indicate areas of lower pressure and the red colors indicate areas of higher pressure. Therefore, it can be seen that node [1, 1] has a strong cyclone in the northeastern corner of the domain, over the Ross Sea. Similarly, node [5, 4] has a very weak cyclone in the southeastern corner of the domain, over the southern part of the RIS. One advantage of the SOM methodology is that the nodes on the master SOM are arranged such that similar patterns are adjacent, while the least similar patterns are located in opposite corners of the plot (Fig. 3). A general analysis of the plot reveals that by moving across the top row the cyclone strength decreases and the cyclone center shifts toward the western Ross Sea. Other general observations include the following: the strongest low pressure systems are found on the left side of the SOM, cyclones centered in the northern part of the domain are found on the left and top sides of the SOM, southerly cyclones are represented by patterns in the lower-right portion of the SOM, and the weakest cyclones are found in the center-right portion of the SOM. This broad grouping of the weather patterns shown on the SOM will be useful when analyzing the performance of AMPS for the different weather regimes.
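For reference, the node labeling convention can be expressed as a small helper. This is an illustrative sketch, assuming nodes are stored in the column-major order (down the leftmost column first, then each remaining column) in which the verification plots in section 3 are drawn:

```python
def node_label(flat_index, n_rows=4):
    """Convert a 0-based flat node index, stored column-major on the
    5 x 4 master SOM, to the [column, row] label used in the text,
    where [1, 1] is the top-left node and [5, 4] the bottom-right."""
    col, row = divmod(flat_index, n_rows)
    return [col + 1, row + 1]
```

So index 0 is node [1, 1] and index 19 is node [5, 4].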

For the SOM evaluation of the WRF running in AMPS, the 20-km WRF forecasts from the operational field seasons of 2006–07 and 2007–08 were used. The evaluation involves matching each model forecast to the weather pattern on the master SOM that most closely resembles it. Specifically, the squared differences between the sea level pressure anomalies in the forecast and the sea level pressure anomalies of each SOM node are calculated, and the node with the smallest squared difference is the weather pattern that the forecast is matched to. This process is known as “mapping” the forecasts to the master SOM and is repeated for each model forecast. Once the mapping is complete, a set of model forecasts is associated with each node and can be used in conjunction with a set of observations to evaluate the performance of the model for each weather pattern.
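A minimal NumPy sketch of this mapping step might look as follows. The names are illustrative; both the forecasts and the master-SOM node patterns are assumed to be stored as flat anomaly vectors over the same grid points:

```python
import numpy as np

def map_to_master_som(forecasts, node_patterns):
    """Assign each forecast anomaly field to the master-SOM node with
    the smallest sum of squared differences.

    forecasts:     (n_forecasts, n_points) SLP anomaly vectors
    node_patterns: (n_nodes, n_points) SOM node patterns
    Returns an (n_forecasts,) array of winning node indices.
    """
    # squared difference between every forecast and every node pattern
    sq_diff = ((forecasts[:, None, :] - node_patterns[None, :, :]) ** 2).sum(axis=2)
    return sq_diff.argmin(axis=1)
```

The argmin over nodes is exactly the "closest pattern" criterion described in the text; repeating it for every archived forecast yields the per-node forecast sets used in the evaluation.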

c. Observational data

To evaluate the model, the forecasts must be compared to a set of observations. In this instance, surface observations from the University of Wisconsin automatic weather stations (AWSs) were used. The AWSs measure temperature, pressure, wind speed, and wind direction at 10-min intervals. These observations are then processed by the Antarctic Meteorological Research Center of the University of Wisconsin, where the data are quality controlled and a time series of 3-hourly observations is created. The quality control is a manual process in which erroneous observations are removed from the dataset. The 3-hourly time series is created by extracting the observation made closest to each 3-hourly time, within ±40 min (Keller et al. 2009). This approach is used because of the scarcity of observations in Antarctica and the high occurrence of missing data points in the AWS records. The ±40-min window is the standard used by the AWS research group at the University of Wisconsin when creating the quality controlled 3-hourly AWS data, and as such this is the dataset used for the model evaluation presented in this paper.
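The ±40-min matching rule can be sketched as follows. This is an illustrative helper, not the University of Wisconsin processing code; times are assumed to be expressed in minutes since a common epoch:

```python
import numpy as np

def match_observations(obs_minutes, target_minutes, window=40.0):
    """For each 3-hourly target time, return the index of the closest
    observation if it lies within +/- window minutes, else -1."""
    obs_minutes = np.asarray(obs_minutes, dtype=float)
    matches = []
    for t in np.asarray(target_minutes, dtype=float):
        diffs = np.abs(obs_minutes - t)
        j = int(diffs.argmin())
        # accept the nearest record only when it falls inside the window
        matches.append(j if diffs[j] <= window else -1)
    return matches
```

Targets with no observation inside the window are left unmatched, which is how gaps in the AWS record propagate into missing points in the 3-hourly series.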

Prior to comparison, the model surface pressure was adjusted to the elevation of the AWS using the hypsometric equation. This is necessary because the model resolution spatially smooths the topography, so the model gridpoint elevation may not match the actual elevation at the AWS sites. For the majority of the AWS sites, the differences between the actual elevation and the model gridpoint elevation were small, resulting in pressure adjustments on the order of a few tenths of a millibar, although a few AWS sites had much larger differences, with a maximum elevation difference of 117 m at the Cape Bird AWS. The pressure adjustment associated with this elevation difference was approximately 14 mb.
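A sketch of this hypsometric adjustment is shown below. The function name is illustrative, and a layer-mean virtual temperature must be supplied as an assumption (the text does not state which temperature was used in the actual processing):

```python
import math

def adjust_to_aws_elevation(p_model_hpa, layer_temp_k, dz_m):
    """Adjust model surface pressure to the AWS elevation using the
    hypsometric equation. dz_m is the model gridpoint elevation minus
    the AWS elevation (positive when the AWS sits below the model
    terrain), and layer_temp_k is an assumed layer-mean virtual
    temperature for the intervening layer.
    """
    R_D = 287.05   # dry-air gas constant (J kg-1 K-1)
    G = 9.80665    # gravitational acceleration (m s-2)
    return p_model_hpa * math.exp(G * dz_m / (R_D * layer_temp_k))
```

For the 117-m Cape Bird elevation difference, a surface pressure near 980 hPa and a layer temperature near 270 K give an adjustment of roughly 14 to 15 hPa, consistent with the value quoted above.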

Given that the analysis presented in this paper is intended primarily as a demonstration of the utility of the weather-pattern-based model evaluation technique, a variety of AWS sites from across the RIS will be used. The locations of these sites are shown in Fig. 2.

d. Model evaluation

As mentioned above, after the mapping of the model forecasts a set of model forecasts exists for each weather pattern shown in Fig. 3. Subsequently, the observations that match the forecast valid times (±40 min) can be obtained, resulting in a set of model forecasts and AWS observations for each weather pattern. These datasets can be further grouped into the following forecast categories: 0–9, 12–21, 24–33, 36–45, 48–57, and 60–69 h, in order to evaluate how the model performs as a function of forecast hour. Because the model is initialized twice daily, extracting 12-h segments (four 3-h segments) from each model run is sufficient to create a continuous time series. From these datasets, the average temperature, pressure, wind speed, and wind direction of both the model output and the AWS observations can be calculated for various locations within the domain. Additionally, statistics such as the model bias, the model root-mean-square error, and the model correlation can be calculated from the paired model and observation time series. The results can be displayed in what will be referred to as a verification plot, which shows each of the statistics mentioned above as a function of the SOM nodes (weather patterns). In these plots, the associated node is labeled along the x axis. Additionally, statistics have been calculated for an “ALL” node, which represents the statistics calculated across all 20 weather patterns. This is representative of the traditional method of model evaluation discussed in the introduction, where the statistic of interest would be calculated for all weather patterns occurring over the period of interest. An example of a verification plot is shown in Fig. 4, which displays the average pressure, the pressure bias, and the pressure RMSE at Lettau.
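The per-node statistics behind a verification plot can be sketched as follows. This is an illustrative function, assuming the forecasts and observations have already been paired in time and tagged with the node index each forecast was mapped to:

```python
import numpy as np

def node_verification_stats(forecast, observed, node_ids, n_nodes=20):
    """Bias, RMSE, and correlation of forecast vs. observation, computed
    separately for each SOM node and for all nodes pooled ('ALL')."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    node_ids = np.asarray(node_ids)
    groups = [('ALL', np.ones(forecast.size, dtype=bool))]
    groups += [(k, node_ids == k) for k in range(n_nodes)]
    stats = {}
    for key, mask in groups:
        if mask.sum() < 2:
            continue  # too few matched pairs for meaningful statistics
        err = forecast[mask] - observed[mask]
        stats[key] = {
            'bias': float(err.mean()),
            'rmse': float(np.sqrt((err ** 2).mean())),
            'corr': float(np.corrcoef(forecast[mask], observed[mask])[0, 1]),
        }
    return stats
```

Running this once per forecast-hour category and plotting each statistic against the node index, with the 'ALL' entry on the left, reproduces the structure of the verification plots; the 'ALL' entry corresponds to the traditional all-weather-pattern evaluation.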

Fig. 4.

(a) Verification plot of average pressure for the Lettau AWS location calculated for the AWS observations (black dotted line) and the following forecast categories: 0–9 (black solid line), 12–21 (red solid line), 24–33 (orange solid line), 36–45 (green solid line), 48–57 (light blue solid line), and 60–69 h (dark blue solid line). (b) Verification plot of pressure bias calculated for the Lettau AWS location for the same forecast categories (same lines and colors) presented in (a). (c) Verification plot of pressure RMSE calculated for the Lettau AWS location for the same forecast categories (same lines and colors) presented in (a) and (b).


3. Results

The main goal of this paper is to demonstrate the utility of the weather-pattern-based model evaluation technique. Figure 4a shows the verification plot for the average pressure at the Lettau AWS (LET in Fig. 2b). In the verification plot, the average pressure for the 0–9-, 12–21-, 24–33-, 36–45-, 48–57-, and 60–69-h model forecast categories and the average pressure for the AWS observations that match the 0–9-h valid times are shown. Viewing the information plotted in Fig. 4a, from left to right, shows the pressure averaged over all of the weather patterns and then averaged for each individual weather pattern (node) shown in Fig. 3, progressing from the top-left corner of Fig. 3, down the leftmost column, and then proceeding down through each remaining column. The vertical lines in Fig. 4a indicate weather patterns at the top of each column in Fig. 3. The pressures plotted in Fig. 4a can be related to the master SOM in Fig. 3. For instance, the results that relate to the first column of the master SOM (nodes [1, 1], [1, 2], [1, 3], and [1, 4] in Fig. 4a) show that the AMPS-forecasted pressure at Lettau increases when moving from the weather pattern in node [1, 1] to the weather pattern in node [1, 2]. Subsequently, the AMPS-forecasted pressure decreases when moving to the weather patterns in nodes [1, 3] and [1, 4]. A careful analysis of Fig. 3 will reveal the same results, as the cyclone is positioned farther north and east in node [1, 2] than in node [1, 1] resulting in a higher pressure at Lettau for the weather pattern in node [1, 2]. Subsequently, in nodes [1, 3] and [1, 4] the cyclone shifts farther south causing the pressure at Lettau to decrease for both of these weather patterns. A similar analysis can be conducted for the remaining columns. 
By doing so, it will be seen that Lettau experiences the highest sea level pressure for the weather patterns identified in the top row of the master SOM and the lowest sea level pressure for the weather patterns in the bottom row of the master SOM. Lettau is located toward the south-central portion of the RIS and, therefore, these results are consistent with the positioning of the cyclones for the different weather patterns as discussed when Fig. 3 was introduced. The cyclones centered in the northern part of the domain are found in the top row of the SOM, causing the pressure at the more southern Lettau site to be higher in these instances. Additionally, the southerly cyclones are represented by the weather patterns in the bottom row of the master SOM, which causes lower sea level pressure at Lettau for those weather patterns.
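The per-node averages behind these verification plots depend on first assigning each forecast time to its closest SOM weather pattern. The sketch below illustrates the standard best-matching-unit rule (minimum Euclidean distance to a node's reference pattern) that SOM classification is based on; the function name, the tiny three-node "SOM," and the sample field are invented for illustration and are not the authors' code or data.

```python
import math

def best_matching_node(field, node_patterns):
    """Index of the node reference pattern closest (Euclidean distance)
    to the input field; fields are flattened lists of grid values."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    dists = [dist(field, p) for p in node_patterns]
    return dists.index(min(dists))

# Toy "SOM" of three reference patterns, each a 3-point anomaly field.
nodes = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
sample = [0.1, 0.9, 0.05]       # closest to the second reference pattern
assigned = best_matching_node(sample, nodes)   # -> 1
```

Once every forecast is mapped to a node in this way, averaging any variable within each node produces the left-to-right node sequence plotted in the verification figures.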

As mentioned above, Fig. 4a shows the verification plot for the average pressure at the Lettau AWS. The average pressure at Lettau ranges from approximately 968 to 987 hPa depending on the weather pattern present and the model forecast hour. This indicates that the pressure at Lettau is quite variable and strongly tied to the weather pattern present over the RIS (as seen by the sawtooth pattern in the average pressure verification plot). Other AWS sites on the southern ice shelf, including Elaine, Gill, Marilyn, and Schwerdtfeger, display similar pressure results (plots not shown). To better understand how the model performs for different weather patterns, the model forecasts are compared to the AWS observations. Figures 4b and 4c show, respectively, the model bias and the model RMSE verification plots for the pressure at Lettau. Even though the pressure at Lettau is variable and dependent on the weather pattern (Fig. 4a), the model bias and RMSE for the shorter forecast hours are fairly consistent over the different weather patterns identified by the master SOM (Figs. 4b and 4c). This is shown in the verification plots by the lack of variation in the pressure bias and RMSE from left to right across Figs. 4b and 4c for the shorter forecast hours, which indicates small variation in the model error from one weather pattern to the next. Therefore, for the short-term forecasts, the model predicts the pressure at Lettau with a consistent bias, and the bias value indicated in Fig. 4b could be used to adjust the forecast pressure at this location as a postprocessing correction. As the model forecast times increase to 48–57 and 60–69 h, the model bias and RMSE are no longer consistent between weather patterns. This is shown in the results for the 60–69-h model bias and RMSE (the dark blue lines in Figs. 4b and 4c), where the bias ranges from approximately −4 to −9 hPa and the RMSE ranges from approximately 5 to 9.5 hPa. For the longer forecast hours, then, the model predicts the pressure more accurately in some weather patterns than in others. In this instance, a single bias value cannot be applied to adjust the forecast pressure; instead, the bias correction would need to be applied as a function of the synoptic weather pattern.
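The per-pattern bias and RMSE statistics discussed above reduce to a simple grouped computation over matched forecast/observation pairs. The sketch below is illustrative only: the pressure values and node labels are invented, and the function name is not from the paper.

```python
import math
from collections import defaultdict

def node_bias_rmse(pairs):
    """pairs: list of (node, forecast, observation) tuples.
    Returns {node: (bias, rmse)}, with bias = mean(forecast - obs)."""
    errors = defaultdict(list)
    for node, fcst, obs in pairs:
        errors[node].append(fcst - obs)
    stats = {}
    for node, errs in errors.items():
        bias = sum(errs) / len(errs)
        rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
        stats[node] = (bias, rmse)
    return stats

# Invented matched pressure pairs (hPa), grouped by SOM node.
pairs = [
    ("[1,1]", 978.0, 980.0),   # error -2
    ("[1,1]", 973.0, 975.0),   # error -2
    ("[1,2]", 984.0, 985.0),   # error -1
    ("[1,2]", 968.0, 970.0),   # error -2
]
stats = node_bias_rmse(pairs)
# node [1,1]: bias -2.0, RMSE 2.0; node [1,2]: bias -1.5, RMSE ~1.58
```

A pattern-dependent postprocessing correction of the kind described above would subtract the bias of whichever node the current forecast maps to, rather than a single overall bias.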

A similar analysis can be conducted for the wind speed at Pegasus North. Figure 5 shows the verification plots for the average wind speed and the model bias at the Pegasus North AWS. AMPS predicts the strongest wind speeds at Pegasus North for the weather patterns identified at the top of the three leftmost columns in the master SOM (Fig. 3), with wind speed decreasing sharply when moving down each column. Referring back to the sea level pressure anomaly plots of the master SOM (Fig. 3), this is consistent with the strong pressure gradient that exists over Pegasus North for nodes [1, 1], [2, 1], and [3, 1], due to the location of the cyclone in the northern portion of the domain for these weather patterns, and with the weakening pressure gradient found when moving down each of these columns. Conversely, the average AWS wind speeds at Pegasus North are not as strong as those simulated by AMPS and do not show the strong variability when moving from the top to the bottom of the SOM columns. Analysis of the wind speed bias verification plot at Pegasus North (Fig. 5b) indicates that the model errors are not consistent over the identified weather patterns, as seen by the variability in the wind speed bias between different weather patterns. The plot shows that on average the model greatly overpredicts the wind speed for the weather patterns in the first row of the SOM and that the model bias for the remaining weather patterns is quite variable, but generally small.

Fig. 5.

(a) Verification plot of average wind speed for the Pegasus North AWS location calculated for the AWS observations (black dotted line) and the following forecast categories: 0–9 (black solid line), 12–21 (red solid line), 24–33 (orange solid line), 36–45 (green solid line), 48–57 (light blue solid line), and 60–69 h (dark blue solid line). (b) Verification plot of wind speed bias for the Pegasus North AWS location calculated for the same forecast categories (same lines and colors) presented in (a).


A situation where the model bias varies over the identified weather patterns provides an excellent example of the advantage of the weather-pattern-based method of model evaluation. One traditional method for evaluating forecast models is to calculate an average of the forecast wind speeds and an average of the observed wind speeds at Pegasus North, with the bias taken as the difference between these two averages. The ALL node labeled in the verification plots is the equivalent of this traditional method: it represents the statistic calculated over all 20 weather patterns. The ALL model bias in Fig. 5b reveals that the traditional method would have indicated that on average the model overpredicts the wind speed at Pegasus North by about 1.5 m s−1. The weather-pattern-based method instead reveals that, depending on the weather pattern, the model can overpredict the wind speed by almost 7.5 m s−1 or underpredict it by 1.5 m s−1. In this instance, the weather-pattern-based method of model evaluation provides information to the end user that can be valuable in interpreting the forecast wind speed at Pegasus North.
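How an aggregate "ALL" bias can mask large, offsetting per-pattern errors is easy to see in a small numerical sketch. The wind speed values and pattern labels below are invented for illustration and are not the Fig. 5 data.

```python
# (pattern, forecast wind speed, observed wind speed) in m/s; values invented.
records = [
    ("top-row", 11.0, 4.0),   # pattern where the model overpredicts strongly
    ("top-row", 12.0, 4.0),
    ("other", 5.0, 6.0),      # pattern with a slight underprediction
    ("other", 5.0, 6.5),
]

errors = [f - o for _, f, o in records]
all_bias = sum(errors) / len(errors)          # the "ALL" statistic

def pattern_bias(name):
    errs = [f - o for n, f, o in records if n == name]
    return sum(errs) / len(errs)

# The aggregate bias (3.125 m/s here) averages a large positive bias in
# one pattern (7.5) against a small negative bias in the other (-1.25),
# hiding the pattern-dependent behavior.
```

Stratifying by pattern before averaging, as the verification plots do, is what exposes the difference.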

The weather-pattern-based method of model evaluation can also be used to determine if model errors associated with a given weather pattern are spatially consistent. Figure 6 shows the wind speed average and bias verification plots for Ferrell AWS (FER in Fig. 2b). It can be seen in the average wind speed verification plot that the model-forecasted wind speed is connected to the synoptic weather pattern in a fashion similar to that of the model-forecasted wind speed at Pegasus North. The strongest wind speeds at Ferrell are also forecasted for the synoptic patterns found in the top-left corner of the master SOM (as seen by the sawtooth pattern in the average wind speed verification plot).

Fig. 6.

As in Fig. 5, but for the Ferrell AWS location.


Ferrell is located approximately 100 km to the northeast of Pegasus North. Pegasus North is positioned to the south of Ross Island in a region of complex terrain, while Ferrell is situated east of Ross Island in an area away from significant topographic influences. Examining the Ferrell wind speed bias verification plot (Fig. 6b) shows that, unlike at Pegasus North, the model forecasts the wind speed at Ferrell with a fairly consistent bias over each of the weather patterns. Therefore, the model is correctly adjusting the forecast wind speed at Ferrell as the synoptic weather patterns change. The weather-pattern-dependent bias at Pegasus North is thus likely due to a poor representation in AMPS, at this coarse resolution, of the interaction of the synoptic flow with the local complex topography.

Expanding on the spatial analysis of model performance, the Willie Field AWS (WFD in Fig. 2b) is located about 16 km to the northeast of Pegasus North. Even though Willie Field is situated roughly between Pegasus North and Ferrell, its average wind speed verification plot shows a much less defined sawtooth pattern for the forecast wind speed over the different weather patterns (Fig. 7a). As at Ferrell and Pegasus North, the strongest wind speeds at Willie Field are forecast for the weather patterns in the top-left corner of the master SOM, but the magnitude of these wind speeds is much lower than at Pegasus North or Ferrell. For instance, the average forecast wind speed for node [1, 1] for the 12–21-h forecast category is 10.8 m s−1 at Pegasus North and 11.8 m s−1 at Ferrell, but only 5.6 m s−1 at Willie Field. The lower wind speeds at Willie Field can be attributed to the topographic influence of Ross Island within the model. Monaghan et al. (2004) analyzed the average wind speed and direction from the 3.3-km-grid output from AMPS in the area of Ross Island. Figure 3 in Monaghan et al. (2004) shows that the Willie Field AWS is located in an area of climatologically low wind speeds due to the persistent southerly winds in the area and the blocking effect of Ross Island. Figure 7b shows the wind speed bias for the Willie Field AWS and indicates that the model predicts the wind speed at this site reasonably accurately: the model bias is relatively close to 0 m s−1, compared to other AWS locations, for each forecast hour category and for all 20 identified weather patterns. Conversely, the Pegasus North verification plot in Fig. 5b shows that the model has a strong positive wind speed bias at that location for nodes [1, 1], [1, 2], [2, 1], [2, 2], [3, 1], and [3, 2], which correspond to southerly flow at Pegasus North. Furthermore, comparing the AWS observations by weather pattern for Pegasus North (Fig. 5a) and Willie Field (Fig. 7a) shows that the observed wind speeds at the two locations are quite similar, indicating that for a given weather pattern Pegasus North experiences wind speeds similar to those at Willie Field. Figure 3 of Monaghan et al. (2004) shows that, according to the model, the Pegasus North AWS is located in an area of slightly stronger average wind speeds than those predicted at Willie Field. Therefore, for weather patterns with southerly flow (nodes [1, 1], [1, 2], [2, 1], [2, 2], [3, 1], and [3, 2]), it appears that the model underrepresents the blocking effects of Ross Island at the location of the Pegasus North AWS. This type of information provided by the weather-pattern-based method of model evaluation can be useful to model developers when making improvements to the model.

Fig. 7.

As in Fig. 5, but for the Willie Field AWS location.


The wind direction at the Vito AWS is another example where the weather-pattern-based model evaluation technique is beneficial in evaluating the performance of the model for different weather patterns. Figure 8b shows the average wind direction verification plot for the Vito AWS (VTO in Fig. 2b). On average, AMPS predicts a fairly consistent wind direction of approximately 190° for each weather pattern. The AWS observations (shown in Fig. 8b by the black dotted line) indicate that the average wind direction varies more than the modeled wind direction and that it varies clearly as a function of weather pattern. For instance, inspecting the bottom row of the node-averaged sea level pressure fields of the master SOM (nodes [1, 4] through [5, 4] in Fig. 3) shows that, moving across the row, the cyclone shifts from the northeast corner to the southeast corner of the domain. This locates the cyclone to the east of the Vito AWS in node [1, 4], so a southwesterly wind would be expected at Vito for this weather pattern. Moving through the other weather patterns in the bottom row of the SOM, the cyclone progressively shifts to the southeast of the Vito AWS location, and the winds at Vito would therefore be expected to become more westerly and eventually northwesterly as this progression is made. The AWS observations in the wind direction verification plot (Fig. 8b) confirm that the wind direction for the nodes in the bottom row of the SOM does in fact shift from southwesterly in the bottom-left corner of the master SOM (node [1, 4]) to more westerly in the bottom center (node [3, 4]) and eventually northwesterly in the bottom-right corner (node [5, 4]). 
The AMPS forecasts do not indicate this progression of wind direction as the synoptic pattern changes. It should be noted, however, that in nodes [4, 4] and [5, 4], where northwesterly wind would be expected and where the strong wind direction bias exists, the pressure gradient is extremely weak. This weak pressure gradient results in lower wind speeds (see Fig. 8a) and thus less predictable wind directions, which partly explains the large wind direction biases; even so, it appears that the model is not adequately representing the impacts of the varying direction of the pressure gradient for these patterns. In this example, the weather-pattern-based method of model evaluation makes it easy to identify the weather patterns for which the model forecasts the wind direction accurately and those for which it struggles.
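Averaging wind direction, as in Fig. 8b, requires care because direction wraps at 360°. The paper does not state its averaging method, so the vector (circular) mean below is a conventional sketch rather than the authors' procedure; the example values are invented.

```python
import math

def circular_mean_deg(directions):
    """Circular mean of angles in degrees via unit-vector averaging."""
    s = sum(math.sin(math.radians(d)) for d in directions)
    c = sum(math.cos(math.radians(d)) for d in directions)
    return math.degrees(math.atan2(s, c)) % 360.0

# Directions straddling north: the arithmetic mean of 350, 10, and 30
# is 130, but the physically meaningful circular mean is 10.
mean_dir = circular_mean_deg([350.0, 10.0, 30.0])
```

The analogous wrap-aware direction error is the signed angular difference, `((forecast - observed + 180) % 360) - 180`, which keeps a bias between −180° and +180°.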

Fig. 8.

(a) As in Fig. 5a, but for the Vito AWS location. (b) Verification plot of average wind direction for the Vito AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) as presented in (a).


The weather-pattern-based model evaluation technique can also be used to identify interesting phenomena for further research. For instance, at the Cape Bird AWS (CBD in Fig. 2b), the SOM analysis can be used to investigate potential connections between model errors for different weather parameters (e.g., pressure and wind speed). Figure 9 shows the average pressure and average wind speed verification plots for the Cape Bird AWS. The average pressure plot indicates that the model overpredicts the pressure at Cape Bird for each of the identified weather patterns (the AWS observations are shown by the black dotted lines and the forecasts by solid lines) and that the forecast pressure decreases for the longer forecast hours. The average wind speed plot indicates that the model predicts the wind speed fairly well for the shorter forecast hours but greatly overpredicts it for the longer forecast hours. For example, for node [1, 1], the forecast pressure drops from 987.5 hPa during the 0–9-h forecast to 979.8 hPa during the 60–69-h forecast, with the AWS observations averaging 977.6 hPa. A similar analysis of the wind speed shows that the forecast wind speed increases from 6.4 m s−1 during the 0–9-h forecast to 13.9 m s−1 during the 60–69-h forecast, with the AWS observations averaging 4.6 m s−1. A possible explanation is a connection between the decrease in model pressure with forecast time and the increase in model wind speed with forecast time. Cape Bird is located on the leeward side of Ross Island for weather patterns with southerly flow (nodes in the first three columns of the master SOM), and Powers et al. (2003) discuss the tendency for von Kármán vortices to spin up in the lee of Ross Island under conditions of strong southerly flow. 
The development of these vortices as the model is given time to spin up could potentially explain the strong decrease in model pressure and strong increase in model wind speed with increased forecast time, especially for weather patterns with southerly flow, although further research would be needed to determine the cause of the model errors in this scenario. This is an example where the SOM evaluation method does not provide all of the information necessary to understand why the model performs better in some instances than others, but it can indicate areas where further research could improve our understanding of the model results and the atmospheric dynamics in the region.

Fig. 9.

(a) Verification plot of average pressure for the Cape Bird AWS location calculated for the AWS observations (black dotted line) and the following forecast categories: 0–9 (black solid line), 12–21 (red solid line), 24–33 (orange solid line), 36–45 (green solid line), 48–57 (light blue solid line), and 60–69 h (dark blue solid line). (b) Verification plot of average wind speed for the Cape Bird AWS location calculated for the AWS observations (black dotted line) and the same forecast categories (same lines and colors) as presented in (a).


4. Conclusions

In this study the model in AMPS was evaluated using a weather-pattern-based method of model evaluation, with the purpose of determining the benefits of this approach. The analysis used SOMs to identify 20 weather patterns that represent the range of weather patterns occurring over the RIS. Each model forecast was then associated with one of the 20 identified weather patterns in order to evaluate the model performance for each weather pattern. AWS observations of pressure, wind speed, and wind direction were used to validate the model forecasts, and statistics such as average wind speed and model wind speed bias were calculated for various locations on the RIS. The results showed instances where model performance was a function of the weather pattern, confirming that the weather-pattern-based method is a useful evaluation technique. Furthermore, model performance as a function of weather pattern was found to depend on the atmospheric variable (e.g., pressure, wind speed, or wind direction) and on the location being analyzed: a weather-pattern dependence of the wind speed errors at one location did not imply the same dependence at another location. The weather-pattern-based method of model evaluation therefore provides specific results for each atmospheric variable and each location of interest. It was also shown that the method can identify large model biases that may go undetected by other methods of model evaluation. In conclusion, the weather-pattern-based method of model evaluation presented here can identify model errors as a function of weather pattern and can therefore provide model developers and users with additional guidance regarding model performance.

Acknowledgments

NSF Grants ANT-0636811 and ATM-0404790 supported this work. The AMPS data were retrieved from the National Center for Atmospheric Research’s Computational and Information Systems Laboratory. The AWS data were retrieved from the Automatic Weather Station Program and Antarctic Meteorological Research Center (Matthew Lazzara, Linda Keller, Jonathan Thom, and George Weidner; NSF Grants ANT-0537827, ANT-0636873, and ANT-0838834).

REFERENCES

  • Bromwich, D. H., J. J. Cassano, T. Klein, G. Heinemann, K. M. Hines, K. Steffen, and J. E. Box, 2001: Mesoscale modeling of katabatic winds over Greenland with the Polar MM5. Mon. Wea. Rev., 129, 2290–2309.

  • Bromwich, D. H., A. J. Monaghan, J. G. Powers, J. J. Cassano, H. L. Wei, Y. H. Kuo, and A. Pellegrini, 2003: Antarctic Mesoscale Prediction System (AMPS): A case study from the 2000–01 field season. Mon. Wea. Rev., 131, 412–434.

  • Bromwich, D. H., A. J. Monaghan, K. W. Manning, and J. G. Powers, 2005: Real-time forecasting for the Antarctic: An evaluation of the Antarctic Mesoscale Prediction System (AMPS). Mon. Wea. Rev., 133, 579–603.

  • Cassano, J. J., J. E. Box, D. H. Bromwich, L. Li, and K. Steffen, 2001: Verification of Polar MM5 simulations of Greenland’s atmospheric circulation. J. Geophys. Res., 106, 13 867–13 890.

  • Cassano, J. J., P. Uotila, A. H. Lynch, and E. N. Cassano, 2007: Predicted changes in synoptic forcing of net precipitation in large Arctic river basins during the 21st century. J. Geophys. Res., 112, G04S49, doi:10.1029/2006JG000332.

  • Grell, G. A., J. Dudhia, and D. R. Stauffer, 1995: A description of the fifth-generation Penn State/NCAR Mesoscale Model (MM5). NCAR Tech. Note TN-398+STR, 122 pp. [Available from UCAR Communications, P.O. Box 3000, Boulder, CO 80307.]

  • Guo, Z., D. H. Bromwich, and J. J. Cassano, 2003: Evaluation of Polar MM5 simulations of Antarctic atmospheric circulation. Mon. Wea. Rev., 131, 384–411.

  • Higgins, M. E., and J. J. Cassano, 2009: Impacts of reduced sea ice on winter Arctic atmospheric circulation, precipitation and temperature. J. Geophys. Res., 114, D16107, doi:10.1029/2009JD011884.

  • Hines, K. M., and D. H. Bromwich, 2008: Development and testing of Polar Weather Research and Forecasting (WRF) Model. Part I: Greenland Ice Sheet meteorology. Mon. Wea. Rev., 136, 1971–1989.

  • Keller, L. M., M. A. Lazzara, J. E. Thom, G. A. Weidner, and C. R. Stearns, 2009: Antarctic Automatic Weather Station data for the calendar year 2010. Rep. UW SSEC 10.10.K1, Space Science and Engineering Center, University of Wisconsin—Madison, 44 pp. [Available from Space Science and Engineering Center, University of Wisconsin—Madison, Madison, WI 53706.]

  • Kohonen, T., 2001: Self-Organizing Maps. 3rd ed. Springer, 501 pp.

  • Monaghan, A. J., D. H. Bromwich, J. G. Powers, and K. W. Manning, 2004: The climate of the McMurdo, Antarctica, region as represented by one year of forecasts from the Antarctic Mesoscale Prediction System. J. Climate, 18, 1174–1189.

  • Powers, J. G., 2007: Numerical prediction of an Antarctic severe wind event with the Weather Research and Forecasting (WRF) model. Mon. Wea. Rev., 135, 3134–3157.

  • Powers, J. G., A. J. Monaghan, A. M. Cayette, D. H. Bromwich, Y. Kuo, and K. W. Manning, 2003: Real-time mesoscale modeling over Antarctica: The Antarctic Mesoscale Prediction System. Bull. Amer. Meteor. Soc., 84, 1533–1545.

  • Reusch, D. B., R. B. Alley, and B. C. Hewitson, 2005: Relative performance of self-organizing maps and principal component analysis in pattern extraction from synthetic climatological data. Polar Geogr., 29, 188–212.

  • Seefeldt, M. W., G. J. Tripoli, and C. R. Stearns, 2003: A high-resolution numerical simulation of the wind flow in the Ross Island region, Antarctica. Mon. Wea. Rev., 131, 435–458.

  • Uotila, P., A. H. Lynch, J. J. Cassano, and R. I. Cullather, 2007: Changes in Antarctic net precipitation in the 21st century based on Intergovernmental Panel on Climate Change (IPCC) model scenarios. J. Geophys. Res., 112, D10107, doi:10.1029/2006JD007482.