FROST-2014: The Sochi Winter Olympics International Project

Dmitry Kiktev, Hydrometcentre of Russia, Moscow, Russia
Paul Joe, Environment and Climate Change Canada, Toronto, Ontario, Canada
George A. Isaac, Environment and Climate Change Canada, Toronto, Ontario, Canada
Andrea Montani, Regional Agency for Prevention, Environment and Energy in the Emilia-Romagna Region, Bologna, Italy
Inger-Lise Frogner, MET Norway, Oslo, Norway
Pertti Nurmi, Finnish Meteorological Institute, Helsinki, Finland
Benedikt Bica, Central Institute for Meteorology and Geodynamics, Vienna, Austria
Jason Milbrandt, Environment and Climate Change Canada, Dorval, Quebec, Canada
Michael Tsyrulnikov, Hydrometcentre of Russia, Moscow, Russia
Elena Astakhova, Hydrometcentre of Russia, Moscow, Russia
Anastasia Bundel, Hydrometcentre of Russia, Moscow, Russia
Stéphane Bélair, Environment and Climate Change Canada, Dorval, Quebec, Canada
Matthew Pyle, National Centers for Environmental Prediction, College Park, Maryland
Anatoly Muravyev, Hydrometcentre of Russia, Moscow, Russia
Gdaly Rivin, Hydrometcentre of Russia, Moscow, Russia
Inna Rozinkina, Hydrometcentre of Russia, Moscow, Russia
Tiziana Paccagnella, Regional Agency for Prevention, Environment and Energy in the Emilia-Romagna Region, Bologna, Italy
Yong Wang, Central Institute for Meteorology and Geodynamics, Vienna, Austria
Janti Reid, Environment and Climate Change Canada, Toronto, Ontario, Canada
Thomas Nipen, MET Norway, Oslo, Norway
Kwang-Deuk Ahn, National Institute for Meteorological Sciences, Seogwipo, South Korea

Abstract

The World Meteorological Organization (WMO) World Weather Research Programme’s (WWRP) Forecast and Research in the Olympic Sochi Testbed program (FROST-2014) was aimed at the advancement and demonstration of state-of-the-art nowcasting and short-range forecasting systems for winter conditions in mountainous terrain. The project field campaign was held during the 2014 XXII Olympic and XI Paralympic Winter Games and preceding test events in Sochi, Russia. An enhanced network of in situ and remote sensing observations supported weather predictions and their verification. Six nowcasting systems (model based, radar tracking, and combined nowcasting systems), nine deterministic mesoscale numerical weather prediction models (with grid spacings down to 250 m), and six ensemble prediction systems (including two with explicitly simulated deep convection) participated in FROST-2014. The project provided forecast input for the meteorological support of the Sochi Olympic Games. The FROST-2014 archive of winter weather observations and forecasts is a valuable information resource for mesoscale predictability studies as well as for the development and validation of nowcasting and forecasting systems in complex terrain. The resulting innovative technologies, exchange of experience, and professional developments contributed to the success of the Olympics and left a post-Olympic legacy.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

CORRESPONDING AUTHOR: Dmitry Kiktev, kiktev@mecom.ru


Six nowcasting systems, nine deterministic mesoscale numerical weather prediction models, and six ensemble prediction systems took part in the FROST-2014 project.

The area of the mountain cluster of Olympic sport venues and the Aibga meteorological station, viewed from above.

The Olympic Games are one of the most successful social inventions made in ancient Greece—like democracy, academia, or theater. As was done thousands of years ago, the modern Olympics bring people together from across the world for peaceful competition and invaluable human interactions. Meteorologists have not stayed away from these events. Since 2000, a number of meteorological projects have been organized in connection with the Olympic Games (Keenan et al. 2003; Wilson et al. 2010; Duan et al. 2012; Isaac et al. 2014; Golding et al. 2014). Most of them were conducted under the umbrella of the World Meteorological Organization’s (WMO) World Weather Research Programme (WWRP) as forecast demonstration projects (FDPs) and/or research and development projects (RDPs). FDPs implement scientifically established technologies in practice and demonstrate their capabilities. RDPs aim to advance meteorology and develop new forecasting methods and technologies. Both provide excellent opportunities for meteorologists from many countries to showcase and further develop their forecast technologies, compare the capabilities of different prediction systems, take advantage of enhanced observation coverage in the area of the Olympic Games, and, last but not least, provide operational meteorological support of sport events.

The Forecast and Research in the Olympic Sochi Testbed (FROST-2014) RDP/FDP was associated with the 2014 XXII Winter Olympic and XI Paralympic Games (henceforth the Games) held in Sochi, Russia, from 7 to 23 February and from 7 to 16 March 2014, respectively. FROST-2014 (Kiktev et al. 2015) dealt with complex terrain forecasting during winter ranging from nowcasting to short-range numerical weather prediction (NWP). Recently, a new RDP/FDP was initiated in connection with the 2018 Winter Olympics in Pyeongchang, South Korea.

This paper provides a general overview of the FROST-2014 project, outlines its achievements in nowcasting and short-range deterministic and ensemble forecasting, presents assessments of the performance of the project's automated forecasts versus manual forecasts, and concludes with a summary of the lessons learned and the legacy left.

OLYMPIC DEMANDS AND WEATHER CHALLENGES.

Timely provision of high-quality meteorological forecasts is very important to organizers, participants, and spectators of Olympic events because unfavorable weather conditions can lead to delays or even cancellations of open-air competitions. The general logistics of the Olympic infrastructure is also weather sensitive. The Sochi Olympic venues were divided into two clusters: a coastal cluster for indoor ice sport competitions and a mountain cluster for snow sport outdoor events. The latter was located in the area of Krasnaya Polyana township about 45 km away from the coast (see Fig. 1). Sport activities in the mountain cluster were especially weather dependent.

Fig. 1. (a) The Sochi Olympic area on the global map, (b) the magnified map with locations of the meteorological equipment, and (c) the mountain cluster with the stations and five sport venues. Red bulbs designate the automatic meteorological stations, the radar icon is the Doppler radar, green bulbs are the micro rain radars, blue bulbs are the temperature profilers, and the yellow bulb is the wind profiler.

The weather in the mountains can be notoriously capricious. In Sochi, this is exacerbated by the proximity of the Black Sea, a source of heat and moisture. Sharp weather contrasts and high variability are typical for the region. In winter, severe weather conditions include heavy precipitation, freezing rain, fog, and strong winds. The nearby Achishkho Ridge (10–15 km to the northwest of Krasnaya Polyana) experiences annual precipitation of up to 4.5 m and is the wettest place in Russia. During the winter, daily snowfall totals as large as 92 cm and snow intensities of up to 30 cm h−1 have been registered in the mountain cluster area. Conversely, sometimes the presence of snow might be limited, affecting snow sports. For example, a strong heat wave in mid-February 2014, with maximum temperatures of up to 19°C in Krasnaya Polyana, affected the snow cover and posed a serious concern for the ski slope managers. Table 1 presents other interesting weather situations and challenges worth further analysis.

Table 1. List of the most interesting weather cases during the Sochi Games.

Within the context of the Olympic Games, high-impact weather (HIW) is not necessarily restricted to common severe weather events. Because of the specificity of snow sports, HIW also includes transitions of meteorological variables through sport-specific decision-making thresholds (e.g., there are wind speed restrictions for ski jumping and visibility limitations for biathlon and mountain skiing). Accurate prediction of these sport-specific HIW conditions was as important and challenging as skillful traditional weather forecasts.

PROJECT SCOPE, GOALS, AND PARTICIPANTS.

The main focus of FROST-2014 was on nowcasting and high-resolution short-range numerical prediction, both deterministic and ensemble, of winter weather over complex terrain. The following project goals were identified:

  1. development of a comprehensive information resource of alpine winter weather observations and forecasts;

  2. development of nowcasting systems, as well as mesoscale deterministic and ensemble forecasting systems, for winter weather conditions in complex terrain with a focus on HIW phenomena;

  3. operational meteorological support of the Games;

  4. improvement of our understanding of regional HIW phenomena physics/mechanisms; and

  5. evaluation of the developed forecasting systems and assessing the benefits of their use (verification and societal impacts).

A list of participating institutions and consortia is given in Table 2.

Table 2. Institutions participating in FROST-2014.

METEOROLOGICAL OBSERVATION NETWORK.

The observational network in the region of Sochi was substantially expanded before the Games. Thirty-eight automatic weather stations (Fig. 1) were installed. In addition to temperature, humidity, atmospheric pressure, liquid precipitation, and wind speed and direction, some of these stations measured solid precipitation intensity and amount (15 stations), visibility (21 stations), cloud-base height (11 stations), radiation balance (6 stations), and snow cover parameters (19 stations). The network strategy was for each sport venue to have one basic station and up to five supplementary stations with a reduced list of observed parameters. The primary sampling interval was 10 min. At five stations it was enhanced to 1 min. In addition, high-vertical-resolution radiosondes were launched in Sochi four times daily, at 0000, 0600, 1200, and 1800 UTC.

A Vaisala WRM200 C-band Doppler dual-polarization radar was installed on Mount Akhun (Fig. 1) at an altitude of 680 m above sea level. This position was chosen to ensure optimal surveillance coverage and to monitor cloud and precipitation systems approaching the Olympic venues from the Black Sea. In winter 2013/14, data from two C-band Doppler radars (located in Samsun and Trabzon, Turkey) and two X-band radars (located in Simferopol and Donetsk, Ukraine) were kindly provided by the Turkish Meteorological Service and the Ukrainian State Air Traffic Service, respectively. The latter four radars were invaluable as they provided upstream coverage over the sea (Fig. 2). For the first time a nearly complete swath of radar coverage of the Black Sea was produced. These data were supplemented by measurements from an RPG humidity and temperature profiler (RPG-HATPRO), a Scintec LAP3000 sodar, an ATTEX MTP-5 temperature profiler, and two METEK micro rain radars (Fig. 1). These instruments were helpful for monitoring the lower-atmospheric layers in the valleys shaded from the Akhun radar by the mountains (Fig. 2).

Fig. 2. Example of the radar reflectivity composite for the region of the Games. The Akhun, Trabzon, Samsun, Donetsk, and Simferopol radar coverage patterns are shown by circles.

The Sochi observations also included images from seven webcams and snow surveys by local avalanche-protection troops. The real-time observation data were available to the FROST-2014 participants via the Internet from the project server (see the “FROST-2014 Archive” section of this paper below).

FORECAST VERIFICATION AND INTERCOMPARISON SETUP.

The verification setup for the FROST-2014 weather prediction systems was introduced in Murav’ev et al. (2013, 2015) and Nurmi et al. (2014, 2015). Predictions were compared with near-surface station data for the period from 15 January to 18 March 2014, unless indicated otherwise. Some nowcasting systems produced predictions at the observation locations, whereas other forecasting systems provided gridded fields. For gridded predictions, observations were compared with the closest grid points, without vertical adjustment and without accounting for slope orientation. As the models’ computational grids were different, some models were in a more favorable position for some stations. This effect could be significant in complex terrain. To reduce the resulting noise in the verification scores, aggregation over groups of similar stations was performed. The verification results are presented in the following sections.
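
To make the pointwise matching concrete, the sketch below shows, under simplifying assumptions (a regular latitude–longitude grid, no vertical or slope adjustment, hypothetical function and variable names), how a station can be paired with its nearest model grid point and how the MAE can be aggregated over a group of similar stations; it illustrates the setup described above and is not the project's verification code.

```python
import numpy as np

def nearest_grid_value(field, grid_lat, grid_lon, st_lat, st_lon):
    """Pick the model value at the grid point closest to a station
    (no vertical or slope adjustment, as in the FROST-2014 setup)."""
    dist2 = (grid_lat - st_lat) ** 2 + (grid_lon - st_lon) ** 2
    j, i = np.unravel_index(np.argmin(dist2), dist2.shape)
    return field[j, i]

def group_mae(forecasts, observations, station_group):
    """MAE aggregated over a group of similar stations to reduce
    location-to-location noise; inputs are dicts keyed by station id."""
    errors = [abs(forecasts[s] - observations[s]) for s in station_group]
    return float(np.mean(errors))
```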

NOWCASTING SYSTEMS.

The FROST-2014 participants provided the project with various kinds of prognostic information that was made available to the Sochi forecasting team for the preparation of official forecasts and the meteorological support of the Olympic events (see the section below on “Manual and Automated Forecasts”). Six nowcasting systems contributed to the project (Table 3). They are briefly characterized as follows.

Table 3. Nowcasting systems used in FROST-2014.

The Adaptive Blending of Observations and Model system (ABOM; Bailey et al. 2014) produces nowcasts combining observations, observation trends, and trends from a single NWP model, while the integrated weighted forecasts (INTW; Huang et al. 2012, 2014a,b) approach provides integrated nowcasts by blending observation data and weighted forecasts from several NWP models. Both systems use observations from the previous 6 h to train the algorithms and generate an improved point forecast. In FROST-2014, ABOM and INTW predicted 2-m temperature (T2m), relative humidity (RH2m), 10-m wind, and visibility for selected points at 10-min intervals for the first hour and then either hourly (ABOM) or every 10 min (INTW) for up to 8 h. INTW used model output from the Global Environmental Multiscale Model (GEM) with grid spacings of 1 and 0.25 km, the COSMO model versions with horizontal spacings of 7 and 2.2 km, and the Advanced Research version of the Weather Research and Forecasting Model of the National Institute for Meteorological Sciences (WRF-ARW-NIMS) (Table 4) to produce the integrated forecast. ABOM produced nowcasts based on each of these models. Both systems employ the visibility prediction algorithm described in Boudala and Isaac (2009) and Boudala et al. (2012), using nowcast RH2m to help overcome the model humidity errors.
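
As an illustration of the observation-plus-model-trend idea behind such blending (not the actual ABOM or INTW algorithm; the weighting function and its decay rate are assumed for the example), a minimal sketch might look like this:

```python
import numpy as np

def blend_nowcast(last_obs, model_series, lead_steps, weight_decay=0.25):
    """Blend the latest observation with the NWP-predicted trend: the nowcast
    starts at the observed value and relaxes toward the raw model forecast as
    the lead time grows (the weights here are illustrative, not ABOM/INTW's)."""
    model_now = model_series[0]
    nowcast = []
    for k in range(1, lead_steps + 1):
        w_obs = np.exp(-weight_decay * k)          # trust the observation early on
        model_trend = model_series[k] - model_now  # model-predicted change since now
        blended = w_obs * (last_obs + model_trend) + (1.0 - w_obs) * model_series[k]
        nowcast.append(blended)
    return np.array(nowcast)
```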

Table 4. FROST-2014 deterministic forecasting systems.

The Canadian Radar Decision Support (CARDS) system is a radar-processing nowcasting system based on Lagrangian radar data extrapolation (Joe et al. 2003). During the Games, CARDS mosaicked the data from the Akhun, Trabzon, Samsun, Donetsk, and Simferopol radars every 15 min. Point forecasts of precipitation were produced using a cross-correlation nowcasting technique (Bellon and Austin 1978). The uncertainty in the precipitation intensity was conveyed by back-trajectory analysis, estimating the upstream intensity along the mean track and the maximum intensity within a swath of ±8°. This approach has proved to be highly reliable (Ebert et al. 2004) and easily interpreted.
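
A minimal sketch of cross-correlation tracking with Lagrangian extrapolation is given below; it illustrates the general technique rather than the CARDS implementation, assumes a single domain-wide motion vector, and uses periodic shifts (np.roll) purely for brevity.

```python
import numpy as np

def echo_motion(prev, curr, max_shift=10):
    """Estimate one displacement vector (pixels per time step) that maximizes
    the cross-correlation between two consecutive reflectivity fields, which is
    the basic idea behind cross-correlation radar tracking."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            corr = np.corrcoef(shifted.ravel(), curr.ravel())[0, 1]
            if corr > best:
                best, best_shift = corr, (dy, dx)
    return best_shift

def extrapolate(curr, shift, n_steps):
    """Lagrangian extrapolation: advect the latest field along the estimated
    motion vector for n_steps (no orographic modification of intensity)."""
    dy, dx = shift
    return np.roll(np.roll(curr, dy * n_steps, axis=0), dx * n_steps, axis=1)
```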

Integrated Nowcasting through Comprehensive Analysis (INCA; Haiden et al. 2011; Kann et al. 2012) is a gridded analysis and nowcasting system that uses different kinds of observation and model forecast data. The FROST-2014 INCA domain was 180 km × 140 km. The system predicted precipitation and precipitation type at 10-min resolution. Wind speed and direction, T2m, RH2m, dewpoint, ground temperature, freezing level, and snow line were predicted at hourly resolution. The INCA nowcasting fields were merged into the NWP fields with a linearly decreasing weighting factor. The analysis background and model forecasts were provided by ALARO (Aire Limitee Adaptation/Application de la Recherche a l’Operationnel), a version of the ALADIN model (Wang et al. 2011) with a physics package designed for a horizontal grid spacing of around 5 km. For precipitation, the analysis background was derived from the Akhun radar.
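
The merging of nowcast and NWP fields with a linearly decreasing weight can be sketched as follows; the 6-h blending horizon is an assumed value for illustration, not necessarily the one used in the Sochi INCA setup.

```python
import numpy as np

def merge_nowcast_with_nwp(nowcast, nwp, lead_minutes, blend_end=360.0):
    """Weight the observation-based nowcast field against the NWP field with a
    factor that decreases linearly from 1 at analysis time to 0 at blend_end
    minutes (blend_end is an assumed illustrative value)."""
    w = max(0.0, 1.0 - lead_minutes / blend_end)
    return w * np.asarray(nowcast) + (1.0 - w) * np.asarray(nwp)
```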

The JOINT system generated nowcasts and short-range forecasts at station locations as weighted NWP multimodel means adjusted to the latest observations. The system aggregated all the latest deterministic model forecasts available from the project participants (Table 4), and also employed lagged average forecasting (Hoffman and Kalnay 1983), adding several overlapping consecutive model forecasts from earlier analyses. For the Games period, JOINT was implemented only for continuous meteorological variables, not including precipitation.
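
A minimal sketch of a JOINT-style combination is shown below, with inverse-MAE weighting of the (possibly lagged) model members and a simple shift toward the latest observation; the weighting scheme and the bias adjustment are illustrative assumptions rather than the actual JOINT formulation.

```python
import numpy as np

def joint_style_forecast(members, recent_mae, latest_obs):
    """Weighted multimodel mean in the spirit of JOINT: members is a list of
    forecast time series (including lagged runs) weighted inversely to their
    recent MAE, then shifted so that the blend matches the latest observation."""
    members = [np.asarray(m, dtype=float) for m in members]
    weights = np.array([1.0 / max(e, 1e-6) for e in recent_mae])
    weights /= weights.sum()
    blend = sum(w * m for w, m in zip(weights, members))
    return blend + (latest_obs - blend[0])   # anchor the first step to the observation
```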

The MeteoExpert system combined several nowcasting tools including a radar-processing component and a numerical model of the atmospheric boundary layer fed by observations and external NWP background data. Cross-correlation tracking, averaged Doppler velocity, and prognostic wind at the 700-hPa level were combined to estimate the precipitation advection. Site-specific 4-h forecasts of T2m, RH2m, dewpoint temperature, wind, precipitation intensity, cloud-base height, and visibility were provided by the system with a 10-min update (Bazlova 2014).

Most nowcasting systems have been developed for the prediction of summer convective phenomena and for regions with relatively flat topography. Experience with winter nowcasting in mountains has been very modest. The Science of Nowcasting Olympic Weather for Vancouver-2010 (SNOW-V10) was the first winter Olympic nowcasting project in complex terrain conducted under the WWRP that involved international researchers (Isaac et al. 2014). Several model-based nowcasting approaches tested during SNOW-V10 were adapted to the Sochi test bed. Testing of the systems in the different environments revealed some local specificity in their behavior. For example, during the 2010 Winter Olympics in Vancouver, British Columbia, most cases with reduced visibility were associated with snowfall. By contrast, in Sochi, low visibility was mostly caused by fog or low clouds. As a result of considerable errors in the numerical predictions of humidity, visibility reductions in fog were predicted less successfully than visibility reductions in precipitation.

Figure 3 displays the mean absolute errors (MAEs) of the point-specific NWP-based nowcasts of T2m and RH2m (INCA and CARDS are not shown in the figure, as INCA is not a point-based system and CARDS does not predict the considered variables). The persistence forecasts were still competitive compared with the more sophisticated techniques. For T2m, persistence was outperformed by the model-based systems only after 2–3 h. The nowcasts for T2m were more successful than those for RH2m, which was probably caused by the better skill of the temperature NWP input relative to the model humidity input into the nowcasting systems. After 1 h, the lowest MAEs of the T2m predictions were demonstrated by JOINT. For RH2m, INTW performed better than the other systems.

Fig. 3. MAEs of point-specific forecasts aggregated over stations at the sport venues of the mountain cluster. ABOM for COSMO-Ru2 is the ABOM system based on COSMO-Ru2 forecasts. Aggregation period ranges from 15 Jan to 18 Mar 2014, averaged over hourly runs.

In mountainous regions, Lagrangian radar echo extrapolation does not properly capture the orographic effects. The orographic impact on precipitation fields is complex and depends on the speed of the incoming flow and the stratification of the atmosphere (e.g., Medina et al. 2005). This impact is manifested in a general increase of precipitation on the windward side and a weakening on leeside slopes. Some preliminary assessments of the orographic forcing on precipitation intensity were obtained from a series of Akhun radar precipitation rate fields (not shown). Cross-correlation tracking of reflectivity fields at 1.5-km height above the radar, with a 5-min update and at 1-km horizontal resolution, was used to generate about 5,000 nowcasts for the 2013 winter season. Reflectivity was converted to precipitation rate using the Marshall–Palmer relationship (Marshall and Palmer 1948; Marshall and Gunn 1952). However, attempts to quantify systematic differences between the precipitation intensity in upstream areas and at the forecast locations have been inconclusive. Challenges include the objective identification and separation of orographic enhancement from other phenomena, the proper conversion of reflectivity to precipitation rate considering the precipitation type, extrapolation to the surface, and determining the upstream location and precipitation value. Nevertheless, the CARDS radar nowcasting products (90-min point predictions of precipitation intensity) proved very useful to the Sochi Olympic forecasters for estimating precipitation intensity and start and cessation times. The strong point of the radar approach with respect to NWP-based nowcasting is the more accurate initial locations of the meteorological features. Further work on the intercomparison of the radar and NWP-based nowcasts is ongoing.
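
For reference, the Marshall–Palmer conversion from reflectivity to rain rate used in such processing can be written as Z = 200 R^1.6; a direct transcription is sketched below (as noted above, applying a rain relationship to snow is itself a source of uncertainty).

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm per hour) with the
    Marshall-Palmer relation Z = a * R**b; a = 200 and b = 1.6 are the classical
    stratiform-rain coefficients and are less appropriate for snow."""
    z = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)
```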

An inherent part of nowcasting is the diagnosis of weather phenomena. In particular, precipitation type is of special interest for winter sport events. Environment and Climate Change Canada (EC) modified a radar dual-polarization algorithm (Park et al. 2009) for the C-band Akhun radar and compared it to the Vaisala hydrometeor classification algorithm (Liu and Chandrasekar 2000). Comparison against present weather detectors (Vaisala PWD-20 and PWD-22) at weather stations within the mountain cluster at 500–750 m above sea level showed that the EC algorithm detected rain better than the Vaisala algorithm (occurrence rates of 82% and 40%, respectively, versus the observed 90%). The EC algorithm overestimated wet snow over rain in the bottom part of the melting layer and underestimated wet snow at the top (Fig. 4). The Vaisala algorithm tended to produce deeper layers of wet snow where the EC algorithm reported graupel and dry snow (Reid et al. 2014). The main difference between the two algorithms is in the determination of the height of the melting level.

Fig. 4. The (top) EC and (bottom) Vaisala particle classifications for a 1.1° scan at 1755 UTC 26 Feb. The EC shows more rain than the Vaisala classification (red tones) and the opposite for wet snow (blue tones).

DETERMINISTIC NUMERICAL WEATHER FORECASTING.

Nine deterministic NWP systems contributed to the project (Table 4). Their descriptions can be found in Baldauf et al. (2011), Rivin et al. (2015), Milbrandt et al. (2016), Niemelä et al. (2014), and Janjić and Gall (2012).

Figures 5 and 6 give an impression of the general performance of the 1-km deterministic forecasting systems in the mountain cluster. More specific validation results are reported in Murav’ev et al. (2013, 2015). Figure 5 shows the MAEs for T2m, RH2m, and 10-m wind direction and speed as functions of lead time. Figure 6 presents the verification statistics for 1-h precipitation in terms of the equitable threat score (ETS) (WMO 2008). Both MAE and ETS are pointwise scores here and thus can suffer from the double-penalty problem (if an observed event is misplaced with respect to its predicted location, then this forecast is penalized twice: at both the actual and the predicted locations). Verification results with spatial methods are intended to be published in follow-up papers.
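
For completeness, the ETS used in Fig. 6 can be computed from the standard 2 × 2 contingency table as in the sketch below (function and argument names are illustrative):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score) for a yes/no event such as 1-h precipitation
    above a threshold: the hit count is corrected for hits expected by chance."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```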

Fig. 5. MAEs of 1-km-resolution model forecasts. The scores are aggregated over all model runs (COSMO-Ru1 and HARMONIE AROME, 0000, 0600, 1200, and 1800 UTC; GEM-1, 2300 UTC; NEMS/NMMB, 0000 and 1200 UTC) and over 22 stations in the mountain cluster. Here and in Figs. 6–8 the period is from 15 Jan to 18 Mar 2014.

Fig. 6. As in Fig. 5, but for the ETS of 1-h precipitation >1 mm (the higher the better).

Forecast error growth.

In Fig. 5, forecast error growth with lead time is hardly visible. For some of the models this error evolution was compared with the error growth over flat terrain. Over flatlands, the initial MAE was usually lower than in the mountains and the error growth was more pronounced (not shown). This difference might be important for some practical purposes; for example, the lagged average forecasting approach may be more efficient over complex terrain than over flat terrain. A number of studies have revealed similar patterns of forecast error evolution over complex terrain (Colman et al. 2013). It has been conjectured (Anthes et al. 1985) that, at least in some cases, physical forcing at the land surface, such as mountains, may contribute to extended atmospheric predictability. Some of the mechanisms behind this effect were investigated by Vukicevic and Errico (1990).

Intermodel differences.

From Figs. 5 and 6, one can see that the performance of a model relative to the other models depends on the predicted variable. The National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling System (NEMS)/Nonhydrostatic Multiscale Model on the B Grid (NMMB) model showed the best T2m MAEs and good precipitation scores, while it had the worst RH2m and wind speed MAEs among the 1-km models when averaged over all runs. The HIRLAM–ALADIN Research on Mesoscale Operational NWP in Euromed (HARMONIE) AROME model performed very well for wind; however, its T2m and precipitation scores were poor. Precipitation was better forecast by GEM-1, but its wind direction MAEs were the largest. In most cases, the scores of COSMO fell between those of the other models and were never the worst. These intermodel differences can have multiple causes. For example, in the case of the T2m and RH2m forecast scores, they can be linked to distinctions in the employed land surface models, different vertical resolutions in the lower boundary layer, etc. The differences in the wind scores can be attributed to differences in the model orography and roughness parameters. A more focused experimental setup is needed to identify the sources of individual distinctive features of model behavior more confidently.

Aggregation of the verification scores over all forecast start times masks some features of model behavior. More details are drawn out later in this paper (see Figs. 15 and 16, which are primarily devoted to the comparison of the numerical guidance with the human forecasts in the section on manual and automated forecasts) for forecasts started at 1200 UTC. Specifically, the diurnal cycle of the T2m MAE differed among the models: with a daytime maximum for COSMO-Ru7, COSMO-Ru2, NEMS/NMMB, and INCA (which transitioned to ALARO forecasts at these lead times) and a daytime minimum for WRF-ARW-NIMS and HARMONIE AROME. The odd behavior of HARMONIE AROME (poor T2m scores at night and the best ones during daytime) was investigated in Niemelä et al. (2014). It appeared that the large nighttime errors were mostly caused by the CANOPY turbulence scheme (Masson and Seity 2009). Without it, the temperature had a more moderate underestimation of 1°–2°C. For precipitation (Fig. 16), all of the models exhibited poorer forecasts during the daytime than at night.

Data assimilation.

There were several efforts to benefit from data assimilation for deterministic NWP in FROST-2014:

  • HARMONIE AROME used three-dimensional variational data assimilation (3DVAR) for upper-air quantities and optimum interpolation for surface variables. Only observations from regular near-surface stations (i.e., not including stations from the enhanced Olympic Sochi network), radiosondes, and aircraft were utilized. The background error statistics were created by using an ensemble method (Niemelä et al. 2014).

  • A nudging scheme (Schraff 1997) was implemented to assimilate near-surface data and radiosondes with the COSMO model at 7- and 2.2-km resolutions.

  • A limited-area 3DVAR method was developed at Roshydromet to assimilate near-surface, radiosonde, aircraft, and satellite wind data into the COSMO-Ru2 model.

The attempts to use data assimilation with COSMO-Ru2 and COSMO-Ru7 did not result in substantial forecast improvements in the Sochi test bed. This can be interpreted as follows. First, over a small domain, the information from the initial conditions is quickly swept out of the domain and is largely replaced by information propagated from the lateral boundary conditions; as a result, data assimilation in limited-area applications is in general not as beneficial as it is on the global scale. Second, land surface data assimilation, which affects the important surface forcing, was lacking in these experiments. Third, many more observations (radar and satellite) are needed to impact a model with tens of millions of degrees of freedom. This particularly concerns the vast upstream areas of the Black Sea, which are poorly covered by in situ observations.

Role of resolution.

Both the COSMO-Ru and GEM systems were available at three different horizontal grid spacings (Table 4). This made it possible to evaluate the effects of the horizontal grid spacing on the quality of the forecasts (Figs. 7 and 8). The MAE and the extremal dependence index (EDI; Ferro and Stephenson 2011) were selected as verification metrics. The EDI has been recommended as a good estimator of forecast accuracy for all thresholds and for rare events in particular. The EDI is positively oriented (the higher the better) and ranges from −1 to 1, with 0 corresponding to a random forecast. Note that in Figs. 7 and 8 the numbers of model runs per day were significantly different for COSMO-Ru and GEM (see the Fig. 7 caption). This may explain the flatter curves for COSMO-Ru compared to the GEM models, where the larger variability in the scores might be attributed to diurnal cycle effects.
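
The EDI used in Fig. 8 is defined from the hit rate H and false-alarm rate F as (ln F − ln H)/(ln F + ln H) (Ferro and Stephenson 2011); a direct transcription is sketched below (no protection against degenerate H or F values is included).

```python
import numpy as np

def extremal_dependence_index(hits, misses, false_alarms, correct_negatives):
    """EDI after Ferro and Stephenson (2011), based only on the hit rate H and
    the false-alarm rate F; it ranges from -1 to 1, with 0 for a random forecast."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return (np.log(false_alarm_rate) - np.log(hit_rate)) / \
           (np.log(false_alarm_rate) + np.log(hit_rate))
```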

Fig. 7. The role of the horizontal grid spacing for the (left) COSMO and (right) GEM model families. Scores are MAEs and are aggregated over all model runs (COSMO, 0000, 0600, 1200, and 1800 UTC; GEM-2.5, 2100 UTC; GEM-1, 2300 UTC; and GEM-0.25, 0000 UTC) and over 22 stations in the mountain cluster.

Fig. 8. As in Fig. 7, but for 1-h precipitation occurrence forecasts. Score is the EDI (the higher the better).

The near-surface forecast errors partly originate from the differences between the actual and model orographies. With smaller horizontal grid spacings, these errors are expected to be reduced. Indeed, the refinement of the COSMO model resolution from 7 to 2.2 km was beneficial for T2m, RH2m, and 10-m wind direction forecasts (Fig. 7). The further refinement of the COSMO-Ru model horizontal grid from 2.2 to 1.1 km appeared to be positive mainly for wind speed. For the GEM model, the improvement at higher resolution is clear for T2m. The transition to 250-m grid spacing was also quite beneficial for the nighttime wind direction, but made the wind speed forecast worse. In some cases, the effect of resolution enhancement was less evident.

A low-visibility event.

One of the most serious weather impacts on the Games was caused by the low clouds and related visibility reduction in the mountain cluster during 16–17 February. The men’s biathlon mass start was postponed from 16 to 17 February and then further to 18 February, and the snowboard qualification was postponed from 17 to 18 February. Both the long-lasting visibility reduction due to fog on 16 February and the subsequent window of relatively good visibility during the afternoon of 17 February (before the next visibility reduction due to heavy snowfall) were captured in the official forecast bulletins issued daily at 1500 local time (LT).

Figure 9 shows COSMO-Ru1 and COSMO-Ru2 forecasts starting at 0600 UTC 16 February, along with observations. In Fig. 9 one can see the growth of RH2m on 16 February (the onset of the event), RH2m then remaining at 100% for about 24 h (fog), with a subsequent decrease in the late afternoon of 17 February (the good-visibility window). It is remarkable that all phases of the event were reasonably well predicted by both COSMO-Ru versions (Shatunova et al. 2015) in terms of relative humidity (COSMO-Ru does not predict visibility directly). This numerical guidance was very helpful in preparing the official forecast of the 16 February HIW event, and the planned women’s biathlon mass start was held during the predicted window of good visibility on 17 February.

Fig. 9. The RH2m forecasts by COSMO-Ru2 and COSMO-Ru1 from 0600 UTC 16 Feb 2014 and corresponding observations for the low-visibility event at the biathlon stadium.

Along with the traditional meteorological variables, some project models predicted less common variables, such as visibility, cloud-base height, and reflectivity. Figure 10 illustrates direct visibility forecasts for the same event by three versions of the GEM model with different grid spacings. It is interesting to note that the forecast by the finest-resolution GEM-0.25 from 0000 UTC 16 February was the most successful. It realistically reproduced the timing of the sharp visibility reduction on 16 February (although the duration of the low-visibility period was underestimated).

Fig. 10. The visibility forecasts made by GEM-2.5 (from 2100 UTC 15 Feb), GEM-1 (from 2300 UTC 15 Feb), and GEM-0.25 (from 0000 UTC 16 Feb), as well as the corresponding observations for the low-visibility event at the biathlon stadium. A model prediction of 100 km indicates unlimited visibility. The PWD sensors can report a maximum of 20-km visibility.

ENSEMBLE PREDICTION.

The FROST-2014 ensemble prediction systems (EPSs) are listed in Table 5. Two convection-permitting systems (i.e., systems with explicitly simulated deep convection), COSMO-Ru2-EPS and HarmonEPS, were tested in research mode while the coarser-resolution EPSs were operational. All forecasts were issued twice a day, starting from 0000 and 1200 UTC, with the exception of the HIRLAM systems, which started at 0600 and 1800 UTC. Detailed information about the systems can be found in Frogner et al. (2016), Astakhova et al. (2015), Du et al. (2014), Iversen et al. (2011), Montani et al. (2013, 2014), and Wang et al. (2011). The Games area was within the operational domains of the ALADIN-Limited-Area Ensemble Forecasting (ALADIN-LAEF) system and the Grand Limited Area Ensemble Prediction System (GLAMEPS), whereas the other systems were specifically set up for FROST-2014.

Table 5. Ensemble prediction systems used in FROST-2014.

The EPSs generated a set of probabilistic products, including the ensemble mean and standard deviation for several near-surface and upper-air variables, the probability of exceeding a specified threshold, and ensemble meteograms for selected points. Additionally, pointwise calibrated and hourly updated GLAMEPS forecasts were produced. At the time of the Games, GLAMEPS had been operational for several years, and the development of calibrated forecasts had reached a level where it could be provided as part of the FDP. For HarmonEPS it was the first attempt to run the system in real time, and calibration was not part of the process. HarmonEPS was calibrated after the Games, and this is documented in Frogner et al. (2016). The impact of calibration on the skill of COSMO-based ensembles will be investigated in forthcoming studies. The ensemble products were systematically presented on the FROST-2014 website and were widely applied and appreciated by the Sochi forecasters.

After the Games the project research focused mainly on the possible advantages of high-resolution convection-permitting and multimodel ensembles, as well as on the effects of calibration. Figure 11 presents the continuous ranked probability score (CRPS; the lower the better) (WMO 2008) for the European Centre for Medium-Range Weather Forecasts (ECMWF) EPS, GLAMEPS, calibrated GLAMEPS, and HarmonEPS forecasts, three systems with quite different resolutions. While the ECMWF EPS and GLAMEPS had comparable numbers of ensemble members (51 and 54, respectively), HarmonEPS had only 13 members. The most striking feature in Fig. 11 is the effect of calibration, which produced much better scores for temperature and wind and slightly better scores for precipitation at most lead times. Running an EPS is expensive, whereas calibration is much cheaper in terms of computational cost and thus appears to be a highly beneficial approach.
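
The CRPS shown in Fig. 11 can be estimated directly from the ensemble members with the standard kernel form E|X − y| − 0.5 E|X − X′|, as in the sketch below (a plain estimator for a single case; fair or debiased variants are not considered here).

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS for one forecast case from an ensemble, using the kernel form
    E|X - y| - 0.5 * E|X - X'| (lower is better); members is a 1D array."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```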

Fig. 11. CRPSs for ECMWF EPS, GLAMEPS, calibrated GLAMEPS, and HarmonEPS (the lower the better): (top) T2m, (middle) 10-m wind speed, and (bottom) 3-h precipitation.

Other developments of HarmonEPS after the Games were the calibration and enrichment of the ensemble. Besides the 13 AROME-based members, another 13 ALARO-based members were added. Figure 12 shows the CRPS for the original HarmonEPS and its extended version (labeled as multiphysics). There is a clear effect of the ensemble extension leading to a better CRPS, which can be explained by the increased diversity in the ensemble and, thus, its higher representativeness. Figure 12 also includes calibrated HarmonEPS and a calibrated subset of GLAMEPS based on 26 members only, that is, the same number of members as in the extended HarmonEPS. As for GLAMEPS, calibration was beneficial for HarmonEPS, and calibrated HarmonEPS scored better than calibrated GLAMEPS with the same number of members, indicating that the finer-resolution calibrated HarmonEPS has higher potential than the calibrated GLAMEPS for predicting winter weather. For details, see Frogner et al.’s (2016) paper on the HIRLAM contributions to FROST-2014.

Fig. 12. CRPSs for 10-m wind speed forecasts for HarmonEPS, extended HarmonEPS with two subensembles, calibrated HarmonEPS, and calibrated GLAMEPS based on 26 members only.

Figure 13 illustrates the potential of a multimodel approach using the FROST-2014 EPSs. The areas under the relative operating characteristic (ROC) curves for individual EPSs and their combined multimodel ensemble are shown. The scores for convection-parameterized (left) and convection-permitting (right) EPSs are given as functions of forecast lead time for 6-h precipitation exceeding 1 mm. All FROST-2014 EPSs exhibited quite high and, on average, comparable ROC values. It may be noted that the scores of the multimodel ensemble are consistently higher than those of its constituents for all forecast ranges, indicating a better ability of the system to predict this type of event (A. Montani et al. 2016, unpublished manuscript).
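
The area under the ROC curve used in Fig. 13 can be estimated from forecast probabilities and observed binary outcomes as the probability that an event case receives a higher forecast probability than a non-event case; a minimal sketch (assuming probabilities derived, e.g., from ensemble member counts) is given below.

```python
import numpy as np

def roc_area(probabilities, outcomes):
    """Area under the ROC curve via the pairwise (Mann-Whitney) formulation:
    the chance that an observed event is given a higher forecast probability
    than a non-event, with ties counted as one half."""
    p = np.asarray(probabilities, dtype=float)
    y = np.asarray(outcomes, dtype=bool)
    pos, neg = p[y], p[~y]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```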

Fig. 13. Area under the ROC curve (the higher the better) for forecasts of the event “6-h accumulated precipitation is above 1 mm” aggregated over the stations of the mountain cluster for (left) convection-parameterized and (right) convection-permitting EPSs, as well as for the corresponding multimodel ensembles. Note that about 200 occurrences of the above event were observed during the verification period.

The role of spatial resolution for EPS performance is demonstrated in Fig. 14. Here, the debiased ranked probability skill score (RPSS) was selected as it makes ensembles with differing sizes comparable (Weigel et al. 2007). In general, the higher-resolution ensembles with an explicit treatment of convection performed better than the convection-parameterized systems (COSMO-Ru2 and HarmonEPS versus COSMO-Ru7 and GLAMEPS, respectively).

Fig. 14. Debiased RPSS (the higher the better) for 6-h accumulated precipitation forecast by two convection-parameterized (COSMO-S14-EPS and GLAMEPS) and two convection-permitting EPSs (COSMO-Ru2-EPS and HarmonEPS), aggregated over the stations of the mountain cluster.

Before the Games, the majority of the local forecasters had very limited experience in the use of ensemble forecast products. The Games experience facilitated the gradual embedding of probabilistic thinking into their working practices and created a new demand for this kind of numerical guidance. The probabilistic information tended to be used more actively by the forecasters for the second and third forecast days, while the deterministic predictions were preferred for the shorter forecast ranges. In some situations (particularly in the case of the previously mentioned low-visibility event) the information on forecast uncertainty was conveyed to the sport managers to support their decision-making.

FROST-2014 ARCHIVE.

A special server with data storage was dedicated to the FROST-2014 project at the Hydrometcentre of Russia. All the participants were provided with access to operational meteorological observations and used them to run and verify their forecasts for the Sochi region. The FROST-2014 contributors computed the forecasts at their home institutes in real time and uploaded the results to the server via the Internet. On the project website (http://frost2014.meteoinfo.ru), the forecasters and the project participants could access the data in digital and graphical formats and also use additional online tools for forecast verification and comparison.

The most intense data collection period was during the cold season of 2013/14. However, some of the forecast and observation records are 2–3 yr long. Automatic weather station data, regional surface synoptic observations (SYNOPs), radar graphical products and raw data (volume files), vertical profiler data, images from web cameras, upper-air sounding data, project automated forecasts, official forecast bulletins, and some additional information are available to the meteorological scientific community via the project server.

MANUAL AND AUTOMATED FORECASTS.

FROST-2014 was an end-to-end project. Its operational forecasts were used by the Olympic forecasting team, which was assembled from across Roshydromet for the meteorological support of the Games. The list of models and products expanded significantly in 2013 and even shortly before the Games. This diversity of forecast data was both a great help and a challenge. Sometimes the numerical guidance was misleading. Occasionally, the automated forecasting systems had difficulties in predicting the timing of weather events. Difficulties in predicting the presence/absence and amount of precipitation tended to grow under conditions of low-gradient fields. Substantial errors were noticed in relative humidity, wind direction, and maximum wind speed forecasts. The visibility and cloud-base forecasts should still be considered experimental despite their capability of producing a useful signal.

Time and practical experience were needed for the forecasters to adapt to the new products. Forecasters tend to use familiar products in their operational work. The most popular were products whose regular delivery started well before the Games and that were introduced to the forecasters during the pre-Olympic trainings in 2010–13. The transfer of experience from the FROST-2014 experts from EC and the COSMO consortium who lectured at the training courses helped considerably in building the forecasters’ confidence in the new forecasting products.

Under the operational time constraints, the forecasters usually did not have enough time to review and analyze all the available products. To compress this information and to facilitate preparation of the required hourly forecast updates for the information system of the Games, an automatic generation of a forecast first guess was employed using multimodel blended forecasts of the JOINT system. A special web interface was developed for the forecasters to correct this first guess, if necessary. This FROST-2014 data feed to the Olympic information system can be considered as one of the strongest societal impacts of the project. The performance of combined multimodel products was on average at the level of the best forecasts of individual forecasting systems and sometimes even exceeded it, especially during the first forecast hours.

FROST-2014 provided a good opportunity to compare the performance of the manual forecasts with that of the automated forecasts used as numerical guidance. Among the regular official products generated during the Games were the forecast bulletins for the mountain cluster of Olympic sport venues. These bulletins were issued at around 1500 LT and covered a 24-h period starting from 2200 LT, that is, with a 7-h lead time. The nearest model start time for comparison of the automated forecasts with the forecast bulletins is 1200 UTC (1600 LT). Some results of the intercomparison between the official and automated forecasts with hourly temporal resolution for the period from 1 November 2013 to 23 February 2014 are presented in Figs. 15 and 16.

Fig. 15. The skill of the official and model forecasts as a function of the lead time. MAEs of T2m aggregated over the mountain cluster (heights of about 600, 1,000, 1,500, and 2,000 m), during the period from 1 Nov 2013 to 23 Feb 2014 (HARMONIE AROME, from 9 Dec 2013; WRF-ARW-NIMS, from 23 Dec 2013), with the official forecasts issued at 1100 UTC, and the models started at 1200 UTC. After 24-h lead time, the HARMONIE AROME forecasts were issued with a 6-h time step, denoted by the blue dot at 30-h lead time in the plot.

Fig. 16. As in Fig. 15, but for the EDI (the higher the better) of 1-h precipitation occurrence.

Figures 15 and 16 and similar results on winds and visibility (not shown) demonstrate the following points of interest:

  • Automated temperature forecasts, especially blended multimodel forecasts, were competitive with manual forecasts.

  • For wind speed and visibility, the human forecasts demonstrated psychological biases toward higher wind speeds and lower visibility [the phenomenon of overforecasting hazardous events by human forecasters is discussed, e.g., by Doswell (2004)].

  • For precipitation, the manual forecasts did add value to the model forecasts.

SUMMARY AND CONCLUSIONS.

Weather forecasts were crucial for the efficient conduct of the Sochi Olympic Games. This information was essential for sport teams, organizers, broadcasters, spectators, and the general public. Forecasts affected decisions of sport managers and were the reason for a number of changes in the Games’ schedule. FROST-2014 nowcasts and NWP guidance data were used by the forecasters for meteorological support of the Games, and thus contributed to the success of these events.

Implementation of the project strengthened the numerical guidance for the Olympic weather services with new state-of-the-art forecast products. A series of training sessions, including ones with the participation of the project’s international experts, greatly helped with the capacity building of the forecasters. The multimodel JOINT forecasts served as a first guess for the forecasters in their production of the hourly prognostic updates requested by the International Olympic Committee. Involvement in the project provided important educational value to the local forecasters.

Despite the diversity of available state-of-the-art forecast data, the project experience shows that the tested systems were insufficient on their own for the meteorological support of such a high-profile event and that the role of a human forecaster was still crucial. A postevent survey among the forecasters showed their great interest in new prediction technologies resulting from FROST-2014. The survey also highlighted some lessons learned; for example, a diversity of available prognostic products makes their form and usability very important to forecasters.

The high-resolution data assimilation in the Sochi test bed was mostly limited to the assimilation of nonsatellite and nonradar observations. More extensive assimilation of remote sensing data and updating of the land surface fields are important for further forecast improvements over complex terrain.

The NWP systems demonstrated some benefits of transitioning from several kilometers to one kilometer and down to subkilometer grid spacing. A number of NWP postprocessing techniques (in particular, ABOM, INTW, JOINT, and calibrated GLAMEPS) were implemented to further refine the project numerical forecasts down to the individual Olympic venues and proved quite effective in complex terrain. Model-based nowcasts of continuous variables were informative and helpful, but sometimes struggled to beat persistence. Radar nowcasting was limited by the problem of Lagrangian echo extrapolation over complex terrain, but the forecasters found the CARDS products to be useful. The acquired experience facilitated the implementation of a number of new methods and products into operations during the post-Olympic period (e.g., radar data assimilation, new NWP postprocessing techniques with rapid forecast updates, spatial verification methods, etc.).

All the forecasting systems exhibited their strengths and weaknesses. It is quite difficult to single out an unambiguous winner among the systems that participated in the field campaign because the results of such a ranking vary substantially depending on location, meteorological variable, forecast lead time, and other factors. The same applies to the ensemble prediction systems. A more robust outcome is that, as over flat areas, the multimodel forecasts were consistently more informative than the forecasts of individual systems. However, there were significant differences in skill for particular cases and variables. These differences might come from many sources: data assimilation schemes, the types and numbers of the assimilated data, driving global models, configurations of nested limited-area models, and other details. A more rigorous unified experimental setup (e.g., with a common driving global model and common boundary conditions) is needed in this respect for more in-depth diagnostic studies and intercomparisons of the forecasting systems. In general, the FROST-2014 NWP systems were state of the art, so the Sochi test bed verifications may be considered characteristic of current NWP capabilities in mountain conditions.

Only a few systematic intercomparisons of multiple mesoscale forecasting systems in mountains are known because of the lack of appropriate observations and coordinated forecasting activities. In this respect the Sochi test bed provided a valuable information resource for the development of forecasting systems and research into mesoscale predictability over complex terrain. Despite the limitations of the observational network in the Sochi region, the content and density of these Olympic test bed observations substantially surpassed the normal operational networks. The observations, project forecasts, and official forecast bulletins are available to the meteorological scientific community via the FROST-2014 Internet server.

Another page in the history of the Olympics is closed. However, for FROST-2014 this is not the end of the story. The project participants continue processing and analyzing the field campaign data. A series of papers is under preparation to shape this project’s legacy.

ACKNOWLEDGMENTS

The authors acknowledge the guidance of the WWRP and its working groups, especially the Nowcasting and Mesoscale Research and Forecast Verification Research Working Groups, in facilitating and promoting this work. Roshydromet thanks the COSMO consortium and, in particular, DWD and MeteoSwiss for their help with the COSMO system. The authors thank Slobodan Nickovic, Nanette Lomarda, and Alexander Baklanov from the World Weather Research Division, WMO; Anna Glazer, Ruping Mo, Ivan Heckman, Monica Bailey, Laura Huang, David Hudak, Sudesh Boodoo, and Norman Donaldson from EC; Sami Niemelä, Sigbritt Näsman, Ari Aaltonen, Matias Brockmann, and Mikko Partio from FMI; John Bremnes from MET Norway; Kai Sattler from DMI; Alexander Kann, Jasmina Hadzimustafic, Florian Weidle, and Martin Suklitsch from ZAMG; Nikolai Bocharnikov and Tatyana Bazlova from IRAM; and Valery Lukyanov, Radomir Zaripov, Alexander Smirnov, Denis Blinov, Marina Shatunova, Dmitry Alferov, Alexander and Yury Melnichuk, Arkady Koldaev, and the Olympic forecasting team of Roshydromet.

REFERENCES

  • Anthes, R. A., Y.-H. Kuo, D. Baumhefner, R. M. Errico, and T. W. Bettge, 1985: Predictability of mesoscale atmospheric motions. Advances in Geophysics, Vol. 28B, Academic Press, 159–202, doi:10.1016/S0065-2687(08)60188-0.
  • Astakhova, E. D., A. Montani, and D. Yu. Alferov, 2015: Ensemble forecasts for the Sochi-2014 Olympic Games. Russ. Meteor. Hydrol., 40, 531–539, doi:10.3103/S1068373915080051.
  • Bailey, M. E., G. A. Isaac, I. Gultepe, I. Heckman, and J. Reid, 2014: Adaptive blending of model and observations for automated short range forecasting: Examples from the Vancouver 2010 Olympic and Paralympic Winter Games. Pure Appl. Geophys., 171, 257–276, doi:10.1007/s00024-012-0553-x.
  • Baldauf, M., A. Seifert, J. Förstner, D. Majewski, M. Raschendorfer, and T. Reinhardt, 2011: Operational convective-scale numerical weather prediction with the COSMO model: Description and sensitivities. Mon. Wea. Rev., 139, 3887–3905, doi:10.1175/MWR-D-10-05013.1.
  • Bazlova, T., 2014: Weather stations for Sochi 2014. Meteorological Technology International, August 2014, UKI Media, 142–145. [Available online at www.ukimediaevents.com/pub-meteorological.php.]
  • Bellon, A., and G. L. Austin, 1978: The evaluation of two years of real-time operation of a Short-Term Precipitation Forecasting Procedure (SHARP). J. Appl. Meteor., 17, 1778–1787, doi:10.1175/1520-0450(1978)017<1778:TEOTYO>2.0.CO;2.
  • Boudala, F. S., and G. A. Isaac, 2009: Parameterization of visibility in snow: Application in numerical weather prediction models. J. Geophys. Res., 114, D19202, doi:10.1029/2008JD011130.
  • Boudala, F. S., G. A. Isaac, R. Crawford, and J. Reid, 2012: Parameterization of runway visual range as a function of visibility: Implications for numerical weather prediction models. J. Atmos. Oceanic Technol., 29, 177–191, doi:10.1175/JTECH-D-11-00021.1.
  • Colman, B., K. Cook, and B. J. Snyder, 2013: Numerical weather prediction and weather forecasting in complex terrain. Mountain Weather Research and Forecasting: Recent Progress and Current Challenges, F. K. Chow, S. F. J. De Wekker, and B. J. Snyder, Eds., Springer, 655–692.
  • Doswell, C. A., III, 2004: Weather forecasting by humans—Heuristics and decision making. Wea. Forecasting, 19, 1115–1126, doi:10.1175/WAF-821.1.
  • Du, J., G. DiMego, B. Zhou, D. Jovic, B. Ferrier, B. Yang, and S. Benjamin, 2014: NCEP regional ensembles: Evolving toward hourly-updated convection-allowing scale and storm-scale predictions within a unified regional modeling system. 22nd Conf. on Numerical Weather Prediction/26th Conf. on Weather Analysis and Forecasting, Atlanta, GA, Amer. Meteor. Soc., J1.4. [Available online at https://ams.confex.com/ams/94Annual/webprogram/Paper239030.html.]
  • Duan, Y., and Coauthors, 2012: An overview of the Beijing 2008 Olympics Research and Development Project (B08RDP). Bull. Amer. Meteor. Soc., 93, 381–403, doi:10.1175/BAMS-D-11-00115.1.
  • Ebert, E. E., L. J. Wilson, B. G. Brown, P. Nurmi, H. E. Brooks, J. Bally, and M. Jaeneke, 2004: Verification of nowcasts from the WWRP Sydney 2000 Forecast Demonstration Project. Wea. Forecasting, 19, 73–96, doi:10.1175/1520-0434(2004)019<0073:VONFTW>2.0.CO;2.
  • Ferro, C. A. T., and D. B. Stephenson, 2011: Extremal dependence indices: Improved verification measures for deterministic forecasts of rare binary events. Wea. Forecasting, 26, 699–713, doi:10.1175/WAF-D-10-05030.1.
  • Frogner, I.-L., T. Nipen, A. Singleton, J. B. Bremnes, and O. Vignes, 2016: Ensemble prediction with different spatial resolution for the 2014 Sochi Winter Olympic Games: The effects of calibration and multimodel approaches. Wea. Forecasting, 31, 1833–1851, doi:10.1175/WAF-D-16-0048.1.
  • Golding, B. W., and Coauthors, 2014: Forecasting capabilities for the London 2012 Olympics. Bull. Amer. Meteor. Soc., 95, 883–896, doi:10.1175/BAMS-D-13-00102.1.
  • Haiden, T., A. Kann, C. Wittmann, G. Pistotnik, B. Bica, and C. Gruber, 2011: The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the eastern Alpine region. Wea. Forecasting, 26, 166–183, doi:10.1175/2010WAF2222451.1.
  • Hoffman, R. N., and E. Kalnay, 1983: Lagged average forecasting, an alternative to Monte Carlo forecasting. Tellus, 35A, 100–118, doi:10.1111/j.1600-0870.1983.tb00189.x.
  • Huang, L. X., G. A. Isaac, and G. Sheng, 2012: Integrating NWP forecasts and observation data to improve nowcasting accuracy. Wea. Forecasting, 27, 938–953, doi:10.1175/WAF-D-11-00125.1.
  • Huang, L. X., G. A. Isaac, and G. Sheng, 2014a: A new integrated weighted model in SNOW-V10: Verification of continuous variables. Pure Appl. Geophys., 171, 277–287, doi:10.1007/s00024-012-0548-7.
  • Huang, L. X., G. A. Isaac, and G. Sheng, 2014b: A new integrated weighted model in SNOW-V10: Verification of categorical variables. Pure Appl. Geophys., 171, 289–302, doi:10.1007/s00024-012-0549-6.
  • Isaac, G. A., and Coauthors, 2014: Science of nowcasting Olympic weather for Vancouver 2010 (SNOW-V10): A World Weather Research Programme project. Pure Appl. Geophys., 171, 1–24, doi:10.1007/s00024-012-0579-0.
  • Iversen, T., A. Deckmyn, C. Santos, K. Sattler, J. B. Bremnes, H. Feddersen, and I.-L. Frogner, 2011: Evaluation of “GLAMEPS”—A proposed multimodel EPS for short range forecasting. Tellus, 63A, 513–530, doi:10.1111/j.1600-0870.2010.00507.x.
  • Janjić, Z., and R. L. Gall, 2012: Scientific documentation of the NCEP Nonhydrostatic Multiscale Model on the B grid (NMMB). Part 1: Dynamics. NCAR Tech. Note NCAR/TN-489+STR, 75 pp., doi:10.5065/D6WH2MZX.
  • Joe, P., M. Falla, P. Van Rijn, L. Stamadianos, T. Falla, D. Magosse, L. Ing, and J. Dobson, 2003: Radar data processing for severe weather in the National Radar Project of Canada. 21st Conf. on Severe Local Storms, San Antonio, TX, Amer. Meteor. Soc., P4.13. [Available online at https://ams.confex.com/ams/pdfpapers/47421.pdf.]
  • Kann, A., G. Pistotnik, and B. Bica, 2012: INCA-CE: A central European initiative in nowcasting severe weather and its applications. Adv. Sci. Res., 8, 67–75, doi:10.5194/asr-8-67-2012.
  • Keenan, T., and Coauthors, 2003: The Sydney 2000 World Weather Research Programme Forecast Demonstration Project: Overview and current status. Bull. Amer. Meteor. Soc., 84, 1041–1054, doi:10.1175/BAMS-84-8-1041.
  • Kiktev, D. B., E. D. Astakhova, R. B. Zaripov, A. V. Murav’ev, A. V. Smirnov, and M. D. Tsyrulnikov, 2015: FROST-2014 project and meteorological support of the Sochi-2014 Olympics. Russ. Meteor. Hydrol., 40, 504–512, doi:10.3103/S1068373915080026; Erratum, 40, 844–845, doi:10.3103/S1068373915120109.
  • Liu, H., and V. Chandrasekar, 2000: Classification of hydrometeors based on polarimetric radar measurements: Development of fuzzy logic and neuro-fuzzy systems, and in situ verification. J. Atmos. Oceanic Technol., 17, 140–164, doi:10.1175/1520-0426(2000)017<0140:COHBOP>2.0.CO;2.
  • Marshall, J. S., and W. McK. Palmer, 1948: The distribution of raindrops with size. J. Meteor., 5, 165–166, doi:10.1175/1520-0469(1948)005<0165:TDORWS>2.0.CO;2.
  • Marshall, J. S., and K. L. S. Gunn, 1952: The measurement of snow parameters by radar. J. Meteor., 9, 322–327, doi:10.1175/1520-0469(1952)009<0322:MOSPBR>2.0.CO;2.
  • Masson, V., and Y. Seity, 2009: Including atmospheric layers in vegetation and urban offline surface schemes. J. Appl. Meteor. Climatol., 48, 1377–1397, doi:10.1175/2009JAMC1866.1.
  • Medina, S., B. F. Smull, R. A. Houze Jr., and M. Steiner, 2005: Cross-barrier flow during orographic precipitation events: Results from MAP and IMPROVE. J. Atmos. Sci., 62, 3580–3598, doi:10.1175/JAS3554.1.
  • Milbrandt, J. A., S. Belair, M. Faucher, M. Vallee, M. A. Carrera, and A. Glazer, 2016: The Pan-Canadian High Resolution (2.5 km) Deterministic Prediction System. Wea. Forecasting, 31, 1791–1816, doi:10.1175/WAF-D-16-0035.1.
  • Montani, A., C. Marsigli, and T. Paccagnella, 2013: Development of a COSMO-based limited-area ensemble system for the 2014 Winter Olympic Games. COSMO Newsletter, No. 13, 93–99. [Available online at http://cosmo-model.org/content/model/documentation/newsLetters/newsLetter13/cnl13_12.pdf.]
  • Montani, A., D. Alferov, E. Astakhova, C. Marsigli, and T. Paccagnella, 2014: Ensemble forecasting for Sochi-2014 Olympics: The COSMO-based ensemble prediction systems. COSMO Newsletter, No. 14, 88–94. [Available online at http://cosmo-model.org/content/model/documentation/newsLetters/newsLetter14/cnl14_10.pdf.]
  • Murav’ev, A. V., A. Yu. Bundel’, D. B. Kiktev, and A. V. Smirnov, 2013: Verification of mesoscale forecasts in the 2014 Olympic Games region. Part II: Preliminary results of diagnostic evaluation of quality and calibration of the forecasts by the COSMO-RU2 model. Russ. Meteor. Hydrol., 38, 797–807, doi:10.3103/S1068373913120017.
  • Murav’ev, A. V., D. B. Kiktev, A. Yu. Bundel, T. G. Dmitrieva, and A. V. Smirnov, 2015: Verification of high-impact weather event forecasts for the region of the Sochi-2014 Olympic Games. Part I: Deterministic forecasts during the test period. Russ. Meteor. Hydrol., 40, 584–597, doi:10.3103/S1068373915090034.
  • Niemelä, S., S. Näsman, and P. Nurmi, 2014: FROST-2014—Performance of HARMONIE 1km during Sochi Olympics. ALADIN-HIRLAM Newsletter, No. 3, 79–86. [Available online at www.hirlam.org/index.php/hirlam-documentation/cat_view/77-hirlam-official-publications/285-aladin-hirlam-newsletters.]
  • Nurmi, P., M. Brockmann, and S. Näsman, 2014: Forecast verification framework and some early results of the Sochi 2014 Winter Olympics. World Weather Open Science Conf., Montreal, QC, Canada, WMO. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/wwosc/documents/WWOSC_SCI-PS124.02_Nurmi_Monday-am.pdf.]
  • Nurmi, P., and Coauthors, 2015: The framework of the WMO/WWRP FROST-2014 forecast verification setup and activities. 15th EMS Annual Meeting/12th European Conf. on Application of Meteorology, Sofia, Bulgaria, European Meteorological Society, EMS2015-577. [Available online at http://meetingorganizer.copernicus.org/EMS2015/EMS2015-577.pdf.]
  • Park, H. S., A. V. Ryzhkov, D. S. Zrnic, and K. E. Kim, 2009: The hydrometeor classification algorithm for the polarimetric WSR-88D: Description and application to an MCS. Wea. Forecasting, 24, 730–748, doi:10.1175/2008WAF2222205.1.
  • Reid, J., D. Hudak, S. Boodoo, N. Donaldson, P. Joe, D. Kiktev, and A. Melnichuk, 2014: Dual-polarization radar particle classification results during the Sochi Olympic Games. Eighth European Conf. on Radar in Meteorology and Hydrology, Garmisch-Partenkirchen, Germany, DWD–DLR. [Available online at www.pa.op.dlr.de/erad2014/programme/ExtendedAbstracts/151_Reid.pdf.]
  • Rivin, G. S., and Coauthors, 2015: The COSMO-Ru system of nonhydrostatic mesoscale short-range weather forecasting of the Hydrometcenter of Russia: The second stage of implementation and development. Russ. Meteor. Hydrol., 40, 400–410, doi:10.3103/S1068373915060060.
  • Schraff, C., 1997: Mesoscale data assimilation and prediction of low stratus in the Alpine region. Meteor. Atmos. Phys., 64, 21–50, doi:10.1007/BF01044128.
  • Shatunova, M. V., G. S. Rivin, and I. A. Rozinkina, 2015: Visibility forecasting for February 16–18, 2014 for the region of the Sochi-2014 Olympic Games using the high-resolution COSMO-Ru1 model. Russ. Meteor. Hydrol., 40, 523–530, doi:10.3103/S106837391508004X.
  • Vukicevic, T., and R. M. Errico, 1990: The influence of artificial and physical factors upon predictability estimates using a complex limited-area model. Mon. Wea. Rev., 118, 1460–1482, doi:10.1175/1520-0493(1990)118<1460:TIOAAP>2.0.CO;2.
  • Wang, Y., and Coauthors, 2011: The Central European limited-area ensemble forecasting system: ALADIN-LAEF. Quart. J. Roy. Meteor. Soc., 137, 483–502, doi:10.1002/qj.751.
  • Weigel, A. P., M. A. Liniger, and C. Appenzeller, 2007: The discrete Brier and ranked probability skill scores. Mon. Wea. Rev., 135, 118–124, doi:10.1175/MWR3280.1.
  • Wilson, J. W., Y. Feng, M. Chen, and R. D. Roberts, 2010: Nowcasting challenges during the Beijing Olympics: Successes, failures, and implications for future nowcasting systems. Wea. Forecasting, 25, 1691–1714, doi:10.1175/2010WAF2222417.1.
  • WMO, 2008: Recommendations for the verification and intercomparison of QPFs and PQPFs from operational NWP models. WWRP 2009-1, WMO-TD 1485, World Meteorological Organization, 34 pp. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/WWRP2009-1_web_CD.pdf.]


  • Title photo: The area of the mountain cluster of Olympic sport venues and the “Aibga” meteorological station, viewed from above.
  • Fig. 1. (a) The Sochi Olympic area on the global map, (b) the magnified map with locations of the meteorological equipment, and (c) the mountain cluster with the stations and five sport venues. Red bulbs designate the automatic meteorological stations, the radar icon is the Doppler radar, green bulbs are the micro rain radars, blue bulbs are the temperature profilers, and the yellow bulb is the wind profiler.
  • Fig. 2. Example of the radar reflectivity composite for the region of the Games. The Akhun, Trabzon, Samsun, Donetsk, and Simferopol radar coverage patterns are shown by circles.
  • Fig. 3. MAEs of point-specific forecasts aggregated over stations at the sport venues of the mountain cluster. ABOM for COSMO-Ru2 is the ABOM system based on COSMO-Ru2 forecasts. The aggregation period is from 15 Jan to 18 Mar 2014, averaged over hourly runs.
  • Fig. 4. The (top) EC and (bottom) Vaisala particle classifications for a 1.1° scan at 1755 UTC 26 Feb. The EC classification shows more rain than the Vaisala classification (red tones) and the opposite for wet snow (blue tones).
  • Fig. 5. MAEs of 1-km-resolution model forecasts. The scores are aggregated over all model runs (COSMO-Ru1 and HARMONIE AROME, 0000, 0600, 1200, and 1800 UTC; GEM-1, 2300 UTC; NEMS/NMMB, 0000 and 1200 UTC) and over 22 stations in the mountain cluster. Here and in Figs. 6–8 the period is from 15 Jan to 18 Mar 2014.
  • Fig. 6. As in Fig. 5, but for the ETS of 1-h precipitation >1 mm (the higher the better).
  • Fig. 7. The role of the horizontal grid spacing for the (left) COSMO and (right) GEM model families. Scores are MAEs and are aggregated over all model runs (COSMO, 0000, 0600, 1200, and 1800 UTC; GEM-2.5, 2100 UTC; GEM-1, 2300 UTC; and GEM-0.25, 0000 UTC) and over 22 stations in the mountain cluster.
  • Fig. 8. As in Fig. 7, but for 1-h precipitation occurrence forecasts. The score is the EDI (the higher the better).
  • Fig. 9. The RH2m forecasts by COSMO-Ru2 and COSMO-Ru1 from 0600 UTC 16 Feb 2014 and the corresponding observations for the low-visibility event at the biathlon stadium.
  • Fig. 10. The visibility forecasts made by GEM-2.5 (from 2100 UTC 15 Feb), GEM-1 (from 2300 UTC 15 Feb), and GEM-0.25 (from 0000 UTC 16 Feb), as well as the corresponding observations for the low-visibility event at the biathlon stadium. A model prediction of 100 km indicates unlimited visibility. The PWD sensors can report a maximum of 20-km visibility.
  • Fig. 11. CRPSs for ECMWF EPS, GLAMEPS, calibrated GLAMEPS, and HarmonEPS (the lower the better): (top) T2m, (middle) 10-m wind speed, and (bottom) 3-h precipitation.
  • Fig. 12. CRPSs for 10-m wind speed forecasts for HarmonEPS, extended HarmonEPS with two subensembles, calibrated HarmonEPS, and calibrated GLAMEPS based on 26 members only.
  • Fig. 13. Area under the ROC curve (the higher the better) for forecasts of the event “6-h accumulated precipitation is above 1 mm” aggregated over the stations of the mountain cluster for (left) convection-parameterized and (right) convection-permitting EPSs, as well as for the corresponding multimodel ensembles. Note that about 200 occurrences of the above event were observed during the verification period.
  • Fig. 14. Debiased RPSS (the higher the better) for 6-h accumulated precipitation forecast by two convection-parameterized (COSMO-S14-EPS and GLAMEPS) and two convection-permitting EPSs (COSMO-Ru2-EPS and HarmonEPS), aggregated over the stations of the mountain cluster.
  • Fig. 15. The skill of the official and model forecasts as a function of the lead time. MAEs of T2m aggregated over the mountain cluster (heights of about 600, 1,000, 1,500, and 2,000 m) during the period from 1 Nov 2013 to 23 Feb 2014 (HARMONIE AROME, from 9 Dec 2013; WRF-ARW-NIMS, from 23 Dec 2013), with the official forecasts issued at 1100 UTC and the models started at 1200 UTC. After the 24-h lead time, the HARMONIE AROME forecasts were issued with a 6-h time step, denoted by the blue dot at the 30-h lead time in the plot.
  • Fig. 16. As in Fig. 15, but for the EDI (the higher the better) of 1-h precipitation occurrence.
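For reference, the categorical scores appearing in Figs. 6, 8, and 16 can be computed from a 2 × 2 contingency table. The sketch below (with hypothetical counts, not taken from the FROST-2014 verification) follows the standard definitions of the equitable threat score and the extremal dependence index of Ferro and Stephenson (2011).

```python
import math

def ets(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def edi(hits, misses, false_alarms, correct_negatives):
    """Extremal dependence index: (ln F - ln H) / (ln F + ln H),
    with hit rate H and false-alarm rate F (requires 0 < H < 1 and 0 < F < 1)."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return (math.log(false_alarm_rate) - math.log(hit_rate)) / \
           (math.log(false_alarm_rate) + math.log(hit_rate))

# Hypothetical counts for the event "1-h precipitation > 1 mm" at mountain stations.
hits, false_alarms, misses, correct_negatives = 120, 80, 60, 1740
print(f"ETS: {ets(hits, misses, false_alarms, correct_negatives):.2f}")
print(f"EDI: {edi(hits, misses, false_alarms, correct_negatives):.2f}")
```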