1. Introduction
The National Weather Service (NWS) agreed in 1992 to provide specialized operational weather support to the 1996 Centennial Olympic Games in Atlanta (Rothfusz et al. 1996). In response to the high-resolution weather forecast and warning requirements of the Olympic Games, the NWS developed the Olympic Weather Support System (OWSS). A key element of the OWSS was the operational implementation of the National Oceanic and Atmospheric Administration (NOAA) Forecast Systems Laboratory’s (FSL) Local Analysis and Prediction System (LAPS). A three-dimensional data assimilation system, LAPS incorporates all relevant available data sources to provide meso-β-scale analyses and forecasts of the atmosphere sufficient to cover the area of responsibility for the typical NWS forecast office (Albers 1995; McGinley 1995; Snook et al. 1995). Furthermore, LAPS is designed to run in the local forecast office using affordable computer workstation technology.
The demonstration of LAPS within the OWSS was important. For the first time, a mesoscale forecast model initialized with comparably high-resolution analyses was implemented in an operational environment using technology representative of that planned for NWS forecast offices in the next several years. The LAPS forecasts were produced by the Regional Atmospheric Modeling System (RAMS) (Pielke et al. 1992; Walko et al. 1995) developed at Colorado State University. The operational forecaster interactively selected several model options, including initialization time, model domain grid resolution and location, and model forecast length. The LAPS data assimilation for the Olympic domain and the operational configuration of the forecast modeling system are described. Local-domain analysis and forecast verification and benefits to the local forecast office are also presented.
2. Local-domain analyses
a. The Local Analysis and Prediction System
The analysis component of LAPS has been under development at FSL since 1987. It consists of data collection modules; a surface analysis; three-dimensional analyses of wind, temperature, moisture, and clouds; and derived products. LAPS analyses produced at FSL have been sent in real time to the Denver NWS Forecast Office since 1990. LAPS has also been used in exercises around the United States by various groups, for example, the Verification of the Origins of Rotation in Tornadoes Experiment (Brewster et al. 1995). However, the OWSS deployment was the first time LAPS was installed and run operationally entirely within a local forecast office.
Local site configuration for LAPS is a fairly simple process that generally takes a few hours. The user defines the grid size and location, defines some system library and data paths, and runs a script that builds LAPS. The only source code that must be customized by the user is for data collection. This can add a few days to the installation process, depending on the types and formats of the data sources available.
1) Data collection
The interface between LAPS and data at a local site is provided by several data collection programs. There are programs to access each class of data available to the typical weather office: surface, radar, satellite, rawinsonde, etc. The local database is read by the appropriate program and an LAPS intermediate data file is produced for each data type. The main reason for these intermediate files is to maximize portability. When LAPS is installed at a given site, it is only necessary to write routines that read the local database and link those routines to the data collection programs.
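As an illustration of this adapter design, the sketch below (Python) shows a site-specific reader feeding a generic intermediate writer. The field names, the CSV input format, and the output layout are assumptions for illustration only, not the actual LAPS intermediate format.

```python
# Illustrative adapter between a hypothetical local surface database (a CSV
# file here) and a generic intermediate observation file; the real LAPS
# intermediate formats differ.
import csv
from dataclasses import dataclass

@dataclass
class SurfaceObs:
    station: str
    lat: float
    lon: float
    temp_c: float
    dewpoint_c: float
    wind_dir_deg: float
    wind_speed_ms: float

def read_local_surface_db(path):
    """Site-specific part: parse whatever format the local database uses."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield SurfaceObs(row["id"], float(row["lat"]), float(row["lon"]),
                             float(row["t"]), float(row["td"]),
                             float(row["dir"]), float(row["spd"]))

def write_intermediate(obs_iter, out_path):
    """Portable part: every data class is reduced to one common record layout,
    so only the reader above changes from site to site."""
    with open(out_path, "w") as f:
        for ob in obs_iter:
            f.write(f"{ob.station} {ob.lat:.4f} {ob.lon:.4f} {ob.temp_c:.1f} "
                    f"{ob.dewpoint_c:.1f} {ob.wind_dir_deg:.0f} {ob.wind_speed_ms:.1f}\n")
```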
LAPS also requires output from a larger-scale numerical model as background conditions. Analyses and forecasts from a model are spatially interpolated to the LAPS grid. They are also interpolated in time, so that each time LAPS runs, background fields valid for that time are available. To use a different model for the analysis background, one only needs to change the code that reads the model data files. This has been the procedure for several of the models produced by the National Centers for Environmental Prediction (NCEP), and it is only necessary to set a flag indicating which model to use.
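The temporal part of this step amounts to weighting two bracketing background fields by their distance in time from the analysis valid time. A minimal sketch, assuming the fields have already been regridded to the LAPS grid and using hypothetical names:

```python
import numpy as np

def time_interp_background(field_t0, field_t1, t0, t1, t_valid):
    """Linearly interpolate two background fields (already regridded to the
    analysis grid) to the analysis valid time; times in any consistent unit."""
    if not t0 <= t_valid <= t1:
        raise ValueError("valid time must lie between the bracketing fields")
    w = (t_valid - t0) / (t1 - t0)        # 0 at t0, 1 at t1
    return (1.0 - w) * field_t0 + w * field_t1

# Example: 3-hourly background output, analysis valid midway between runs.
field_06 = np.full((85, 85), 300.0)       # e.g., 0600 UTC temperature (K)
field_09 = np.full((85, 85), 303.0)       # 0900 UTC temperature (K)
background_0730 = time_interp_background(field_06, field_09, 6.0, 9.0, 7.5)
```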
2) Surface analysis
The LAPS analysis cycle begins with the surface analysis, since it is used by other analyses as a background or lower boundary. Surface data are first examined with a simple quality control routine, which performs gross error, climate, range, and time tendency checks. Observations are then analyzed to the LAPS grid, and a Barnes (1964) analysis with a large radius of influence is used to produce a smooth analysis. The data are then checked against this smooth field, and observations differing by an appropriate threshold for each variable are removed. The remaining data are analyzed again, this time using a variable radius of influence based on data density. Using this analysis on the boundaries of the domain, a cubic spline then matches the interior analysis to the observations. For most fields, the previous surface analysis is used as a first guess, although variables from the three-dimensional LAPS analyses (e.g., winds) interpolated to the surface are also used.
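A minimal sketch of the first-pass and buddy-check steps described above follows; the Gaussian weight, the single fixed radius, and the scalar threshold are simplifications of the operational scheme, and the function names are hypothetical.

```python
import numpy as np

def barnes_pass(obs_x, obs_y, obs_val, grid_x, grid_y, radius):
    """Single Barnes (1964)-style pass: Gaussian distance weighting of the
    observations to every grid point; `radius` is the e-folding scale."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    total = np.zeros_like(gx, dtype=float)
    weight_sum = np.zeros_like(gx, dtype=float)
    for x, y, v in zip(obs_x, obs_y, obs_val):
        w = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / radius ** 2)
        total += w * v
        weight_sum += w
    return total / np.maximum(weight_sum, 1e-10)

def buddy_check(obs_x, obs_y, obs_val, smooth_field, grid_x, grid_y, threshold):
    """Keep only observations within `threshold` of the smooth first-pass field
    at the nearest grid point, mimicking the check described above."""
    keep = []
    for i, (x, y, v) in enumerate(zip(obs_x, obs_y, obs_val)):
        j = int(np.abs(np.asarray(grid_x) - x).argmin())
        k = int(np.abs(np.asarray(grid_y) - y).argmin())
        if abs(v - smooth_field[k, j]) <= threshold:
            keep.append(i)
    return keep
```

In the sequence described above, the first pass would use a large radius, the buddy check would discard outliers against that smooth field, and a second pass with a density-dependent radius would produce the analysis.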
A variational method developed by Lewis (1971) is used to constrain the pressure and wind fields to partially satisfy the full equations of motion. Vector wind changes from the first guess provide an estimate of time tendencies, and these are used to estimate the nonlinear terms. Friction terms are calculated as well, using the local variance in terrain elevation to determine the surface roughness.
Temperatures and dewpoints are analyzed as departures from estimated values, to reduce extrapolation errors in data-sparse regions of the domain. The estimates are made by reducing upper-level temperatures and dewpoints from the three-dimensional grids at each observation location to the surface elevation. When available, Geostationary Observational Environmental Satellite (GOES) window channel IR brightness temperatures are included in the temperature analyses by forcing the structure (i.e., horizontal gradients) of the brightness temperature field to match the observed surface station temperatures as part of the cubic spline fit. This horizontal shape matching (HSM) (Birkenheuer 1996; McGinley 1982, 1987) improves the temperature analyses in areas between observations.
The surface analysis produces 26 fields each cycle, including all the standard meteorological quantities; derived fields such as lifted index, buoyant energy, divergence, and terrain-induced vertical motion; and forecaster aids such as heat index.
3) Upper-level analyses
The upper-level wind analysis employs a two-pass objective analysis that merges single Doppler radar data with nonradar data (Albers 1995). First, nonradar data (e.g., surface, profiler, sounding) are used to produce a preliminary analysis. The background wind analysis from the larger-scale model is then subtracted from the new observations to give observation residuals. The residuals are checked against the background, and those that exceed a threshold for the given data type are rejected. The remaining residuals are then mapped to the nearest grid point in three dimensions and spread vertically ±50 hPa provided that no prior observation exists at the vertically adjacent grid points. Observations are then analyzed at each level using a Barnes (1964) approach where the radius of influence varies with the data density. Once the preliminary analysis is complete, the radar observations are mapped onto the LAPS grid and are quality checked and dealiased using the preliminary analysis. Then, at each point with a radar radial velocity, a wind vector is calculated using the preliminary analysis to provide the tangential component. These derived wind vectors are then combined with the nonradar data, and the analysis procedure as outlined above is repeated. The analysis increments are then added back to the first guess, creating the final wind analysis. At this point, other routines produce derived products for the forecasters, such as kinematic vertical velocity, helicity, and storm steering winds.
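The step that builds a full wind vector from an observed radial velocity plus the tangential component of the preliminary analysis can be sketched as below; the azimuth and sign conventions (radial velocity positive away from the radar, azimuth clockwise from north) are illustrative assumptions.

```python
import numpy as np

def wind_from_radial(vr, azimuth_deg, u_prelim, v_prelim):
    """Combine an observed radial velocity (positive away from the radar) with
    the tangential component of the preliminary analysis wind to form a full
    (u, v) vector at a grid point.  `azimuth_deg` is the azimuth from the radar
    to the grid point, measured clockwise from north."""
    az = np.deg2rad(azimuth_deg)
    r_hat = np.array([np.sin(az), np.cos(az)])     # unit vector pointing away from the radar
    t_hat = np.array([-np.cos(az), np.sin(az)])    # unit vector 90 deg to its left
    vt = u_prelim * t_hat[0] + v_prelim * t_hat[1]  # tangential part from the analysis
    u = vr * r_hat[0] + vt * t_hat[0]
    v = vr * r_hat[1] + vt * t_hat[1]
    return u, v

# A grid point due east of the radar: the reconstructed u comes from the radar
# (5 m/s) while v is retained from the preliminary analysis (8 m/s).
u, v = wind_from_radial(vr=5.0, azimuth_deg=90.0, u_prelim=3.0, v_prelim=8.0)
```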
The LAPS upper-level moisture analysis begins with the background model moisture field interpolated to the LAPS grid. Dewpoints from the LAPS surface analysis are blended vertically into a diagnosed boundary layer, and quality checks are made for supersaturation. Once this preliminary analysis is complete, information from the LAPS cloud analysis is used to increase moisture in cloudy areas. Water vapor information from the GOES satellite, which was not available for the OWSS implementation of LAPS, can now be inserted to produce the final three-dimensional analysis using HSM (Birkenheuer 1992, 1996).
The LAPS three-dimensional temperature analysis (Albers et al. 1996) begins with a larger-scale temperature analysis from the background model, interpolated to the LAPS grid. Other data (e.g., soundings) are inserted, if available, to give a preliminary analysis. Finally, the surface temperature analysis provides information for the lower levels of the three-dimensional analysis. To insert the LAPS surface temperatures, a 50-hPa boundary layer is defined above the terrain. The temperature at the top of this layer is set to the temperature of the preliminary analysis and the bottom to that of the LAPS surface analysis. This is done at each grid point where the surface pressure is >750 hPa. Next, the residual between the LAPS surface temperature and the first guess is calculated and applied through the boundary layer in a ramped fashion, so that the full bias is applied at the surface, half at the middle of the boundary layer, and no correction at the top. This helps to preserve the vertical temperature structure while maintaining consistency between the surface and three-dimensional analyses. The final three-dimensional temperature field is also used to compute a three-dimensional hydrostatic geopotential field, using the surface pressure and terrain fields as boundary conditions.
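A single-column sketch of this ramped correction is given below; the isobaric column layout, the nearest-level estimate of the first-guess surface temperature, and the function name are simplifying assumptions.

```python
import numpy as np

def blend_surface_temperature(t_column, p_levels, t_sfc, p_sfc, depth_hpa=50.0):
    """Apply the surface-analysis temperature increment through a boundary layer
    `depth_hpa` deep above the surface pressure, ramping linearly from the full
    increment at the surface to zero at the layer top.  `t_column` holds
    temperatures (K) on isobaric levels `p_levels` (hPa).  In LAPS this is done
    only where the surface pressure exceeds 750 hPa."""
    k_sfc = int(np.argmin(np.abs(p_levels - p_sfc)))     # level nearest the surface
    increment = t_sfc - t_column[k_sfc]                  # surface minus first guess
    blended = t_column.astype(float).copy()
    for k, p in enumerate(p_levels):
        if p_sfc - depth_hpa <= p <= p_sfc:
            frac = (p - (p_sfc - depth_hpa)) / depth_hpa  # 1 at the surface, 0 at the top
            blended[k] += frac * increment
    return blended

# Example column: the full +2-K increment is applied at 1000 hPa, half would be
# applied at 975 hPa, and none above 950 hPa.
p = np.array([1000., 950., 900., 850., 800.])
t = np.array([300., 297., 294., 291., 288.])
t_new = blend_surface_temperature(t, p, t_sfc=302.0, p_sfc=1000.0)
```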
The LAPS cloud analysis (Albers et al. 1996) begins with surface observations and, if available, pilot reports. Vertical “soundings” of clouds are spread horizontally to create a preliminary three-dimensional analysis. Next, satellite data are combined with the LAPS three-dimensional temperature analysis to give a cloud-top height field, which is blended into the preliminary analysis. Checks are done to maintain consistency between the surface and satellite data. To produce the final analysis, radar data and visible satellite data are inserted to provide additional detail, and again checks are done to maintain consistency among data sources.
b. OWSS configuration
1) Analysis domain
The LAPS Olympics domain included Georgia, most of South Carolina, and portions of North Carolina, Tennessee, Alabama, and Florida (Fig. 1), which covered all the Olympic venue sites in the area of responsibility of the Olympic Weather Support Office (OWSO), located at the Peachtree City, Georgia, NWS Forecast Office, and the Olympic Marine Weather Support Office (OMWSO) located at the yachting venue near Savannah. LAPS used an 85 × 85 × 21 point grid with 8-km horizontal and 50-hPa vertical spacing over this area. LAPS topography was based on a U.S. Geological Survey dataset with a 30-s resolution, interpolated to the 8-km LAPS grid and smoothed with a 4-Δx filter. Elevations over the domain ranged from sea level to 1590 m.
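For illustration only, a generic short-wavelength terrain smoother might look like the following; the 1-2-1 filter is a stand-in for the 4-Δx filter mentioned above, and the terrain values are synthetic placeholders.

```python
import numpy as np

def smooth_121(field, passes=2):
    """Apply a 1-2-1 smoother in both horizontal directions to damp the shortest
    (2-4 grid length) waves in interpolated terrain; a generic stand-in for the
    4-delta-x filter mentioned above."""
    f = field.astype(float).copy()
    for _ in range(passes):
        f[1:-1, :] = 0.25 * f[:-2, :] + 0.5 * f[1:-1, :] + 0.25 * f[2:, :]
        f[:, 1:-1] = 0.25 * f[:, :-2] + 0.5 * f[:, 1:-1] + 0.25 * f[:, 2:]
    return f

# 85 x 85 grid at 8-km spacing, as in the Olympic domain (synthetic elevations).
terrain_8km = smooth_121(np.random.default_rng(0).uniform(0.0, 1590.0, (85, 85)))
```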
The data ingest portion of LAPS ran on an HP 755 workstation and was connected to the OWSS database and updated as new data arrived. Upper-level LAPS analyses were produced on a 30-min cycle, while surface analyses were created every 15 min. On average, the complete analysis cycle took less than 5 min to run on an IBM 39H workstation.
2) Data available to LAPS
Surface observations were plentiful for the LAPS analyses. Standard observations were provided via a connection between the NWS network and the OWSS database in METAR (aviation routine weather reports) format. They gave surface temperature, wind, moisture, pressure, cloud-height, and cloud coverage data. The LAPS ingest collected data for an area 1.1° latitude–longitude larger than the LAPS domain, so observations near to (but outside of) the grid could influence the boundaries. Over this larger area, approximately 60 METAR observations were available per analysis cycle.
Other surface data came from mesonets operated by several different organizations (Garza and Hoogenboom 1996, 1997). These automated observations all included temperature, wind, and moisture; most had tipping-bucket rain gauges, and some had pressure sensors. There were approximately 50 mesonet stations within the LAPS domain, reporting every 15 min. In addition to the mesonet data, observations were collected from three buoys placed offshore of Savannah surrounding the yachting venue. Buoy wind, temperature, humidity, and sea surface temperatures were available once per hour. While generally providing useful data, the automated mesonet observations would sometimes cause poor analyses (e.g., winds lighter than expected, “noisy” dewpoint fields). This seemed to be more of a problem near Savannah, where mesonet, METAR, and buoy observations were being combined over a small, geographically diverse (forest, coastline, and open water) area. Most cases were caught by the LAPS quality control algorithms, but several passed and were a problem for the OMWSO forecasters. Work is currently in progress at FSL to improve the LAPS quality control methodology (McGinley and Stamus 1996).
Two WSR-88D Doppler radars provided upper-level wind data for LAPS. One was located at the Peachtree City NWS Forecast Office and provided data for most of the LAPS domain. The other was located at Charleston, South Carolina, approximately 170 km northeast of Savannah, and provided coverage for the yachting venue. For use in LAPS, the radar radial velocities were first remapped to the LAPS grid using a technique described by Albers (1995), then sent to the OWSS for use in the LAPS three-dimensional wind analysis. This was done for each volume scan of the radar, approximately every 6 min. While the Peachtree City radar was almost always available, problems with storing the remapped files at Charleston prevented the regular use of these data in LAPS.
One profiler within the LAPS domain provided additional upper-level wind data. NOAA’s Environmental Technology Laboratory placed one of its boundary layer profilers at Tybee Island, Georgia, near the yachting venue, to support the OMWSO at Savannah. FSL’s Demonstration Division, responsible for operating the NOAA Profiler Network and collecting data from boundary layer profilers, set up a special ingest–quality control system for these data. This system ran at FSL and was based on the operational system (Barth et al. 1997). The quality-controlled data were put into a standard format and were read by an LAPS ingest process also running at FSL. The resulting LAPS data file was then sent to the OWSS via the Internet for use in the analyses. The profiler measured winds from 120 to 3770 m above the surface at 100-m vertical resolution. Winds passed the quality control checks to 2400 m approximately 75% of the time, and to 2900 m about 50% of the time. Observations were available on a 30-min cycle.
The LAPS satellite ingest was configured to use ASCII satellite data files generated from the RAMM Advanced Meteorological Satellite Demonstration and Interpretation System (Molenar et al. 1996) installed in the OWSS. Both visible counts and 11.2-μm infrared brightness temperature data were created four times per hour. The resolution of these data was approximately 6 km for the IR and 1 km for the visible channel. Pixel averaging was used to analyze these data to the 8-km LAPS grid. Satellite data were used in the LAPS surface temperature and three-dimensional cloud analyses.
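The pixel-averaging step reduces to block averaging once the satellite pixels are registered to the analysis grid (the navigation and remapping are glossed over here); a brief sketch:

```python
import numpy as np

def pixel_average(hi_res, block):
    """Average block x block groups of satellite pixels onto a coarser grid,
    e.g., 1-km visible counts averaged in 8 x 8 blocks for an 8-km grid.  The
    input is trimmed so its dimensions are multiples of `block`."""
    ny = (hi_res.shape[0] // block) * block
    nx = (hi_res.shape[1] // block) * block
    trimmed = hi_res[:ny, :nx]
    return trimmed.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

vis_1km = np.random.default_rng(1).uniform(0, 255, (680, 680))  # synthetic counts
vis_8km = pixel_average(vis_1km, 8)                              # 85 x 85 result
```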
Background information for the LAPS analyses came from the Rapid Update Cycle (RUC) (Benjamin et al. 1991, 1997). The RUC is a hybrid sigma-isentropic, meso-α-scale data assimilation system with 60-km horizontal grid spacing. The initialization and forecasts out to 12 h were produced every 3 h. RUC analyses and short-term forecasts are run operationally at NCEP and were available to the OWSS during the games. RUC fields were interpolated to the three-dimensional LAPS grid, in space and in time, and served as the starting point for the upper-level analyses.
3) Postprocessing
Once the analyses were completed, other processing was performed for various purposes. First, the surface and cloud analyses were used to create “interpolated observations” (or interobs). Weather information was required at venue sites within the LAPS domain where there were no observing stations. To fill this need, interpolated values from LAPS were combined with other data to make an interob, which was used like an observation by the Olympic officials. Details on the creation and use of the interobs are discussed by Stamus et al. (1997).
LAPS output files were translated into GEMPAK format for display on the NCEP Advanced Weather Interactive Processing System (N-AWIPS) (desJardins et al. 1997) workstations used by the forecasters at the OWSO (Rothfusz et al. 1996). Since this processing was computationally intensive, it took place on the IBM RS6000 power-parallel computer, where the model forecasts ran (see section 3a).
All LAPS analyses were available to serve as initial conditions for the RAMS forecast runs. This gave the OWSO forecasters a great deal of flexibility in the timing and duration of their local model runs.
3. Local-domain mesoscale model predictions
The implementation of LAPS in the Peachtree City Forecast Office was an unprecedented demonstration of several local-domain mesoscale forecasting concepts. The three-dimensional LAPS analyses proved adequate for the initialization of a full-physics, nonhydrostatic mesoscale numerical model running at resolutions finer than previously feasible. FSL has been demonstrating and evaluating this concept in several quasi-operational environments (Snook et al. 1995; Snook and Pielke 1995), but the implementation of LAPS–RAMS in the OWSS was the first truly operational implementation of a local-domain analysis and prediction system functioning completely within an NWS forecast office. It was the first known attempt to initialize a local-domain mesoscale forecast model with comparably high-resolution operational analyses and to generate real-time numerical guidance using horizontal grid spacings as low as 2 km. A capability to interactively select several model options was incorporated to evaluate another aspect of local implementation.
a. Computer hardware selection
The initial OWSS LAPS–RAMS forecast system was installed in May 1995 on an IBM RS6000/590 RISC computer workstation located at FSL (Snook 1996). The hardware allowed the generation of one daily 18-h forecast using an 85 × 85 8-km grid and 25 levels. Although the predictions typically required 10 to 12 h to complete, the forecast results were encouraging and provided the basis to pursue more powerful computer platforms for this task. At that time, a parallel version of RAMS was being developed collaboratively by FSL and Colorado State University. Test results using 81 nodes on an Intel Paragon distributed-memory, massively parallel processor (MPP) demonstrated that vast improvements in model completion time could be attained at a reasonable cost.
During early 1996, IBM, as an official sponsor of the Olympic Games, agreed to provide a 30-node RS6000 scalable power-parallel (SP2) system as the operational compute engine for the local-domain mesoscale forecast model. The IBM SP2 was installed at the OWSO during April 1996. Details of the SP2 system are described by Christidis et al. (1997). Implementation of the Scalable Modeling System (SMS) as applied to RAMS, which takes advantage of the SP2’s parallel architecture, is discussed by Edwards et al. (1997). Although the 30-node SP2 system is significantly more powerful than the hardware currently available in the typical NWS forecast office, rapid advances in software and hardware should allow this class of machine to be affordable to each forecast office as a part of the NWS modernization. For example, only one year after the games, each SP2 node has nearly doubled in speed. Improvements to software that take better advantage of the parallel architecture also reduce the amount of hardware needed to satisfy the task.
b. Operational model setup
The LAPS–RAMS forecast system was designed to have as much flexibility and local control as possible. Hence, two model domains (Table 1) were developed: the first was an 8-km grid covering the full Olympic domain (672 × 672 km2; Fig. 1) and the second domain was a smaller 2-km relocatable grid covering an area of 160 × 160 km2 (e.g., Fig. 2). Several model features were left under the complete control of the Olympic forecasters. These included model forecast initialization time (any 30-min LAPS run), model domain grid (either 8 or 2 km), model domain location (if the 2-km grid was selected), and model forecast period. Typical compute times were 11 min per forecast hour for the 85 × 85 × 30 8-km grid and 13 min for the 81 × 81 × 37 2-km grid. Generally, forecasters elected to run the model every 3 h, which allowed a 16-h forecast for the 8-km domain and a 14-h forecast for the 2-km domain. A typical forecast strategy included an 8-km forecast initialized at 0600 UTC; a 2-km forecast centered over Savannah, Georgia (Fig. 2), initialized at 0900 UTC to support a detailed sea-breeze forecast required for the yachting venue; and then 8-km forecasts initialized every 3 h starting at 1200 UTC.
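The quoted per-forecast-hour compute times are consistent with these forecast lengths, as a quick (illustrative) check shows:

```python
# Forecast hours that fit in the 180 min between 3-hourly restarts, given the
# per-forecast-hour wall-clock times quoted above (rounded values in the text).
for grid, minutes_per_forecast_hour in [("8-km", 11.0), ("2-km", 13.0)]:
    print(f"{grid} grid: about {180.0 / minutes_per_forecast_hour:.1f} forecast hours")
# -> about 16.4 h (8 km) and 13.8 h (2 km), i.e., the 16- and 14-h figures above.
```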
Horizontal interpolation of the RAMS initialization data, provided by the OWSS LAPS, was only necessary when the 2-km grid was selected. It was necessary to vertically interpolate the LAPS isobaric analyses to the RAMS σz vertical coordinate system. The 8-km grid used 30 stretched vertical levels with the lowest model level positioned at about 48 m above ground level (AGL), a minimum vertical grid spacing of 100 m, a stretch factor of 1.14, and a maximum grid spacing of 1000 m. Greater vertical resolution was implemented in the 2-km grid by lowering the stretch factor to 1.1 and the maximum grid spacing to 750 m. The separate LAPS surface analyses were blended into the three-dimensional initialization up to 500 m AGL to take full advantage of the abundant surface data that were available to the OWSS.
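A sketch of how geometric stretching of this kind translates into level heights is shown below; the actual RAMS σz grid is staggered and terrain following, so this conveys only the stretching, and the function is hypothetical.

```python
def stretched_levels(n_layers, dz_min, stretch, dz_max):
    """Layer thicknesses grow geometrically from dz_min by `stretch` until capped
    at dz_max; returns thicknesses and approximate layer-midpoint heights (m)."""
    thicknesses, dz = [], dz_min
    for _ in range(n_layers):
        thicknesses.append(dz)
        dz = min(dz * stretch, dz_max)
    midpoints, z = [], 0.0
    for t in thicknesses:
        midpoints.append(z + 0.5 * t)
        z += t
    return thicknesses, midpoints

# 8-km grid settings quoted above: 30 levels, 100-m minimum, 1.14 stretch, 1-km cap.
dz, z_mid = stretched_levels(30, 100.0, 1.14, 1000.0)
print(f"lowest midpoint ~{z_mid[0]:.0f} m AGL (cf. ~48 m); grid depth ~{sum(dz)/1000:.1f} km")
```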
Forecast lateral boundary conditions were provided by the NCEP 29-km, national domain Eta Model (Black 1994). A very conservative time step of 12 s was implemented to ensure satisfaction of the Courant–Friedrichs–Lewy condition for computational stability (Haltiner and Williams 1980) in all situations. The time step was half of that used in previous quasi-operational implementations of RAMS. Sensitivity experiments showed that the shorter time step was necessary to support a vertical resolution higher than previously used and to accommodate the strong vertical motions that can occur in the convective environment. The RAMS model initialization and physics are summarized in Table 2, and further details can be found in Snook and Pielke (1995), Snook et al. (1995), and Snook (1996). The RAMS model physics options chosen were appropriate for the grid-scale resolution. Hence, a nonhydrostatic version of the model was employed with a full implementation of liquid and ice microphysics (Walko et al. 1995) that provided an explicit prediction of precipitation, and no cumulus parameterization scheme was implemented.
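As a rough illustration of the stability margin implied by the 12-s time step (RAMS handles acoustic modes with a separate time-split scheme, so only advective Courant numbers are estimated here, with assumed peak wind speeds):

```python
def advective_courant(u_max, w_max, dt, dx, dz_min):
    """One-dimensional advective Courant numbers for assumed peak horizontal and
    vertical wind speeds; values below 1 indicate the advective stability
    criterion is met for those speeds."""
    return u_max * dt / dx, w_max * dt / dz_min

# 2-km grid, 100-m minimum vertical spacing, 12-s time step, assumed peak winds.
cfl_h, cfl_v = advective_courant(u_max=50.0, w_max=8.0, dt=12.0, dx=2000.0, dz_min=100.0)
print(f"horizontal Courant number ~{cfl_h:.2f}, vertical ~{cfl_v:.2f}")  # both < 1
```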
A parallel version of RAMS was implemented to take advantage of the IBM SP2’s multiple processors. The code was instrumented with the SMS–Nearest Neighbor Tool (NNT) library of utilities developed by the High Performance Computing Group at FSL. The NNT software library has been designed to minimize the code changes that must be made when parallelizing an existing geophysical model. NNT has been ported to MPP hardware, conventional vector supercomputers, shared memory multiprocessors, and workstations (Rodriguez et al. 1996).
c. Visualization
Because the real-time predictions are useful for only a short time span, a comprehensive visualization system that integrates the model output with other guidance and allows the forecaster to rapidly assess the large volume of data is important. The same affordable computer workstation technology that runs LAPS–RAMS is capable of supporting several visualization systems that meet these requirements. The OWSS used the N-AWIPS meteorological workstation (desJardins et al. 1997), developed at NCEP, to ingest, integrate, and display a wide variety of forecaster guidance, including the RAMS predictions. The capability to combine and compare the RAMS output with other data and other model predictions, and to animate products on the workstation, was extremely valuable for evaluating the large volume of data and model output.
RAMS postprocessing included the generation of upper-air products and surface products. Basic-state variables were vertically interpolated from the model grid coordinate to more familiar isobaric surfaces. Since the model included explicit microphysics, three-dimensional forecast fields of liquid water content, ice water content, and radar reflectivity were derived from the predictions of microphysical species. Surface forecast products included basic-state variables that had been interpolated from the lowest model height (48 m AGL) to standard observation height using similarity theory (Louis 1979). Derived fields such as lifted index, available potential energy, and heat index were also generated. As with the LAPS analyses, postprocessing of the RAMS data into the N-AWIPS product format was very compute-intensive. The timely transmission of the LAPS and RAMS products to the N-AWIPS workstation required the dedication of four SP2 nodes to postprocessing at the expense of the model running with four fewer nodes. The large quantity and size of LAPS and RAMS N-AWIPS products often saturated the transmission line to Savannah, resulting in serious delays of all products to the OMWSO. These issues are currently being addressed with a new meteorological workstation, under development at FSL (MacDonald and Wakefield 1996), which is the prototype of the Advanced Weather Interactive Processing System, the next-generation workstation for the NWS field forecast offices.
Three-dimensional visualization of the RAMS predictions has been successfully demonstrated at FSL as another method to efficiently evaluate the model output (Snook et al. 1995). RAMS forecasts were stored every 10 min in a format readable by the three-dimensional IBM Visualization Data Explorer system. Three-dimensional time animations of model output were available to the forecasters at the Peachtree City OWSO and to the Olympics World Wide Web site (e.g., see AMS 1997). Treinish and Rothfusz (1997) provide a detailed description of this system.
d. Local office implementation
The complete LAPS system is intended to function wholly in the local forecast office. Hence, an additional design requirement is that the system be as automated as possible. If the system is to be deployed at numerous forecast offices, the human resources needed to run the system must be kept to a minimum. Representatives from FSL and IBM were present to troubleshoot any problems with the LAPS–RAMS system during the operational phases of the Olympic weather support. This proved beneficial as the last few problems were resolved during the first several days of the games. After this time, the LAPS–RAMS system required very little human interaction outside of the designed local control. As further testament to the minimal amount of required human attention, the LAPS–RAMS system continued to operate for three weeks following the Olympic Games in support of the Paralympic Games, an athletic competition for the physically challenged, during which no FSL or IBM representatives were present and the system had no software failures.
4. Local-domain analysis and forecast verification
a. Analyses
The LAPS analyses used all the available data, so it is not possible to do an independent verification. However, the LAPS surface analysis uses internal diagnostics that include a check against the current dataset. The analyses are interpolated back to the station locations using a bicubic spline algorithm, and analysis minus observation differences are recorded for each analysis time. This dependent verification showed that the LAPS surface temperature analysis was within 0.5–1.0 K (1°–2°F) of the observations on average, which is within the expected sensor error. Dewpoints were within 1.0–1.5 K (2°–3°F), again within expected error considering the different sensors used in the observation network. Wind speeds were also close, averaging a 1–2 m s−1 (2–4 kt) difference.
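The dependent verification amounts to interpolating each analysis back to the station locations and accumulating analysis-minus-observation differences. A minimal sketch follows, using bilinear interpolation for brevity where the operational check used a bicubic spline; the function names are hypothetical.

```python
import numpy as np

def bilinear_at_station(field, x, y):
    """Interpolate a gridded analysis to a fractional grid location (x, y);
    bilinear here for brevity, whereas the operational check used a bicubic
    spline."""
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i0, y - j0
    return ((1 - fx) * (1 - fy) * field[j0, i0] + fx * (1 - fy) * field[j0, i0 + 1] +
            (1 - fx) * fy * field[j0 + 1, i0] + fx * fy * field[j0 + 1, i0 + 1])

def analysis_minus_obs(field, stations):
    """Mean and rms of analysis-minus-observation differences for a list of
    (x_grid, y_grid, observed_value) tuples."""
    diffs = np.array([bilinear_at_station(field, x, y) - ob for x, y, ob in stations])
    return diffs.mean(), np.sqrt((diffs ** 2).mean())
```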
Another way to measure the quality of the analyses, albeit a subjective one, is by the comments of the forecasters using them. In a summary of the weather support for the Olympics, Rothfusz and McLaughlin (1997) report that all but one Olympic forecaster cited LAPS as a critical tool in all mesoscale forecasting situations. The most favorable response to LAPS was from forecasters with previous mesoscale forecasting experience, based on comments made by the forecasters in a post-Olympics survey (L. Rothfusz 1996, personal communication). Their choices of most useful fields (divergence, wind vectors, CAPE, and equivalent potential temperature) reflected this, as did their comments on how they used LAPS (e.g., “evaluating the preconvective environment,” “short-term <3 h forecasting”). LAPS analyses were thought to be less useful after convection developed in a weakly forced environment. In these cases, the 8-km grid spacing of LAPS and the available surface observations were generally not sufficient to resolve individual outflow boundaries. Several outflows at different locations within the domain could combine to produce a noisy analysis or be smoothed out due to lack of supporting data. In either event, the result was an unrepresentative analysis. Forecaster comments about the least useful LAPS fields (mainly advection fields) support this idea.
b. Model predictions
Quantitative model validation was performed automatically on a variety of surface variables including temperature, dewpoint, and wind. Surface observations were available from the standard NWS observation network and from the special automated network assembled specifically for Olympic Games support. Approximately 70 of the 110 possible surface observations (Fig. 1) were typically available for comparison with model output that was interpolated to each surface observation location. Because differences exist between the low-level RAMS model height (48 m AGL) and surface observation elevation, several adjustments were made to the interpolated model output. Similarity theory (Louis 1979) was used to adjust model temperatures and wind speeds to the surface temperature observation level of 1.5 m and the surface wind observation level of 10 m. An additional adjustment was made to the model temperature using a standard lapse rate of −6.5 K km−1 to account for any difference between the model terrain height and the surface elevation at the observation location. Forecast winds were compared to observed winds and no additional adjustments were made to correct for different sampling times. No adjustments were made to the model moisture variable (mixing ratio). Bias and rms statistics were computed using all surface observing points. Spatial and temporal quality control of the observations was provided by LAPS.
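The terrain-elevation adjustment and the bias/rms bookkeeping reduce to a few lines; the numerical example and function names below are illustrative only.

```python
import numpy as np

LAPSE_RATE_K_PER_M = -6.5e-3   # standard lapse rate used in the adjustment above

def terrain_adjust_temp(t_model, z_model_terrain, z_station):
    """Adjust a model temperature for the difference between the model terrain
    height and the station elevation using the standard lapse rate."""
    return t_model + LAPSE_RATE_K_PER_M * (z_station - z_model_terrain)

def bias_and_rms(forecast, observed):
    """Bias and rms of forecast-minus-observation differences over all stations
    (the similarity-theory height adjustment is assumed already applied)."""
    d = np.asarray(forecast) - np.asarray(observed)
    return d.mean(), np.sqrt((d ** 2).mean())

# Example: model terrain 40 m higher than the station, so the model temperature
# is warmed by about 0.26 K before comparison.
t_adjusted = terrain_adjust_temp(t_model=299.0, z_model_terrain=340.0, z_station=300.0)
```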
Results are presented in Fig. 3, along with a comparison to statistics computed from the 10- and 29-km Eta Model forecasts provided by NCEP. Similar adjustments to account for differences in model and actual observation heights were computed by NCEP prior to the arrival of the forecasts at Peachtree City. The results are an average of all Eta forecasts initialized at 0300 UTC and all RAMS forecasts initialized at 0600 UTC for the period 2 July–24 August 1996. The plotted Eta results are displaced by 3 h so that the comparisons are displayed at common forecast valid times. RAMS 8-km forecasts were available at 1-h increments and Eta forecasts were available at 3-h increments.
The bias and rms results indicate an improvement with RAMS compared to both 10- and 29-km Eta Model forecasts through 1200 UTC for temperature (rms reductions of 0.1–0.9 K), dewpoint (rms reductions of 0.2–1.0 K), and wind. Improvements continue through 1500 UTC for dewpoint, and rms reductions of 0.4–1.0 m s−1 are evident for wind throughout the entire forecast period. Several experimental forecasts were conducted after the games in an attempt to explain the cool temperature and moist dewpoint biases after 1200 UTC (0800 LT). These simulations suggested that RAMS was slow in mixing out the boundary layer during the late morning. Otherwise, the forecast improvements are likely the result of better initialization of RAMS by LAPS and higher model grid resolution.
Forecasters were asked to subjectively compare model predictions with physical observations and other visual accounts (e.g., human observations) of the weather. Although not as rigorous as a quantitative approach, the qualitative evaluation is useful for subjective comparisons with alternative data sources and other model output. For this article, a qualitative evaluation is presented of the model's mesoscale precipitation predictions. This important forecast is difficult to quantify because of a lack of observations at high temporal and spatial resolution, but the operational forecasters can provide subjective insight into the model's performance through comparisons with radar, satellite, and human observations. Forecaster comments were obtained through personal communication and from forecaster surveys completed by the Olympic support operational personnel (Rothfusz and McLaughlin 1997).
The location and timing of the RAMS precipitation forecasts were, in general, quite good. However, there was a very large overprediction of precipitation amount associated with convection. This is likely the result of the 8-km grid spacing being insufficient to fully resolve the airmass thunderstorms typical during the Georgia summer. The capability of RAMS to even represent convective-scale storms was a noted improvement over the other available forecast models. Added value was also recognized from the ability to restart the model every 3 h. Two benefits were evident from this strategy. First, the early morning initialized predictions were often incapable of predicting the subtle mesoscale surface forcings that are important to the prediction of afternoon convection in the subtropical environment. But, once these features started to be detected in the later morning LAPS analyses, the RAMS forecasts were able to “latch onto” these features and generate reliable convective guidance. Second, as the forecasters observed common features in repeated predictions, they became more confident using these particular forecast features.
The 0900 UTC, 2-km RAMS predictions were designed to enhance the detailed sea-breeze forecasts required by the yachting venue. Local buoy observations of sea surface temperature were used in the RAMS model initialization. In addition to the standard N-AWIPS products, special point wind forecast products were generated at half-hour increments for the two buoy positions (see Fig. 2), which were collocated with the yachting event sites. The special products included a textual derived surface wind prediction (Fig. 4) and a forecast time–height series of wind (Fig. 5). These products helped to define the timing, penetration, and direction of the sea breeze, and the operational forecasters noted that these parameters were well forecast by RAMS. A common theme expressed by all the forecasters was that the RAMS forecasts by themselves were generally good, but the predictions were most useful when viewed in combination with all other available guidance. A more detailed validation of all model forecasts in the vicinity of the yachting venue is provided by M. Powell and S. Rinard (1998, manuscript submitted to Wea. Forecasting).
Several 2-km grid forecasts using a window centered over Atlanta were performed prior to the start of the games. These forecasts were often contaminated by the lateral boundaries, especially toward the latter stages of the prediction cycle. For these small domains, results were good when strong forcing (e.g., sea-breeze forcing) existed within the domain. Results were not as good, however, when the internal forcing was weak, as for the Atlanta domain. For these reasons, the only 2-km grid forecasts scheduled were in support of the Savannah yachting venue. A subject of ongoing research is to determine whether better 2-km grid forecasts over land can be obtained by using a nested grid system that would reduce the adverse effects of the lateral boundaries.
5. Benefits to the local forecast office
The LAPS–RAMS system has been running quasi-operationally at FSL for several years. Results from this system have suggested that the local forecast office could realize many benefits from running a mesoscale analysis and forecasting system on site (Snook and Pielke 1995; Snook 1996). Now, for the first time, these benefits have been demonstrated in a true operational environment.
The LAPS analyses provide a unique way for the forecasters to monitor and evaluate the local weather. LAPS combines many different data types, including local and national, remotely sensed and in situ, into a complete picture of the current conditions. This allows the forecasters to summarize the latest data and decide where they must focus their attention for a more detailed evaluation. Frequent updates provide continuity, valuable in developing weather situations. On a workstation designed for mesoscale forecasters, LAPS products (both analyses and forecasts) can be overlaid on satellite or radar images, as well as combined with the actual observations, to further enhance their utility. In addition, the high-frequency analyses provide the initial conditions to run the mesoscale model when the forecasters choose and also allow them to quickly evaluate the resulting high-frequency forecasts. The forecaster surveys indicated that the high-frequency predictions were one of the most important benefits of the local model. Identifying common features and trends in new model output and in previous forecasts provided a higher level of confidence for these predictions.
Local control of the mesoscale model proved beneficial. The ability to interactively select the model domain, the model grid resolution, the model start time and duration, and the frequency of model predictions allowed the operational forecasters to tailor a strategy that would best meet their needs for the local forecast problem. This is a decision that can only be made in the local forecast office. No special forecaster education or training is required for this local control.
Locally produced weather analyses and predictions greatly reduce the amount of required telecommunications, from both a data collection standpoint and a model output dissemination standpoint. Local data sources, which may not be available to the NCEP central facility in a timely fashion, can be incorporated into local analyses and prediction. The volume of model output continues to grow exponentially in combination with expanding computer hardware capabilities. Indeed, the OWSS dedicated T1 connection to NCEP was frequently saturated with numerous model predictions and products. Frequent saturation also occurred on the dedicated T1 connection between the Savannah OMWSO and the Peachtree City OWSO, which had the additional burden of transmitting LAPS and RAMS products. Communication of model output over long distances to another computer platform is not necessary when the model runs locally on the same network of computers that controls the operational meteorological workstation. This also eliminates the problem of degrading the frequency and resolution of the model output that frequently occurs when disseminating model data from the NCEP central facility to a local office. Hence, the whole flow of data (from collection into the local analyses, to model initialization, to model computations, to model output visualization) occurs in a timely fashion at one location.
Rothfusz and McLaughlin (1997) also noted the LAPS–RAMS benefits in their overview of the Olympic weather support. Overall observations included, “LAPS was a critical tool in all mesoscale forecasting situations” and “Locally controlled, regional models have significant benefit to forecasts and warnings.” Finally, it is important to understand that the locally produced local-domain numerical weather analysis and prediction effort is not intended to replace any guidance that is available from the NCEP central modeling facility or any other center. The local-domain forecasting support is designed to provide an additional mesoscale forecast tool to the suite of products already available on the meteorological workstation. The experiences at Peachtree City and Savannah have successfully demonstrated this synergy.
6. Summary
The LAPS–RAMS meso-β-scale analysis and prediction system was implemented in the OWSS at the Peachtree City NWS Forecast Office to support the high-resolution weather forecast and warning requirements of the Olympic Games. This was the first operational implementation of the complete LAPS–RAMS system. The capability to generate meso-β-scale analyses and forecasts in the local forecast office using technology typical of that planned for the NWS in the next several years was successfully demonstrated. Frequently updated, high-resolution analyses provided the forecasters with a tool to summarize the large amounts of data coming into the OWSS, in addition to always having model initial conditions available. Local control of the mesoscale model allowed the operational forecasters to tailor the model characteristics to better meet their forecast requirements. Operation of the model in the local environment allowed for more frequent mesoscale forecasts and more timely receipt of the model output. A quantitative and qualitative assessment of the model performance indicates that the local model provided added value to the other guidance available through the OWSS. Furthermore, the demonstration showed that the logistics of running a mesoscale model in the local forecast office can be accomplished with minimal human resources. The OWSS, with LAPS–RAMS included, is an excellent example of the enhanced operational mesoscale forecast capabilities that will be available to the NWS and other forecast offices in the near future.
Acknowledgments
The authors wish to thank Drs. Roger Pielke and William Cotton of Colorado State University and Dr. Craig Tremback of Mission Research Corporation for their permission to use RAMS for this project. The Advanced Computing Group within FSL is acknowledged for their help in parallelizing the RAMS model. RAMS was developed under the support of the National Science Foundation and the Army Research Office. IBM is acknowledged for providing SP2 hardware and software support to the Olympic Games effort.
Thanks also to the other members of the LAPS team who contributed to this project: Steve Albers, Dan Birkenheuer, Barb Keppler, John Smart, Paul Schultz, and Linda Wharton. Clark Safford and J. T. Johnson provided much help from the OWSS side. Profiler data were provided to LAPS by Mike Barth, Forecast Systems Laboratory, and Allen White, Environmental Technology Laboratory. The authors also thank Paul Schultz and Nita Fullerton for reviewing this paper. Many helpful suggestions were obtained through formal reviews by Ed Szoke, Lans Rothfusz, Mark Powell, and Carlos Garza. Thanks to Will von Dauster and Lloyd Treinish for help with figure preparation.
REFERENCES
Albers, S. C., 1995: The LAPS wind analysis. Wea. Forecasting, 10, 342–352.
——, J. A. McGinley, D. L. Birkenheuer, and J. R. Smart, 1996: The Local Analysis and Prediction System (LAPS): Analysis of clouds, precipitation, and temperature. Wea. Forecasting, 11, 273–287.
AMS, 1997: Cover figure. Preprints, 13th International Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc.
Barnes, S., 1964: A technique for maximizing details in numerical weather map analysis. J. Appl. Meteor., 3, 396–409.
Barth, M. F., R. B. Chadwick, and W. M. Faas, 1997: The Forecast Systems Laboratory Boundary Layer Profiler Data Acquisition Project. Preprints, First Symp. on Integrated Observing Systems, Long Beach, CA, Amer. Meteor. Soc., 130–137.
Benjamin, S. G., K. A. Brewster, R. Brümmer, B. F. Jewett, T. W. Schlatter, T. L. Smith, and P. A. Stamus, 1991: An isentropic three-hourly data assimilation system using ACARS aircraft data. Mon. Wea. Rev., 119, 888–906.
——, and Coauthors, 1997: Improvements in aviation forecasts from the 40-km RUC. Preprints, Seventh Conf. on Aviation, Range, and Aerospace Meteorology, Long Beach, CA, Amer. Meteor. Soc., 411–416.
Birkenheuer, D. L., 1992: The LAPS specific humidity analysis. NOAA Tech. Memo. ERL-FSL-1, NOAA Forecast Systems Laboratory, Boulder, CO, 39 pp. [Available from FSL, 325 Broadway, Boulder, CO 80303-3328; or online from http://www.fsl.noaa.gov.].
——, 1996: Applying satellite gradient moisture information to local-scale water vapor analysis using variational methods. J. Appl. Meteor., 35, 24–35.
Black, T. L., 1994: The new NMC mesoscale eta model: Description and forecast examples. Wea. Forecasting, 9, 265–278.
Brewster, K., S. Albers, F. H. Carr, and M. Xue, 1995: Initializing a nonhydrostatic forecast model using WSR-88D data and OLAPS. Preprints, 27th Conf. on Radar Meteorology, Vail, CO, Amer. Meteor. Soc., 252–254.
Christidis, Z., J. Edwards, and J. S. Snook, 1997: Regional weather forecasting in the 1996 Summer Olympic Games using an IBM SP2. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 22–25.
desJardins, M. L., S. Jacobs, D. Plummer, and S. Schotz, 1997: N-AWIPS: AWIPS at the National Centers for Environmental Prediction. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 296–298.
Edwards, J., J. S. Snook, and Z. Christidis, 1997: Forecasting for the 1996 Summer Olympic Games with the SMS-RAMS parallel model. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 19–21.
Garza, C., and G. Hoogenboom, 1996: The integration of diverse environmental data collection systems used in support of the 1996 Summer Olympic Games. Preprints, 12th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, Amer. Meteor. Soc., 40–42.
——, and ——, 1997: Success experienced in the use of diverse surface weather data collection systems in support of the 1996 Olympic Games. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 1–4.
Haltiner, G. J., and R. T. Williams, 1980: Numerical Prediction and Dynamic Meteorology. 2d ed. Wiley, 477 pp.
Lewis, J. M., 1971: Variational subsynoptic analysis with applications to severe local storms. Mon. Wea. Rev., 99, 786–795.
Louis, J. F., 1979: A parametric model of vertical eddy fluxes in the atmosphere. Bound.-Layer Meteor., 17, 187–202.
MacDonald, A. E., and J. S. Wakefield, 1996: WFO-Advanced: An AWIPS-like prototype forecast workstation. Preprints, 12th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, Amer. Meteor. Soc., 190–193.
McGinley, J. A., 1982: A diagnosis of alpine lee cyclogenesis. Mon. Wea. Rev.,110, 1271–1287.
——, 1987: Use of satellite IR data in the enhancement of surface thermal fields. Proc. Symp. on Mesoscale Analysis and Forecasting, Vancouver, BC, Canada, ESA SP-282, 123–128.
——, 1995: Opportunities for high resolution data analysis, prediction, and product dissemination within the local weather office. Preprints, 14th Conf. on Weather Analysis and Forecasting, Dallas, TX, Amer. Meteor. Soc., 478–485.
——, and P. A. Stamus, 1996: A quality control scheme for local surface mesonet observations based on the Kalman filter. Preprints, 15th Conf. on Weather Analysis and Forecasting, Norfolk, VA, Amer. Meteor. Soc., 223–226.
Molenar, D., K. J. Schrab, J. F. W. Purdom, and H. Gosden, 1996: RAMSDIS in digital satellite data training and analysis. Preprints, 12th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, Amer. Meteor. Soc., 160–163.
Pielke, R. A., and Coauthors, 1992: A comprehensive meteorological modeling system—RAMS. Meteor. Atmos. Phys., 49, 69–91.
Rodriguez, B., L. Hart, and T. Henderson, 1996: Parallelizing operational weather forecast models for portable and fast execution. J. Parallel and Distributed Comput., 37, 159–170.
Rothfusz, L. P., and M. R. McLaughlin, 1997: Weather support for the XXVI Olympiad. NOAA Tech. Memo. NWS SR-184, National Weather Service, Southern Region, Fort Worth, TX, 70 pp. [Available from National Technical Information Service, U.S. Dept. of Commerce, 5285 Port Royal Road, Springfield VA 22161.].
——, J. T. Johnson, L. C. Safford, M. R. McLaughlin, and S. K. Rinard, 1996: The Olympic Weather Support System. Preprints, 12th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, Amer. Meteor. Soc., 1–6.
Snook, J. S., 1996: Local domain forecasting support to the 1996 Atlanta Olympic Games. Preprints, 12th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, Amer. Meteor. Soc., 32–35.
——, and R. A. Pielke, 1995: Diagnosing a Colorado heavy snow event with a nonhydrostatic mesoscale numerical model structured for operational use. Wea. Forecasting, 10, 261–285.
——, J. M. Cram, and J. M. Schmidt, 1995: LAPS/RAMS: A nonhydrostatic mesoscale numerical modeling system configured for operational use. Tellus, 47A, 864–875.
Stamus, P. A., L. C. Safford, J. T. Johnson, and L. P. Rothfusz, 1997: The creation and use of “interobservations” for Olympic venues. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 15–18.
Treinish, L. A., and L. P. Rothfusz, 1997: Three-dimensional visualization for support of operational weather forecasting at the 1996 Centennial Olympic Games. Preprints, 13th Int. Conf. on Interactive and Processing Systems for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 31–34.
Walko, R. L., W. R. Cotton, J. L. Harrington, and M. P. Meyers, 1995: New RAMS cloud microphysics parameterization. Part I: The single-moment scheme. Atmos. Res., 38, 29–62.
Fig. 1. OWSS LAPS domain and data.
Fig. 2. Grayscale representation of an operational surface product from the 9-h RAMS forecast of temperature (shaded, °F), relative humidity (contours, %), and wind (1 full barb = 5 m s−1) valid at 1800 UTC 31 July 1996 for the 2-km horizontal grid supporting the Savannah yachting events. Wind barbs are plotted at every other model grid point. Wassaw Sound was the location of the Olympic Games yachting venue with events held near buoys 21 and 23. Figure prepared by L. Treinish, IBM, using IBM’s Visualization Data Explorer™.
Fig. 3. Hourly bias and rms differences of (a) temperature (K) and (b) dewpoint (K), and hourly rms differences of (c) wind vector difference (m s−1) averaged for the period 2 July–24 August 1996. Differences are computed by subtracting surface observations from model forecasts (RAMS, solid; 29-km Eta, long dashed; 10-km Eta, short dashed). Model initialization time was 0600 UTC for RAMS and 0300 UTC for Eta. Plotted Eta results are displaced by 3 h so that the comparisons are displayed at common forecast valid times (UTC).
Fig. 4. Textual RAMS point surface wind forecast for buoy locations 21 and 23 (see Fig. 2) valid from 0900 to 2300 UTC on 31 July 1996. Half-hourly forecasts of wind speed (kt) and direction (°) were provided to the operational marine forecasters, which aided in the preparation of the detailed sea-breeze predictions.
Fig. 5. Forecast time–height series of RAMS wind (1 full barb = 5 m s−1) for buoy location 21 (see Fig. 2) valid from 0900 through 2300 UTC 31 July 1996.
Table 1. RAMS model grid configurations.
Table 2. RAMS model physics.