Abstract

Localized Aviation MOS Program (LAMP) forecasts of ceiling height, visibility, wind, and other weather elements of interest to the aviation community have been produced and put into the National Digital Guidance Database (NDGD) since 2006. The High Resolution Rapid Refresh (HRRR) model is now producing explicit forecasts of ceiling height and visibility computed by algorithms based on variables directly forecasted by the HRRR. The Meteorological Development Laboratory has improved the LAMP ceiling and visibility forecasts by combining these two sources of information into a LAMP–HRRR MELD. The new forecasts show improvement over the original LAMP and particularly over the HRRR and persistence in terms of bias, threat score, and the Gerrity score. This paper explains how the MELD is produced and shows selected verification and example forecasts. A new guidance product based on this work will be made available to partners and customers.

1. Introduction

The National Weather Service (NWS) has been disseminating a suite of weather forecast guidance products from the current version of the Localized Aviation MOS Program (LAMP) since 2006 (Ghirardelli and Glahn 2010). The primary purpose of LAMP is to support aviation interests, and included in that suite are forecasts of ceiling height and visibility at specific sites that report those variables, predominantly METAR (OFCM 1995) sites. LAMP provides forecasts each hour, available about 40 min after the hour, at hourly projections out to 25 h. Since 2010, LAMP gridded forecasts over the conterminous United States (CONUS) have been put into the National Digital Guidance Database (NDGD), the guidance companion to the National Digital Forecast Database (NDFD; Glahn and Ruth 2003). A number of numerical models also produce forecasts of ceiling and visibility, including the High Resolution Rapid Refresh (HRRR) model (Benjamin et al. 2016) that is run operationally at the National Centers for Environmental Prediction (NCEP). The Meteorological Development Laboratory (MDL) has developed a method to meld the LAMP forecasts with those created by the HRRR model to produce probabilistic and categorical values of ceiling height and visibility. The MELD categorical forecasts improve over LAMP alone by a substantial amount for projections from 4 through 17 h, as measured by the threat score (TS; Palmer and Allen 1949; Wilks 2011)1 for both warm (April–September) and cool (October–March) seasons.

2. The LAMP model

The LAMP system is described in Ghirardelli and Glahn (2010). Basically, it follows the MOS (Glahn and Lowry 1972) paradigm, whereby a predictand, usually composed of observations (obs) of a weather variable, is related to a variety of predictors. The predictors used in LAMP for ceiling and visibility prediction come from three sources: 1) the current observation of the variable being forecast, 2) the output from three simple advective models, and 3) the MOS forecasts based on NCEP’s Global Forecast System (GFS MOS; Dallavalle et al. 2004). Very short-range forecasts (i.e., on the order of an hour or two) must be heavily based on the current observation for the forecasts to compete favorably with the ob itself as a forecast (persistence). Essentially, the LAMP model furnishes a blending mechanism from the obs at initial time to GFS MOS at the longest projections.

When dealing with highly nonnormal and discontinuous distributions such as ceiling and visibility, MDL has found that the regression estimation of event probabilities (REEP; Miller 1964; Wilks 2011) method of development gives better results than working with a continuous predictand (e.g., Bocchieri and Glahn 1972, p. 877; D. Unger and B. Glahn, unpublished manuscript2). The predictand is divided into several, say M, categories, and REEP estimates the probability of occurrence of each category. A predictand category is given the value 1 if it occurs and 0 if it does not; this defines the binary predictand necessary for REEP. The categories can be either discrete or cumulative (from above or below). For development purposes, it is better to use cumulative binaries (Glahn 1965, pp. 125 and 126), but for provision to users, discrete categories are often preferred. It is also customary for many or all of the predictors in this regression to be binary and, generally, cumulative binary.
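The mechanics are compact enough to sketch. The snippet below is a minimal Python illustration, not the operational MDL code: it converts a continuous predictand into cumulative-from-below binaries and fits one least-squares equation per category, whose fitted values are read as probabilities. The category limits and names here are ours, chosen only for illustration.

```python
import numpy as np

def cumulative_binaries(values, upper_limits):
    """Column k is 1 where value < upper_limits[k], else 0
    (cumulative-from-below binary predictands)."""
    v = np.asarray(values, dtype=float)
    return np.column_stack([(v < u).astype(float) for u in upper_limits])

def fit_reep(X, y_binary):
    """Ordinary least squares on a binary predictand (REEP); the
    fitted values are interpreted as event probabilities."""
    A = np.column_stack([np.ones(len(X)), X])  # intercept + predictors
    coef, *_ = np.linalg.lstsq(A, y_binary, rcond=None)
    return coef

# Illustrative ceiling limits in hundreds of ft: <200, <500, <1000, <3000 ft.
limits = [2, 5, 10, 30]
obs_ceiling = np.array([1, 8, 25, 90, 4, 60])   # toy observed ceilings (hd ft)
Y = cumulative_binaries(obs_ceiling, limits)
# One equation would then be fit per column, e.g., fit_reep(X, Y[:, k]).
```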

The M REEP equations are used to estimate the probability of the M predictand categories. However, usually a specific, single-value forecast of ceiling and of visibility is preferred, even required, by users of aviation forecasts. To produce such categorical forecasts, a probability threshold for each category is computed in such a manner that the bias3 of the category falls within prescribed limits, and within those limits, the TS is maximized. These thresholds are then used to make the cumulative categorical forecasts from which the discrete forecasts can be derived. The categories used by LAMP are indicated in Table 1; the cumulative categories are used for development. The lowest category of ceiling and of visibility was the lowest for which sufficient obs were available to develop stable equations.
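The threshold computation lends itself to a short sketch. The following Python illustration is our hedged reading of the procedure detailed in Ghirardelli and Glahn (2010), not the operational code: it scans candidate thresholds, keeps those whose categorical bias falls within the prescribed limits, and returns the one that maximizes the TS. The function name and the default limits are ours.

```python
import numpy as np

def calibrate_threshold(prob, obs_event, bias_lo=1.0, bias_hi=1.2):
    """Choose the probability threshold whose yes/no forecasts of a
    cumulative event have a bias within [bias_lo, bias_hi] and, within
    those limits, the highest threat score (TS).  Assumes the event
    occurs at least once in the sample."""
    prob = np.asarray(prob, float)
    obs = np.asarray(obs_event, bool)
    n_obs = obs.sum()
    best_t, best_ts = None, -1.0
    for t in np.unique(prob):                 # every distinct probability
        fcst = prob >= t
        if not bias_lo <= fcst.sum() / n_obs <= bias_hi:
            continue
        hits = np.sum(fcst & obs)
        ts = hits / (fcst.sum() + n_obs - hits)   # hits+misses+false alarms
        if ts > best_ts:
            best_t, best_ts = t, ts
    return best_t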

Table 1.

Category definitions of LAMP ceiling height and visibility (Ghirardelli and Glahn 2010). Ceilings are observed (reported) in hundreds (hd) of feet (ft). Visibilities are observed to fractions of a mile (mi) when the visibility is low.


The LAMP forecasts are made from REEP equations developed on a regional basis. That is, stations within a geographic region for which it was thought the predictand–predictor relationships were similar were grouped, and all such stations share the same equations. Some grouping of stations is necessary because there are not enough occurrences of low ceilings or visibilities at a single station to allow a stable equation to be developed for each station. The predictand data for producing the LAMP equations were the METAR obs. Equations for ceiling and also for visibility were developed for all projections from 1 through 25 h simultaneously so that the same set of predictors was chosen for all projections to maximize the consistency of the forecasts from one projection to the next [see Glahn and Wiedenfeld (2006) and Ghirardelli and Glahn (2010) for details]. Forecasts are made each hour for hourly projections out to 25 h for about 1552 stations,4 the stations that had METAR obs when the equations were developed. These forecasts are analyzed by the Bergthorssen–Cressman–Doos–Glahn (BCDG; Glahn et al. 2009; Im and Glahn 2012; Glahn and Im 2015) method to the grid used in the NDFD and NDGD.

3. The HRRR model

The HRRR is a convection-permitting, hourly updated model that has been running at the Earth System Research Laboratory (ESRL) for a number of years and is now running at NCEP; its parent model, the Rapid Refresh (RAP), and the physics and assimilation common to the RAP and HRRR are described in Benjamin et al. (2016). The HRRR model produces ceiling and visibility forecasts according to internal algorithms for projections from 1 through 15 h. The forecasts are specific values in meters, with ceiling height referenced to sea level.

4. Data availability and preparation

LAMP probability and categorical forecasts are made for specific locations (stations) and are archived. Gridded specific-value forecasts are also available on the NDGD grid, but not gridded probability forecasts; however, gridded probability forecasts could be produced for the sample if needed. HRRR forecasts are available on a 3-km grid and could be interpolated either to stations or to the NDGD grid. The obs are available at stations and are analyzed onto the NDGD grid with the BCDG analysis system (Glahn et al. 2009; Im and Glahn 2012; Glahn and Im 2015). Therefore, the matching of the predictands and predictors for the MELD statistical analysis could be done either at stations or at grid points. Because the predictand is observed at stations, there is no reason to grid the obs and do the statistical analysis at grid points: all the predictand information is in the station values, an analysis of those values adds no information, and values at grid points, being interpolated (analyzed) rather than observed, would be less accurate than the station values themselves. Therefore, the MELD regression analysis was done at stations.

a. LAMP forecasts

Operational LAMP ceiling and visibility forecasts have been archived in both the probabilistic and categorical forms for the first seven cumulative categories of ceiling and the first six categories of visibility shown in Table 1.

After development of the regression equations based on the then-available data, some other stations were later added within the CONUS regions and also over southern Canada as extensions of the adjacent regions. For those added stations that do not have obs, LAMP “backup” equations that do not include obs as predictors are used. No LAMP equations could be developed for locations over water because of the lack of obs, but some forecasts over water have been added by using nearby land backup equations. These point forecasts are gridded with the BCDG method for guidance for forecasters in preparing grids for the NDFD; an example of these gridded forecasts is shown in Fig. 1. This example was chosen without reference to forecasts, but rather on the basis of a well-defined frontal system in the central part of the United States, as shown in Fig. 2. However, neither these gridded LAMP forecasts nor the forecasts produced by backup equations for the added stations were used in the development of the regression meld of LAMP and HRRR data.

Fig. 1.

The LAMP categorical ceiling height forecast (color bar; thousands of ft), 7-h projection from 1200 UTC 11 Apr 2013.


Fig. 2.

Sea level pressures and fronts for 1200 UTC 11 Apr 2013.


b. HRRR forecasts

Two years of HRRR forecasts were available starting in March 2013. The HRRR data were obtained from the ESRL/Global Systems Division archive of HRRR data collected before the HRRR ran in the NCEP operational job stream. We divided the sample into warm seasons (April–September) and cool seasons (October–March), as is characteristic of MOS and LAMP development at MDL. Of the 12 months in each seasonal sample, we used eight for development and four for testing, as indicated in Tables 2 and 3.

Table 2.

The warm season months marked with an X are those used for independent testing.

Table 3.

The cool season months marked with an X are those used for independent testing.


During the 2-yr period, the HRRR was changed from HRRR1 to HRRR2. Our cool season development used only HRRR1; the warm season featured a mix of HRRR1 and HRRR2. While mixing model versions in development or applying results in operations to a model different from that used during development is to be avoided if possible, we judged it to be acceptable in this case. The viability of the process is demonstrated by verification on independent data shown later.

The HRRR ceiling and visibility forecasts are available each hour at hourly increments on a 3-km Lambert conformal grid covering the CONUS for projections from 1 through 15 h. To furnish the regression dataset, the forecasts were interpolated from the HRRR grid to the LAMP stations. The MELD of HRRR and LAMP forecasts should be distributed very shortly after the time LAMP is currently available, about 40 min after the top of the hour. The HRRR run is not completed until nearly an hour later, so for any given LAMP start time (cycle), the HRRR run from the previous hour must be used. For instance, for the 1200 UTC LAMP cycle, the HRRR 1100 UTC cycle is used. The HRRR ceiling forecasts are referenced to sea level, so the HRRR terrain was used to adjust the forecasts to above ground level, the way ceiling heights are expressed for aviation uses. In addition, visibility was converted from meters to miles and ceiling was converted from meters to hundreds of feet, the conventional units used in aviation.
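These conversions are simple but easy to get wrong, so a small sketch may help. The constants are standard; the function name and argument layout are ours.

```python
import numpy as np

M_PER_FT = 0.3048
M_PER_MI = 1609.344

def hrrr_to_aviation_units(ceiling_asl_m, terrain_m, visibility_m):
    """Convert HRRR ceiling (meters above sea level) to hundreds of feet
    above ground level using the HRRR terrain height, and visibility
    from meters to statute miles."""
    ceiling_agl_m = np.maximum(ceiling_asl_m - terrain_m, 0.0)
    ceiling_hft = ceiling_agl_m / M_PER_FT / 100.0  # hundreds of ft AGL
    visibility_mi = visibility_m / M_PER_MI
    return ceiling_hft, visibility_mi
```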

The HRRR forecasts offer a level of detail that looks synoptically realistic, as shown in Fig. 3 for the same case as in Figs. 1 and 2, but much of it is beyond the realm of predictability at the present time. For instance, visibilities that vary from 8.0 to 0.5 mi within the space of 10 km or so are possible, but are not generally observed or forecastable on this scale. Therefore, after the HRRR forecasts were converted to the predictor categories (discussed later), a preprocessor (to the melding) was run on them that essentially eliminated or coalesced "spots" ≤7.5 km in diameter. The resulting detail is still of marginal predictability but is more plausible.5 Figures 3 (before) and 4 (after) show the effects of this "spot removal," which is sketched below.
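A minimal version of such a spot remover can be built from connected-component labeling. The sketch below is our illustration, not the MDL code: it relabels any connected region smaller than a size cutoff (roughly 7.5 km across on the 3-km grid) to the most common category along its border, and it omits the terrain check described in footnote 5. The cutoff of five cells is our assumption.

```python
import numpy as np
from scipy import ndimage

def remove_small_spots(cat_grid, max_cells=5):
    """Coalesce connected regions of an integer category grid that
    cover fewer than max_cells points into the most common category
    along their border (4-connectivity).  Illustrative only."""
    out = cat_grid.copy()
    for cat in np.unique(cat_grid):
        labels, n = ndimage.label(out == cat)
        for i in range(1, n + 1):
            region = labels == i
            if region.sum() >= max_cells:
                continue
            border = ndimage.binary_dilation(region) & ~region
            neighbors = out[border]          # categories surrounding the spot
            if neighbors.size:
                out[region] = np.bincount(neighbors).argmax()
    return out
```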

Fig. 3.

The HRRR ceiling height forecast (color bar; thousands of ft) for 8-h projection from 1100 UTC 11 Apr 2013. The HRRR model grid in the archive we used does not fully cover the forecast rectangle.


Fig. 4.

As in Fig. 3, but after removal or coalescing of small spots.


c. Observations

METAR and other obs have been archived by MDL for many years in standard aviation units. These data have undergone both interelement consistency checks and temporal continuity quality control checks. They were accessed to extract the needed data.

5. Regression analysis

REEP was used to develop equations with predictors from the LAMP and HRRR models and from obs to produce MELD forecasts for projections from 1 through 25 h. The predictors are the same in the MELD equations for each projection, except that the model predictor projections “march” with the predictands. For instance, for the 1200 UTC cycle, and for the 6-h projection, the observation at 1800 UTC (the predictand) is matched with the LAMP 6-h forecast made with 1200 UTC data and the HRRR 7-h forecast made from 1100 UTC data so that the predictand and predictors have the same valid time. As noted earlier, a 1-h-old HRRR run has to be used to meet timeliness requirements. The predictors in the MELD regression equations were chosen by forward selection. At each selection step, the next predictor was chosen based on the highest added reduction of variance (RV) afforded by any potential predictor for any projection and any predictand category. The selection stopped when no potential predictor reduced any predictand variance by ≥0.5%.
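For one predictand, the selection procedure reduces to a short loop. The sketch below is a simplified, single-predictand rendering of the screening just described, ours rather than MDL's regression software; operationally, the added RV is evaluated jointly across all projections and all predictand categories before applying the 0.5% cutoff.

```python
import numpy as np

def forward_select(X, y, min_added_rv=0.005):
    """Greedy forward selection by added reduction of variance (RV)
    for a single binary predictand.  Stops when no remaining predictor
    adds at least min_added_rv (0.5% here) to the RV."""
    n, p = X.shape
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    chosen, current_rv = [], 0.0
    while True:
        best_j, best_rv = None, current_rv
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            rv = 1.0 - float(resid @ resid) / ss_tot
            if rv > best_rv:
                best_j, best_rv = j, rv
        if best_j is None or best_rv - current_rv < min_added_rv:
            return chosen
        chosen.append(best_j)
        current_rv = best_rv
```

"Forcing" previously selected predictors, as is done for the longer-projection equations described in section 5a, amounts to seeding `chosen` before the loop begins.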

To keep the process reasonably simple, and especially because of the limited data sample, a generalized approach was used, where all stations were grouped together. In an initial study, Glahn et al. (2014) determined that the LAMP probability forecasts are much better predictors than the categorical ones, so only the probabilities were used for the MELD equations.

The LAMP forecasts have only a few categories, sufficient for providing forecasts to users in table form. However, for a gridded product, more definition is desirable, so we used the expanded set of categories shown in Table 4. Two categories of visibility and one category of ceiling were added below those for which LAMP forecasts are available. For visibility, there is a category for each reportable value below 10 mi, except the very lowest ones. For ceiling, every reportable value below 1000 ft has a category, as do meaningful thresholds above that. The MELD produces a probability of each category. Using the same procedure as was used in LAMP, we developed thresholds to produce categorical forecasts with biases in the range from 1.0 to 1.2. This process is explained fully in Ghirardelli and Glahn (2010). Because some of the categories cover more than one reportable value, the values put on the grid are sometimes averages; the values for the grid are shown in the third and fifth columns of Table 4. Also shown in Table 4 are the upper limits for instrument flight rules (IFR), low IFR (LIFR), very low IFR (VLIFR), and marginal visual flight rules (MVFR); any value above MVFR indicates visual flight rules.

Table 4.

The 16 cumulative-from-below predictand category upper limits for visibility and the 24 for ceiling, and the associated values for the grid used in the MELD. There is a category above the last one in the table, ≥10 mi for visibility and >12 000 ft for ceiling, the latter including unlimited ceiling. The categories for which LAMP forecasts exist are in boldface.

It is of considerable importance that the forecasts are not only consistent from projection to projection, but also from the analysis (0-h projection) to the 1-h projection. The regression software and development process were designed based on these considerations. To enhance continuity of the MELD, the initial obs were used in developing the MELD equations for all projections, as they had been in developing the LAMP equations.

We were also concerned about the possible lack of continuity between the 14-h projection, the longest projection for which the HRRR is available, and the 15-h and following projections. Therefore, we used the HRRR 14-h projection, not only for the 14-h MELD projection, but for all projections from 15 through 25 h.
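In code form, the pairing of HRRR projections with MELD projections can be expressed as a small helper. This is a hypothetical reading of sections 4b and 5, with the names and the cutoff constant ours:

```python
def hrrr_projection_for(meld_proj_h, offset_h=1, hrrr_last_h=14):
    """Projection of the 1-h-old HRRR run paired with a given MELD
    projection: valid-time matching adds offset_h, and beyond the
    last usable HRRR projection (14 h here) that final field is
    persisted for MELD projections through 25 h."""
    return min(meld_proj_h + offset_h, hrrr_last_h)
```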

a. Ceiling height

Grouping all stations together gave a large number of predictand–predictor pairs (sample size), varying for the warm season from about 335 000 for the 1-h projection to 297 000 for projections from 14 through 25 h. The decrease of sample size with projection was due to missing HRRR data. The low relative frequencies of low ceilings restricted the development method to a generalized operator (Bocchieri and Glahn 1972), as mentioned earlier. For instance, there were <100 occurrences of ceiling < 100 ft and <300 occurrences of ceiling < 200 ft in the 8-month developmental sample for all stations combined, so further spatial stratification would not be feasible unless the two lower categories were eliminated.

We were concerned that if all potential predictors—LAMP, HRRR, and obs—were offered together for selection, the HRRR might be overwhelmed by the obs, which are well known for their importance in the early projections. Therefore, we made an initial screening of only the seven LAMP predictors shown in Table 1 and the 12 binary HRRR predictors shown in Table 5 for projections from 1 through 14 h. For the warm season, all seven LAMP predictors and five of the 12 potential HRRR predictors were selected with the 0.5% RV cutoff criterion. We then forced these 12 predictors and added the 15 potential obs predictors. The six observation categories indicated in Table 5 were selected. Another regression run was made for projection hours from 14 through 25. All 18 of those previously selected were “forced,” but were included only if the additional RV was >0.01%. One of the obs, <8 mi, was not included in the equations for these projections. These are the equations used for the independent verification. The development for the cool season followed the same general process (see Table 5).

Table 5.

The 12 HRRR ceiling height forecasts and 15 ceiling height observations offered as binary predictors for predicting ceiling. The HRRR and obs predictors selected by screening on the 8-month developmental sample are in boldface.


One could speculate as to why these specific predictors were chosen. It is clear that the obs furnished information for the very low categories, for which LAMP and HRRR did not do an adequate job, and for the very short-range projections. The RVs for the categories below those for which LAMP forecasts are available were higher than for the other categories (not shown), indicating those equations were likely somewhat unstable because of the small number of cases.

b. Visibility

The developmental process for visibility was the same as for ceiling. The HRRR forecasts and obs offered as predictors, in addition to the six LAMP predictors, are shown in Table 6. Previous work for the cool season (see Glahn et al. 2014) showed that higher HRRR visibility thresholds were not useful. For the warm season, a trial regression was done in which all LAMP and HRRR predictors were screened together; all six LAMP predictors were selected along with only three HRRR predictors. The final regression run was made by forcing the six LAMP and the three HRRR predictors selected in the trial run. Five observation predictors were selected from the set shown in Table 6. The HRRR forecasts and observations chosen as predictors are indicated in boldface in Table 6.

Table 6.

The 11 HRRR visibility forecasts and 15 visibility observations offered as binary predictors for forecasting visibility. The HRRR and obs predictors selected by screening on the 8-month developmental sample are in boldface.

As with ceiling, the lower categories of obs were chosen as predictors for the low categories of visibility. In addition, three other categories of obs were chosen, indicating the importance of persistence in visibility prediction. Also, similarly to ceiling, the lower two categories of the predictand had unexpectedly high RVs, showing their respective equations were likely fitting the data too closely.

6. Evaluation of independent data

As described earlier, the development was done at stations—discrete points where the predictand data applied. For implementation and evaluation, three options were considered:

  1. interpolate the HRRR forecasts to the LAMP stations, apply the equations and thresholds at the LAMP stations, and analyze the probabilities (if they are desired) and categorical forecasts to the LAMP grid;

  2. analyze the LAMP station probabilities and observations to the LAMP grid, interpolate the HRRR forecasts to the same grid, and apply the equations and thresholds on the grid; and

  3. interpolate the HRRR forecasts to the LAMP stations, evaluate the equations at the LAMP stations, analyze the MELD probabilities, and apply the thresholds at the grid points.

Any one of the three processes will work, and it is not known which is best. We chose option 2 for the implementation process, but for the test sample verification we applied the equations and thresholds at stations.

We applied the implementation process to data from the 7-h forecast at 1200 UTC 11 April 2013. The results looked reasonable. Features of both LAMP and the HRRR could be seen, with the LAMP features being more apparent because LAMP furnished better predictors than did the HRRR. However, consistent with the suspected instability of the lowest-category equations, some "blobs" of category 1 forecasts were made in unexpected places. Such features detract from the overall usefulness of the MELD. Rather than not use the suspect equations, we chose to mitigate the effect by developing thresholds with biases between 0.4 and 0.6 for the two lowest categories.

The developmental equations were evaluated on the 4 months of test data indicated in Tables 2 and 3 for both warm and cool seasons. The primary scores were bias and TS for several categories, although the probability of detection, false alarm ratio, and Gerrity (1992) score were also computed (not shown). In all the verification graphs shown here, LAMP means the original LAMP forecasts; the equations on which those forecasts are based were developed several years before the test sample. The HRRR forecasts, taken from the same archive used in development, were interpolated from the HRRR grid to LAMP stations and for verification did not include the "spot removing" preprocessing that was performed for the regression analysis; unit conversion and adjustment to above ground level were, of course, done. All comparisons were on matched samples, differing only by projection. As discussed above, the predictand categories were defined as cumulative from below, and verification scores were computed for cumulative categories. The primary verification used the categories for which LAMP forecasts were available so that comparative verification could be done.
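The two primary scores are easy to state precisely. The sketch below computes both for one cumulative event from matched forecast/observation pairs, following the definitions used here (footnote 3 for bias; hits over hits plus misses plus false alarms for TS); the function name is ours.

```python
import numpy as np

def bias_and_ts(fcst_event, obs_event):
    """Categorical bias and threat score for one cumulative event
    (e.g., ceiling < 1000 ft); inputs are matched boolean arrays."""
    f = np.asarray(fcst_event, bool)
    o = np.asarray(obs_event, bool)
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    bias = (hits + false_alarms) / (hits + misses)  # fcst events / obs events
    ts = hits / (hits + misses + false_alarms)
    return bias, ts
```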

a. Cool season, ceiling height

Figures 5–8 show the biases (on a log scale) and TSs for ceiling height events < 1000 ft and events < 500 ft for the cool season. The LAMP and MELD biases are generally good; the HRRR less so. For this start time (1200 UTC), persistence is biased high except at short projections and again 24 h later, with the bias peaking around the 12-h projection. LAMP, MELD, and persistence TSs are nearly equal at 1 h; they all decline rapidly, persistence more rapidly than LAMP and the MELD less rapidly than LAMP. The uncalibrated HRRR is not competitive for a few hours from this start time. The MELD is better than LAMP at all projections except for their near equality at 1 h. Even though there are HRRR forecasts only up until 14 h, their influence lingers and then gradually diminishes.

Fig. 5.

Ceiling height bias for events < 1000 ft, cool season, 4 months of independent data. All verification figures are for the 1200 UTC start time. The “LAMP+HRRR” key means the MELD.


Fig. 6.

As in Fig. 5, but for ceiling height TS for events < 1000 ft during the cool season.


Fig. 7.

As in Fig. 5, but for ceiling height bias for events < 500 ft during the cool season.


Fig. 8.

As in Fig. 5, but for ceiling height TS for events < 500 ft during the cool season.


b. Cool season, visibility

Figures 9–12 show the biases and TSs for events < 3 mi and for events < 1 mi for the cool season. Although the TSs are lower for visibility than for ceiling height, the comments above for ceiling height apply to visibility as well.

Fig. 9.

As in Fig. 5, but for visibility bias for events < 3 mi during the cool season.


Fig. 10.

As in Fig. 5, but for visibility TS for events < 3 mi during the cool season.


Fig. 11.

As in Fig. 5, but for visibility bias for events < 1 mi during the cool season.


Fig. 12.

As in Fig. 5, but for visibility TS for events < 1 mi during the cool season.


c. Warm season, ceiling height

Figures 13–16 are similar to Figs. 5–8, but for the warm season. Although the TSs are somewhat lower for the warm season than for the cool season, the comments above still hold except that the effect of the 14-h HRRR fades more quickly and gives little or no improvement past about 18 h. It is surprising that the LAMP biases are above 1.5 for the later projections; this may be because the GFS MOS has changed since the development of the LAMP equations.

Fig. 13.

As in Fig. 5, but for ceiling height bias for events < 1000 ft during the warm season.


Fig. 14.

As in Fig. 5, but for ceiling height TS for events < 1000 ft during the warm season.


Fig. 15.

As in Fig. 5, but for ceiling height bias for events < 500 ft during the warm season.


Fig. 16.

As in Fig. 5, but for ceiling height TS for events < 500 ft during the warm season.


d. Warm season, visibility

Figures 17–20 are similar to Figs. 9–12, but for the warm season. For this start time (1200 UTC), the TSs are lower than for ceiling height and lower than for the cool season. The TSs for the HRRR are more similar to persistence than to LAMP and the MELD, except for the first 2 h, where the HRRR is not as good as persistence; however, by 14 h the HRRR is between persistence and the MELD. The LAMP biases show remarkable diurnal variation, being quite low and then quite high as the projection increases.

Fig. 17.

As in Fig. 5, but for visibility bias for events < 3 mi during the warm season.


Fig. 18.

As in Fig. 5, but for visibility TS for events < 3 mi during the warm season.


Fig. 19.

As in Fig. 5, but for visibility bias for events < 1 mi during the warm season.


Fig. 20.

As in Fig. 5, but for visibility TS for events < 1 mi during the warm season.


7. Equations for daily use

a. Ceiling height

The equations for daily use were developed on all 12 months of data. For projections from 1 through 14 h, the full set of potential predictors was offered for selection. Nearly the same sets of predictors were selected for the 12-month equations as for the 8-month equations shown in Table 5. These final predictors are also included in the equations for projections from 15 through 25 h.

A MELD forecast, depicted in Fig. 21, was made with the 12-month equations for the same case as shown in Figs. 1–4; features of both LAMP and HRRR can be seen. The MELD forecast contains some very small-scale features that are not forecastable, so spot removal software6 was applied to produce the slightly less "choppy" one shown in Fig. 22. The frontal detail shown by HRRR in Fig. 3 is generally present in Fig. 22. The small blue spot in northeastern Texas is caused by one LAMP station having a low ceiling forecast, and the spot is larger than what the software will remove; being a valid LAMP forecast, it is not obvious that it should be removed, even though it does not agree with its neighboring stations. Projection hour 7, depicted in these maps, is one where HRRR is expected to contribute strongly. Both verification and maps (not shown) indicate that the HRRR is much less influential at very short projections, and also past about projection hour 18.

Fig. 21.

The ceiling 7-h MELD forecast (color bar; thousands of ft) from 1200 UTC 11 Apr 2013.


Fig. 22.

As in Fig. 21, but after removal or coalescing of spots.


b. Visibility

As with ceiling, the equations for daily use were developed on all 12 months of data. For projections from 1 through 14 h, the six LAMP cumulative probability predictors (see Table 1) and the 11 HRRR predictors (see Table 6) were offered for selection. All six LAMP predictors and only three HRRR predictors were selected. These nine predictors were then forced when developing equations for all 25 projections. Five obs predictors were chosen, making a total of 14 predictors in the equations.

A MELD forecast, shown in Fig. 23, was made with the 12-month equations for the same case shown in Figs. 1–4. As with ceiling, a few small spots can result from the binary process we use for making the forecasts. The probability forecasts made directly from the equations are thresholded to make specific-value forecasts. When the probability is near the threshold for a category, the threshold may be "tripped" at one grid point but not at a neighboring one, so a forecast of one category can be made at one grid point and a forecast of a category above or below it at a neighboring grid point, resulting in speckles. This is essentially random noise and is not meaningful; a sketch of the thresholding convention follows. The spot-remover postprocessing routine was run on the grid depicted in Fig. 23 to yield the result shown in Fig. 24.
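One common convention for that thresholding, and the source of the speckling, is sketched below; this is our illustration of the idea, not the operational routine.

```python
def categorical_from_cumulative(probs, thresholds):
    """Given cumulative-from-below probabilities (lowest category
    first) and their calibrated thresholds, forecast the lowest
    category whose probability meets its threshold; if none is
    tripped, forecast the open category above the highest limit."""
    for k, (p, t) in enumerate(zip(probs, thresholds)):
        if p >= t:
            return k
    return len(probs)
```

Two neighboring grid points with probabilities just on either side of a threshold land in different categories, which is exactly the speckling the spot remover cleans up.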

Fig. 23.

The visibility 7-h MELD forecast (color bar; mi) from 1200 UTC 11 Apr 2013.


Fig. 24.

As in Fig. 23, but after removal or coalescing of spots.


8. Summary and conclusions

A system for making objective ceiling height and visibility forecasts at grid points based on a meld of the LAMP and HRRR predictions of those weather elements has been developed, tested, and readied for daily use. Observations at the initial time were also included in the regression equations, primarily for continuity from the analysis of observations at initial time to the 1-h forecast. The results shown here are for the 1200 UTC cycle and are consistent with earlier work for the cool season, 0000 UTC cycle (Glahn et al. 2014).

Overall, the MELD approach seems to be viable, with the MELD biases and TSs generally being markedly better than those of the HRRR or persistence alone, except for the 1-h forecast, where the MELD could not improve upon persistence. The MELD is also better than LAMP alone, except for the first hour or two and after about 18 h, when the 14-h HRRR forecast is no longer very useful. The MELD forecasts show characteristics of both LAMP and HRRR. The HRRR has much small-scale detail, some of which needs to be disregarded for specific point forecasts. While such detail might be reasonable for a 1-h projection, the HRRR does not do well at that range. At projections of several hours, where the HRRR is closer to being competitive with LAMP, pinpointing variations in ceiling and visibility on the order of 10 km is beyond current forecasting ability, and the smaller spots of this size are removed. However, larger-scale detail, such as the low ceilings and visibilities associated with the frontal structure east of the lower Mississippi River, is kept, as shown in Figs. 4, 22, and 24. While verification for only one start time is presented here, forecasts have been made and verified for all 24 start times, and the results are similar, although they do, of course, vary somewhat by start time.

The HRRR does not show good bias or high accuracy in Figs. 5–20. This is partly because the forecasts are not calibrated, and the individual category verifications do not indicate the overall usefulness of the HRRR, which is captured in the calibrated MELD.

Persisting the HRRR past 14 h can cause a feature to remain stationary for a time until the equation coefficients render the HRRR ineffective at longer projections. The alternative would have been to not use the HRRR past 14 h; then, a feature due to the HRRR would disappear immediately. This problem will be corrected when an HRRR archive of longer projections becomes available.

Benjamin et al. (2016) state that the crossover projection where the RAP becomes better than persistence for ceiling ≤ 1000 and ≤ 3000 ft is about 3 h during the summer and 4 h in winter. This is in general agreement with our results for the HRRR.

The MELD is a combination of models. LAMP itself incorporates three advective models and GFS MOS. In some sense, persistence can be called a model, as it furnishes extremely useful information. Heretofore, we have not incorporated a dynamic mesoscale model because of the lack of an adequate developmental sample and our belief that the model and its output would change significantly before operational implementation could be achieved. However, the HRRR has been developed to the point that we believe it (or some similar model) should be used. This is in concert with the national blend of models (Gilbert et al. 2016). It is unusual for an "improvement" to an existing operational product (LAMP) to be as large as the HRRR affords, and the improvement justifies the HRRR's use in the LAMP suite of products.

Acknowledgments

We are indebted to Gordana Sindic-Rancic and Chenjie Huang for assistance in data preparation. A portion of the work was funded by NOAA’s Nextgen Weather Program. This paper is the responsibility of the authors and does not necessarily represent the views of the NWS or any other governmental agency.

REFERENCES

Benjamin, S. G., and Coauthors, 2016: A North American hourly assimilation and model forecast cycle: The Rapid Refresh. Mon. Wea. Rev., 144, 1669–1694.

Bocchieri, J. R., and H. R. Glahn, 1972: Use of model output statistics for predicting ceiling height. Mon. Wea. Rev., 100, 869–879.

Dallavalle, J. P., M. C. Erickson, and J. C. Maloney III, 2004: Model output statistics (MOS) guidance for short-range projections. Preprints, 20th Conf. on Weather Analysis and Forecasting/16th Conf. on Numerical Weather Prediction, Seattle, WA, Amer. Meteor. Soc., 6.1. [Available online at https://ams.confex.com/ams/84Annual/techprogram/paper_73764.htm.]

Donaldson, R., R. Dyer, and M. Krauss, 1975: An objective evaluator of techniques for predicting severe weather events. Preprints, Ninth Conf. on Severe Local Storms, Norman, OK, Amer. Meteor. Soc., 321–326.

Gerrity, J. P., 1992: A note on Gandin and Murphy's equitable skill score. Mon. Wea. Rev., 120, 2709–2712.

Ghirardelli, J. E., and B. Glahn, 2010: The Meteorological Development Laboratory's aviation weather prediction system. Wea. Forecasting, 25, 1027–1051.

Gilbert, K. K., J. P. Craven, T. M. Hamill, D. R. Novak, D. P. Ruth, J. Settelmaier, J. E. Sieveking, and B. Veenhuis Jr., 2016: The national blend of models, version one. Preprints, 23rd Conf. on Probability and Statistics in the Atmospheric Sciences, New Orleans, LA, Amer. Meteor. Soc., 1.3. [Available online at https://ams.confex.com/ams/96Annual/webprogram/Paper285973.html.]

Glahn, B., and D. P. Ruth, 2003: The new digital forecast database of the National Weather Service. Bull. Amer. Meteor. Soc., 84, 195–201.

Glahn, B., and J. Wiedenfeld, 2006: Insuring temporal consistency in short range statistical weather forecasts. Preprints, 18th Conf. on Probability and Statistics in the Atmospheric Sciences, Atlanta, GA, Amer. Meteor. Soc., 6.3. [Available online at https://ams.confex.com/ams/pdfpapers/103378.pdf.]

Glahn, B., and J.-S. Im, 2015: Objective analysis of visibility and ceiling height observations and forecasts. MDL Office Note 15-2, NWS/Meteorological Development Laboratory, 17 pp. [Available online at https://www.weather.gov/media/mdl/MDL_OfficeNote15-2.pdf.]

Glahn, B., K. Gilbert, R. Cosgrove, D. P. Ruth, and K. Sheets, 2009: The gridding of MOS. Wea. Forecasting, 24, 520–529.

Glahn, B., R. Yang, and J. Ghirardelli, 2014: Combining LAMP and HRRR visibility forecasts. MDL Office Note 14-2, NWS/Meteorological Development Laboratory, 20 pp.

Glahn, H. R., 1965: Objective weather forecasting by statistical methods. Statistician, 15, 111–142.

Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 1203–1211.

Im, J.-S., and B. Glahn, 2012: Objective analysis of hourly 2-m temperature and dewpoint observations at the Meteorological Development Laboratory. Natl. Wea. Dig., 36 (2), 103–114.

Miller, R. G., 1964: Regression estimation of event probabilities. U.S. Weather Bureau Tech. Rep. 1, prepared by The Travelers Research Center, Hartford, CT, 153 pp. [Available online at http://www.dtic.mil/dtic/tr/fulltext/u2/602037.pdf.]

OFCM, 1995: Surface weather observations and reports. Federal Meteorological Handbook No. 1, Rep. FCM-H1-2005, NOAA/Office of the Federal Coordinator for Meteorological Services and Supporting Research, 104 pp. [Available online at http://www.ofcm.gov/publications/fmh/FMH1/FMH1.pdf.]

Palmer, W. C., and R. A. Allen, 1949: Note on the accuracy of forecasts concerning the rain problem. Weather Bureau Manuscript, 2 pp.

Schaefer, J. T., 1990: The critical success index as an indicator of warning skill. Wea. Forecasting, 5, 570–575.

Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. 3rd ed. Academic Press, 676 pp.

Footnotes

1. Palmer and Allen suggested the name because the event being forecast and evaluated was thought to be a threat. The TS is the same as the critical success index proposed by Donaldson et al. (1975) and discussed by Schaefer (1990).

2. This work is unpublished. The developers did much work in the early part of the LAMP project using various transformations of the visibility and ceiling height observations as predictands. This work was largely unsuccessful; reliable and skillful forecasts could not be made of the lowest values.

3. Bias for a categorical variable (event) is defined as the number of forecast events divided by the number of observed events.

4. LAMP forecasts are made for more than 1552 stations in operations, but we used only the ones that had observations and forecast equations when the development was done several years ago.

5. While the spot removal has some characteristics of smoothing, it is not smoothing in the usual sense where averages are computed. The integrity of "unusual" values is maintained when the area covered is of sufficient size or a number of unusual values are close together, even though they are not contiguous. No change of value is made unless the elevation difference among the points involved is <100 m, so that variations that may be due to terrain are maintained.

6. This postprocessing removes spots as large as 12.5 km across, while the preprocessing removes 7.5-km spots.