Search Results

You are looking at 1–10 of 36 items for

  • Author or Editor: Elizabeth Ebert
  • Refine by Access: All Content
Elizabeth Ebert

Abstract

Measurement of polar cloud cover is important because of its strong radiative influence on the energy balance of the snow and ice surface. Conventional satellite cloud detection schemes often fail in the polar regions because the visible and thermal contrasts between cloud and surface are typically small. Nevertheless, experts looking at satellite imagery can distinguish clouds from the surface by examining the textural characteristics of the scene.

This paper describes an automated pattern recognition algorithm which identifies regions of various surface and cloud types at high latitudes from visible, near-infrared, and infrared AVHRR satellite data. Five spectral features give information about the magnitude of albedos and brightness temperatures, while three textural features describe the variability and “bumpiness” in a scene. The maximum likelihood decision rule is used to classify each region into one of seven surface categories or 11 cloud categories.
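
The maximum likelihood decision rule amounts to assigning each region to the class whose modeled feature distribution makes the observed feature vector most probable. A minimal sketch, assuming Gaussian class-conditional distributions with means and covariances estimated from the training samples (the class labels and helper names below are illustrative, not from the paper):

    import numpy as np

    def gaussian_log_likelihood(x, mean, cov):
        """Log-density of a multivariate Gaussian at feature vector x."""
        diff = x - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

    def classify_region(features, class_stats):
        """Assign a feature vector (e.g. 5 spectral + 3 textural values) to the
        class whose Gaussian model gives it the highest likelihood.

        class_stats maps a class label (such as 'snow' or 'stratus over ice')
        to a (mean_vector, covariance_matrix) pair estimated from training samples.
        """
        scores = {label: gaussian_log_likelihood(features, mean, cov)
                  for label, (mean, cov) in class_stats.items()}
        return max(scores, key=scores.get)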

The algorithm was able to classify 870 training samples with a skill of 84%. Eighteen hundred artificial samples created using a Monte Carlo technique were classified with a skill of 92%, which represents the theoretical limit of class separability using the given features. Both the near-infrared information and the textural information proved to be especially useful in detecting high-latitude cloudiness. The algorithm experienced some difficulty identifying thin stratus over snow and ice and thin cirrus over land and water, situations which also prove difficult for most other cloud detection schemes.

When tested on AVHRR imagery from a different date, the algorithm showed a skill of 83% as verified against the analyses of three independent experts. Significant variability was encountered among the experts, underlining the need for an objective routine. This algorithm performed more accurately than others constructed with alternate feature sets corresponding to various existing cloud detection schemes.

Full access
Elizabeth E. Ebert

Abstract

The analysis of cloud cover in the polar regions from satellite data is more difficult than at other latitudes because the visible and thermal contrasts between the cloud cover and the underlying surface are frequently quite small. Pattern recognition has proven to be a useful tool in detecting and identifying several cloud types over snow and ice. Here a pattern recognition algorithm is combined with a hybrid histogram-spatial coherence (HHSC) scheme to derive cloud classification and fractional coverage, surface and cloud visible albedos, and infrared brightness temperatures from multispectral AVHRR satellite imagery. The accuracy of the cloud fraction estimates was between 0.05 and 0.26, based on the mean absolute difference between the automated and manual nephanalyses of nearly 1000 training samples. The HHSC demonstrated greater accuracy at estimating cloud fraction than three different threshold methods. An important result is that the prior classification of a sample may significantly improve the accuracy of the analysis of cloud fraction, albedos, and brightness temperatures over that of an unclassified sample.
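
To illustrate one ingredient of histogram and spatial-coherence schemes (a minimal sketch of the linear-mixing idea only, not the HHSC algorithm itself): once clear-sky and overcast infrared brightness temperatures have been identified for a scene, intermediate pixel temperatures can be converted to fractional cloud cover.

    import numpy as np

    def linear_mixing_cloud_fraction(t_obs, t_clear, t_cloud):
        """Pixel cloud fraction from IR brightness temperature by linear mixing.

        t_clear and t_cloud are the clear-sky and fully cloudy brightness
        temperatures diagnosed for the scene (e.g. from histogram peaks or
        low-variance clusters); intermediate observed temperatures map
        linearly onto fractions, clipped to the physical range [0, 1].
        """
        frac = (t_clear - t_obs) / (t_clear - t_cloud)
        return np.clip(frac, 0.0, 1.0)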

The algorithm is demonstrated for a set of AVHRR imagery from the summertime Arctic. The automated classification and analysis are in good agreement with manual interpretation of the satellite imagery and with surface observations.

Full access
Elizabeth E. Ebert

Abstract

A poor man's ensemble is a set of independent numerical weather prediction (NWP) model forecasts from several operational centers. Because it samples uncertainties in both the initial conditions and model formulation through the variation of input data, analysis, and forecast methodologies of its component members, it is less prone to systematic biases and errors that cause underdispersive behavior in single-model ensemble prediction systems (EPSs). It is also essentially cost-free. Its main disadvantage is its relatively small size. This paper investigates the ability of a poor man's ensemble to provide forecasts of the probability and distribution of rainfall in the short range, 1–2 days.

The poor man's ensemble described here consists of 24- and 48-h daily quantitative precipitation forecasts (QPFs) from seven operational NWP models. The ensemble forecasts were verified for a 28-month period over Australia using gridded daily rain gauge analyses. Forecasts of the probability of precipitation (POP) were skillful for rain rates up to 50 mm day−1 for the first 24-h period, exceeding the skill of the European Centre for Medium-Range Weather Forecasts EPS. Probabilistic skill was limited to lower rain rates during the second 24 h. The skill and accuracy of the ensemble mean QPF far exceeded that of the individual models for both forecast periods when standard measures such as the root-mean-square error and equitable threat score were used. Additional measures based on the forecast location and intensity of individual rain events substantiated the improvements associated with the ensemble mean QPF. The greatest improvement was seen in the location of the forecast rain pattern, as the mean displacement from the observations was reduced by 30%. As a result the number of event forecasts that could be considered “hits” (forecast rain location and maximum intensity close to the observed) improved markedly.

Averaging to produce the ensemble mean caused a large bias in rain area and a corresponding reduction in mean and maximum rain intensity. Several alternative deterministic ensemble forecasts were tested, with the most successful using probability matching to reassign the ensemble mean rain rates using the rain rate distribution of the component QPFs. This eliminated most of the excess rain area and increased the maximum rain rates, improving the event hit rate.
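
Probability matching, as used here, keeps the spatial pattern of the ensemble mean but replaces its rain-rate distribution with the pooled distribution of the component QPFs. A rough sketch of that rank reassignment (array shapes and names are illustrative):

    import numpy as np

    def probability_matched_mean(member_fields):
        """Ensemble-mean rain field with rain rates reassigned by probability matching.

        member_fields: array of shape (n_members, n_points) holding each member's
        rain rates on a common grid. The ensemble mean supplies the spatial
        ranking; the pooled member values, thinned to n_points order statistics,
        supply the reassigned magnitudes.
        """
        mean_field = member_fields.mean(axis=0)
        n_points = mean_field.size
        pooled = np.sort(member_fields.ravel())
        # Keep n_points evenly spaced order statistics from the pooled distribution.
        target = pooled[np.linspace(0, pooled.size - 1, n_points).astype(int)]
        # Rank of each grid point in the ensemble-mean field (0 = driest point).
        ranks = np.argsort(np.argsort(mean_field))
        return target[ranks]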

The dependence of the POP and ensemble mean results on the number of members included in the ensemble was investigated using the 24-h model QPFs. When ensemble members were selected randomly, the performance improved monotonically with increasing ensemble size, with verification statistics approaching their asymptotic limits for an ensemble size of seven. When the members were chosen according to greatest overall skill, the ensemble performance peaked when only five or six members were used. This suggests that the addition of ensemble members with lower skill can degrade the overall product. Low values of the spread–skill correlation indicate that it is not possible to predict the forecast skill from the spread of the ensemble alone. However, the number of models predicting a particular rain event gives a good indication of the likelihood that the ensemble will envelop the location and magnitude of that event.

Full access
Elizabeth E. Ebert
Full access
Chermelle Engel and Elizabeth Ebert

Abstract

This paper presents an extension of the operational consensus forecast (OCF) method, which performs a statistical correction of model output at sites followed by weighted average consensus on a daily basis. Numerical weather prediction (NWP) model forecasts are received from international centers at various temporal resolutions. To extend the OCF methodology to hourly temporal resolution, a method is described that blends multiple models regardless of their temporal resolution. The hourly OCF approach is used to generate forecasts of 2-m air temperature, dewpoint temperature, relative humidity (RH), mean sea level pressure derived from the barometric pressure at the station location (QNH), and 10-m wind speed and direction for 283 Australian sites. In comparison to a finescale hourly regional model, the hourly OCF process results in reductions in average mean square error of 47% (air temperature), 40% (dewpoint temperature), 43% (RH), 29% (QNH), 42% (wind speed), and 11% (wind direction) during February–March, with slightly higher reductions in May. As part of the development of the approach, the systematic and random nature of hourly NWP forecast errors is evaluated and found to vary with forecast hour, with a diurnal modulation overlaying the normal error growth with time. The site-based statistical correction of the model forecasts is found to include simple statistical downscaling. As such, the method is found to be most appropriate for meteorological variables that vary systematically with spatial resolution.
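
A minimal sketch of the correct-then-combine idea at a single site (the training window, the inverse-MAE weighting, and all names below are illustrative assumptions, not the operational OCF configuration):

    import numpy as np

    def consensus_forecast(latest_forecasts, past_forecasts, past_obs):
        """Bias-correct each model at a site, then form a weighted consensus.

        latest_forecasts: dict model_name -> current forecast value at the site
        past_forecasts:   dict model_name -> array of recent forecasts (training window)
        past_obs:         array of matching recent observations at the site
        """
        corrected, weights = {}, {}
        for name, value in latest_forecasts.items():
            errors = np.asarray(past_forecasts[name]) - np.asarray(past_obs)
            corrected[name] = value - errors.mean()                  # remove recent mean bias
            weights[name] = 1.0 / max(np.abs(errors).mean(), 1e-6)   # weight by inverse recent MAE
        total_weight = sum(weights.values())
        return sum(weights[n] * corrected[n] for n in corrected) / total_weight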

Full access
Elizabeth E. Ebert

Abstract

High-resolution forecasts may be quite useful even when they do not match the observations exactly. Neighborhood verification is a strategy for evaluating the “closeness” of the forecast to the observations within space–time neighborhoods rather than at the grid scale. Various properties of the forecast within a neighborhood can be assessed for similarity to the observations, including the mean value, fractional coverage, occurrence of a forecast event sufficiently near an observed event, and so on. By varying the sizes of the neighborhoods, it is possible to determine the scales for which the forecast has sufficient skill for a particular application. Several neighborhood verification methods have been proposed in the literature in the last decade. This paper examines four such methods in detail for idealized and real high-resolution precipitation forecasts, highlighting what can be learned from each of the methods. When applied to idealized and real precipitation forecasts from the Spatial Verification Methods Intercomparison Project, all four methods showed improved forecast performance for neighborhood sizes larger than grid scale, with the optimal scale for each method varying as a function of rainfall intensity.
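
As a concrete example of the fractional-coverage style of neighborhood verification (a sketch in the spirit of the fractions skill score; the names and the choice of a square neighborhood are illustrative):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill(forecast, observed, threshold, neighborhood):
        """Fractions-skill-score-style score comparing event coverage in neighborhoods.

        forecast, observed: 2D precipitation fields on the same grid.
        threshold:          rain rate defining an "event".
        neighborhood:       side length (in grid points) of the square neighborhood.
        """
        f_frac = uniform_filter((forecast >= threshold).astype(float), size=neighborhood)
        o_frac = uniform_filter((observed >= threshold).astype(float), size=neighborhood)
        mse = np.mean((f_frac - o_frac) ** 2)
        mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan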

Full access
Chermelle Engel and Elizabeth E. Ebert

Abstract

This paper describes an extension of an operational consensus forecasting (OCF) scheme from site forecasts to gridded forecasts. OCF is a multimodel consensus scheme including bias correction and weighting. Bias correction and weighting are done on a scale common to almost all multimodel inputs (1.25°), which are then downscaled using a statistical approach to an approximately 5-km-resolution grid. Local and international numerical weather prediction model inputs are found to have coarse scale biases that respond to simple bias correction, with the weighted average consensus at 1.25° outperforming all models at that scale. Statistical downscaling is found to remove the systematic representativeness error when downscaling from 1.25° to 5 km, though it cannot resolve scale differences associated with transient small-scale weather.
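
The statistical downscaling step can be pictured as adding back a fixed fine-scale offset, learned from past coarse forecasts and fine-scale analyses, that captures systematic representativeness differences such as terrain-driven contrasts. A hedged sketch under that assumption, not the operational implementation:

    import numpy as np

    def learn_downscaling_offset(coarse_hindcasts, fine_analyses):
        """Mean fine-minus-coarse difference at each fine grid point.

        coarse_hindcasts: array (n_days, ny, nx) of past coarse forecasts already
                          interpolated onto the fine (~5 km) grid.
        fine_analyses:    array (n_days, ny, nx) of matching fine-scale analyses.
        """
        return (fine_analyses - coarse_hindcasts).mean(axis=0)

    def downscale(coarse_forecast_on_fine_grid, offset):
        """Add the learned systematic offset to a new, interpolated coarse forecast."""
        return coarse_forecast_on_fine_grid + offset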

Full access
Judith A. Curry and Elizabeth E. Ebert

Abstract

The relationship between cloud optical properties and the radiative fluxes over the Arctic Ocean is explored by conducting a series of modeling experiments. The annual cycle of arctic cloud optical properties required to reproduce both the outgoing radiative fluxes at the top of the atmosphere as determined from satellite observations and the available determinations of surface radiative fluxes is derived. Existing data on cloud fraction and cloud microphysical properties are utilized. Four types of cloud are considered: low stratus clouds, midlevel clouds, cirrus clouds, and wintertime ice crystal precipitation. Internally consistent annual cycles of surface temperature, surface albedo, cloud fraction and cloud optical properties, components of surface and top of atmosphere radiative fluxes, and cloud radiative forcing are presented.

The modeled total cloud optical depth (weighted by cloud fraction) ranges from a wintertime low of 2 to a summertime high of 8. Infrared emissivities for liquid water clouds are shown to be substantially less than unity during the cold half of the year. Values of modeled surface cloud radiative forcing are positive except for two weeks in midsummer; over the course of the year clouds have a net warming effect on the surface in the Arctic. Total cloud radiative forcing at the top of the atmosphere is determined to be positive only briefly in early autumn. Surface longwave fluxes are shown to be very sensitive to the presence of lower-tropospheric ice crystal precipitation during the cold half of the year.
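
Cloud radiative forcing here follows the usual convention, the all-sky minus clear-sky net radiative flux, so a positive surface value indicates that clouds warm the surface:

    \mathrm{CRF} = F_{\mathrm{net}}^{\text{all sky}} - F_{\mathrm{net}}^{\text{clear sky}}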

Full access
Andrew Brown, Andrew Dowdy, and Elizabeth E. Ebert

Abstract

Epidemic asthma events represent a significant risk to emergency services as well as the wider community. In southeastern Australia, these events occur in conjunction with relatively high amounts of grass pollen during the late spring and early summer, which may become concentrated in populated areas through atmospheric convergence caused by a number of physical mechanisms, including thunderstorm outflow. Thunderstorm forecasts are therefore important for identifying epidemic asthma risk factors. However, the representation of thunderstorm environments in regional numerical weather prediction models, which are a key aspect of the construction of these forecasts, has not yet been systematically evaluated in the context of epidemic asthma events. Here, we evaluate diagnostics of thunderstorm environments from historical simulations of weather conditions in the vicinity of Melbourne, Australia, in relation to the identification of epidemic asthma cases, based on hospital data, from a set of controls. Skillful identification of epidemic asthma cases is achieved using a thunderstorm diagnostic that describes near-surface water vapor mixing ratio. This diagnostic is then used to gain insights into the variability of meteorological environments related to epidemic asthma in this region, including diurnal variations, long-term trends, and the relationship with large-scale climate drivers. Results suggest that there has been a long-term increase in days with high water vapor mixing ratio during the grass pollen season, with large-scale climate drivers having a limited influence on these conditions.
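
The near-surface water vapor mixing ratio can be obtained from routinely observed dewpoint and pressure; a minimal sketch using the Bolton (1980) saturation vapour pressure approximation (function and variable names are illustrative):

    import numpy as np

    def mixing_ratio_g_per_kg(dewpoint_c, pressure_hpa):
        """Near-surface water vapor mixing ratio (g/kg) from dewpoint (degC) and pressure (hPa)."""
        # Vapour pressure at the dewpoint, Bolton (1980) approximation, in hPa.
        e = 6.112 * np.exp(17.67 * dewpoint_c / (dewpoint_c + 243.5))
        # Mixing ratio in kg/kg, converted to g/kg.
        return 1000.0 * 0.622 * e / (pressure_hpa - e)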

Significance Statement

We investigate the atmospheric conditions associated with epidemic thunderstorm asthma events in Melbourne, Australia, using historical model simulations of the weather. Conditions appear to be associated with high atmospheric moisture content, which relates to environments favorable for severe thunderstorms, but also potentially pollen rupturing as suggested by previous studies. These conditions are shown to be just as important as the concentration of grass pollen for a set of epidemic thunderstorm asthma events in this region. This means that weather model simulations of thunderstorm conditions can be incorporated into the forecasting process for epidemic asthma in Melbourne, Australia. We also investigate long-term variability in atmospheric conditions associated with severe thunderstorms, including relationships with the large-scale climate and long-term trends.

Open access
Elizabeth E. Ebert and Greg J. Holland

Abstract

A detailed analysis is made of the development of a region of cold cloud-top temperatures in Tropical Cyclone Hilda (1990) in the Coral Sea off eastern Australia. Observed temperatures of approximately 173 K (−100°C) from two independent satellite sources indicate that the convective turrets penetrated well into the stratosphere to an estimated height of around 19.2 km.

The analytical parcel model of Schlesinger is used, together with available observations from the cyclone vicinity, to estimate the convective updrafts required to produce the observed stratospheric penetration. Under realistic assumptions of entrainment and hydrometeor drag, an updraft speed of between 15 and 38 m s−1 at tropopause level is required to provide the observed stratospheric penetration. Independent calculations using observed anvil expansion and environmental CAPE (convective available potential energy) support these updraft findings.
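
As a rough order-of-magnitude check using simple undilute parcel theory (not the Schlesinger analytical model used in the paper), the peak updraft attainable from a given CAPE is w_max = sqrt(2 × CAPE):

    import math

    def undilute_peak_updraft(cape_j_per_kg):
        """Parcel-theory upper bound on updraft speed (m/s): w_max = sqrt(2 * CAPE)."""
        return math.sqrt(2.0 * cape_j_per_kg)

    def cape_needed_for(updraft_m_per_s):
        """CAPE (J/kg) needed to reach a given updraft speed with no
        entrainment or hydrometeor loading."""
        return 0.5 * updraft_m_per_s ** 2

    # For example, cape_needed_for(38.0) is roughly 722 J/kg; entrainment and
    # hydrometeor drag (as in the paper's calculations) raise the CAPE actually required.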

Full access