Search Results

You are looking at 1–5 of 5 items for

  • Author or Editor: L. Delle Monache
  • All content
James O. Pinto, Andrew J. Monaghan, Luca Delle Monache, Emilie Vanvyve, and Daran L. Rife

Abstract

Dynamical downscaling is a computationally expensive method whereby finescale details of the atmosphere may be portrayed by running a limited-area numerical weather prediction model (often called a regional climate model) nested within coarse-resolution global reanalysis or global climate model output. The goal of this study is to assess whether sampling techniques can be used to dynamically downscale only a small subset of days and still approximate the statistical properties of the entire period of interest. Two sampling techniques are explored: one in which days are randomly selected and another in which representative days are chosen (or targeted) based on a set of selection criteria. The relative merit of random sampling versus targeted random sampling is demonstrated using daily mean 2-m air temperature (T2M). The first two moments of dynamically downscaled T2M can be approximated to within 0.3 K using just 5% of the population of available days during a 20-yr period. Targeted random sampling can reduce the mean absolute error of these estimates by as much as 30% locally. Estimating the more extreme values of T2M is more uncertain and requires a larger sample size. The potential reduction in computational cost afforded by these sampling techniques could greatly benefit applications requiring high-resolution, dynamically downscaled depictions of regional climate, including situations in which an ensemble of regional climate simulations is required to properly characterize uncertainty in model physics assumptions, scenarios, and so on.
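
As a rough illustration of the two sampling strategies compared in this abstract, the sketch below estimates the mean and standard deviation of a synthetic daily T2M series from a 5% sample of days, once drawn at random and once targeted at evenly spaced climatological quantiles. The synthetic series, the quantile-based selection criterion, and the function names are hypothetical stand-ins, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for 20 years of daily-mean 2-m temperature (K) at one
# location; in the paper each value would come from a dynamically downscaled day.
n_days = 20 * 365
t2m = (288.0 + 10.0 * np.sin(2.0 * np.pi * np.arange(n_days) / 365.25)
       + rng.normal(0.0, 3.0, n_days))

def random_sample_estimate(t2m, frac=0.05):
    """Estimate the mean/std of T2M from a randomly chosen `frac` of all days."""
    idx = rng.choice(t2m.size, size=int(frac * t2m.size), replace=False)
    return t2m[idx].mean(), t2m[idx].std(ddof=1)

def targeted_sample_estimate(t2m, frac=0.05):
    """Targeted sampling: pick the days whose coarse-resolution predictor
    (here T2M itself, as a proxy) falls closest to evenly spaced quantiles of
    the climatology, so the sample spans the full range of conditions."""
    n = int(frac * t2m.size)
    targets = np.quantile(t2m, np.linspace(0.0, 1.0, n))
    order = np.argsort(t2m)
    pos = np.searchsorted(t2m[order], targets).clip(0, t2m.size - 1)
    sample = t2m[np.unique(order[pos])]
    return sample.mean(), sample.std(ddof=1)

print("all days   :", t2m.mean(), t2m.std(ddof=1))
print("random 5%  :", random_sample_estimate(t2m))
print("targeted 5%:", targeted_sample_estimate(t2m))
```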

Luca Delle Monache, F. Anthony Eckel, Daran L. Rife, Badrinath Nagarajan, and Keith Searight

Abstract

This study explores an analog-based method to generate an ensemble [analog ensemble (AnEn)] in which the probability distribution of the future state of the atmosphere is estimated with a set of past observations that correspond to the best analogs of a deterministic numerical weather prediction (NWP). An analog for a given location and forecast lead time is defined as a past prediction, from the same model, that has similar values for selected features of the current model forecast. The AnEn is evaluated for 0–48-h probabilistic predictions of 10-m wind speed and 2-m temperature over the contiguous United States, against observations provided by 550 surface stations, over the period 23 April–31 July 2011. The AnEn is generated from the Environment Canada (EC) deterministic Global Environmental Multiscale (GEM) model and a 12–15-month-long training period of forecasts and observations. The skill and value of AnEn predictions are compared with forecasts from a state-of-the-science NWP ensemble system, the 21-member Regional Ensemble Prediction System (REPS). The AnEn exhibits high statistical consistency and reliability and captures the flow-dependent behavior of errors, and it has equal or superior skill and value compared with forecasts generated via logistic regression (LR) applied to both the deterministic GEM (as in the AnEn) and the REPS [ensemble model output statistics (EMOS)]. The real-time computational cost of the AnEn and LR is lower than that of EMOS.
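
A minimal sketch of the analog search at the heart of the AnEn, for a single station and lead time: past forecasts are ranked by a spread-normalized, weighted distance to the current forecast, and the observations that verified the closest matches become the ensemble members. The predictors, weights, and member count below are illustrative assumptions; the published method also matches forecasts over a short time window around the lead time, which is omitted here.

```python
import numpy as np

def analog_ensemble(current_fcst, past_fcsts, past_obs, weights, n_members=20):
    """Minimal AnEn sketch for one station and one forecast lead time.

    current_fcst : (n_features,) predictors from today's deterministic run
    past_fcsts   : (n_past, n_features) the same predictors from archived runs
    past_obs     : (n_past,) observations that verified those past forecasts
    weights      : (n_features,) relative importance of each predictor
    """
    # Normalize each predictor by its historical spread so that features with
    # different units are comparable, then rank past forecasts by distance.
    sigma = past_fcsts.std(axis=0) + 1e-12
    dist = np.sqrt((((past_fcsts - current_fcst) / sigma * weights) ** 2).sum(axis=1))
    best = np.argsort(dist)[:n_members]
    # The empirical distribution of these observations is the probabilistic forecast.
    return past_obs[best]

# Hypothetical demo with 2 predictors: 10-m wind speed (m/s) and 2-m temperature (K).
rng = np.random.default_rng(0)
past_fcsts = rng.normal([6.0, 285.0], [2.0, 5.0], size=(400, 2))
past_obs = past_fcsts[:, 0] + rng.normal(0.0, 1.0, 400)  # verifying wind observations
members = analog_ensemble(np.array([7.5, 288.0]), past_fcsts, past_obs,
                          weights=np.array([1.0, 0.5]))
print("AnEn wind speed members (m/s):", np.round(members, 1))
```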

A. Cobb, A. Michaelis, S. Iacobellis, F. M. Ralph, and L. Delle Monache

Abstract

Atmospheric rivers (ARs) are responsible for intense winter rainfall events impacting the U.S. West Coast and have been studied extensively during the CalWater and AR Recon field programs (2014–20). A unique set of 858 dropsondes deployed in lines transecting 33 ARs is analyzed, and integrated vapor transport (IVT) is used to define five regions: core, cold and warm sectors, and non-AR cold and warm sides. The core is defined as having at least 80% of the maximum IVT in the transect. Remaining dropsondes with IVT > 250 kg m−1 s−1 are assigned to the cold or warm sector, and those below this threshold form the non-AR sides. The mean widths of the three AR sectors are approximately 280 km. However, the core contains roughly 50% of all the water vapor transport (i.e., the total IVT), while the other sectors each contain roughly 25%. A low-level jet occurs most often in the core and warm sector, with mean maximum wind speeds of 28.3 and 21.7 m s−1, comparable to previous studies, although at heights approximately 300 m lower than previously reported. The core exhibits the characteristics most favorable for adiabatic lifting to saturation by the California coastal range. On average, stability in the core is moist neutral, with considerable variability around the mean. A relaxed squared moist Brunt–Väisälä frequency threshold shows that ~8%–12% of core profiles exhibit near-moist neutrality. The vertical distribution of IVT, which modulates orographic precipitation, varies across AR sectors, with 75% of IVT residing below 3115 m in the core.
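
The five-region partition described in this abstract can be expressed compactly. The sketch below labels the dropsondes of one transect from their IVT values alone, using the two thresholds given above (80% of the transect maximum for the core; 250 kg m−1 s−1 for the AR sectors). It assumes the sondes are ordered along the flight track from the cold side to the warm side, which is a simplification of how the sides are actually distinguished; the IVT values are invented for the demo.

```python
import numpy as np

def classify_transect(ivt, core_frac=0.8, ar_threshold=250.0):
    """Label each dropsonde in one transect with its AR region, using the two
    IVT thresholds from the abstract.

    ivt : (n_sondes,) integrated vapor transport (kg m-1 s-1), assumed here to
          be ordered along the flight track from the cold side to the warm side.
    """
    core = ivt >= core_frac * ivt.max()  # within 80% of the transect maximum
    first_core = np.flatnonzero(core)[0]
    labels = []
    for i, v in enumerate(ivt):
        if core[i]:
            labels.append("core")
        elif v > ar_threshold:
            labels.append("cold sector" if i < first_core else "warm sector")
        else:
            labels.append("non-AR cold side" if i < first_core else "non-AR warm side")
    return labels

# Hypothetical IVT values from a single transect:
ivt = np.array([120.0, 310.0, 620.0, 780.0, 700.0, 420.0, 180.0])
for v, label in zip(ivt, classify_transect(ivt)):
    print(f"{v:6.0f}  {label}")
```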

Badrinath Nagarajan, Luca Delle Monache, Joshua P. Hacker, Daran L. Rife, Keith Searight, Jason C. Knievel, and Thomas N. Nipen

Abstract

Recently, two analog-based postprocessing methods were demonstrated to reduce the systematic and random errors in Weather Research and Forecasting (WRF) Model predictions of 10-m wind speed over the central United States. To test robustness and generality, and to gain a deeper understanding of postprocessing forecasts with analogs, this paper expands upon that work by applying both analog methods to surface stations evenly distributed across the conterminous United States over a 1-yr period. The Global Forecast System (GFS), North American Mesoscale Forecast System (NAM), and Rapid Update Cycle (RUC) forecasts for screen-height wind, temperature, and humidity are postprocessed with the two analog-based methods and with two time series–based methods: a running mean bias correction and an algorithm inspired by the Kalman filter. Forecasts are evaluated according to a range of metrics, including random and systematic error components and correlation, and by conditioning the error distributions on lead time, location, error magnitude, and day-to-day error variability.

Results show that the analog methods are generally more effective than the time series–based methods at reducing the random error component, leading to an overall reduction in root-mean-square error. Details differ among the methods and are elucidated in this study. The relative levels of random and systematic error in the raw forecasts determine, to a large extent, the effectiveness of each postprocessing method in reducing forecast errors. When the errors are dominated by random errors (e.g., where thunderstorms are common), the analog-based methods far outperform the time series–based methods. When the errors are strictly systematic (i.e., a bias), the analog methods lose their advantage over the time series methods. It is shown that slowly evolving systematic errors rarely dominate, so reducing the random error component is most effective at reducing the error magnitude. The results are shown to be valid for all seasons. The analog methods perform similarly to the operational model output statistics (MOS) while reducing random errors more at certain lead times.
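
The two time series–based corrections this abstract compares against are straightforward to sketch. Below, a running-mean bias correction and a fixed-gain recursive update in the spirit of a Kalman filter are applied to a forecast series; the window length, gain, and demo data are illustrative assumptions, not the study's settings (the actual Kalman-filter-based algorithm adapts its gain from the evolving error statistics).

```python
import numpy as np

def running_mean_correction(fcst, obs, window=14):
    """Subtract the mean forecast-minus-observation error of the previous
    `window` days from each new forecast."""
    corrected = fcst.astype(float).copy()
    for t in range(1, fcst.size):
        start = max(0, t - window)
        corrected[t] = fcst[t] - np.mean(fcst[start:t] - obs[start:t])
    return corrected

def kalman_like_correction(fcst, obs, gain=0.15):
    """Recursive bias estimate: each day the running bias is nudged toward
    the latest error by a fixed gain (a simplification of the adaptive-gain
    algorithm used in the study)."""
    bias = 0.0
    corrected = np.empty(fcst.size)
    for t in range(fcst.size):
        corrected[t] = fcst[t] - bias               # correct with yesterday's bias
        bias += gain * ((fcst[t] - obs[t]) - bias)  # update once obs verifies
    return corrected

# Hypothetical demo: a wind speed forecast with a +1.5 m/s systematic bias.
rng = np.random.default_rng(1)
obs = 8.0 + rng.normal(0.0, 2.0, 120)
fcst = obs + 1.5 + rng.normal(0.0, 1.0, 120)
for name, f in [("raw", fcst),
                ("running mean", running_mean_correction(fcst, obs)),
                ("Kalman-like", kalman_like_correction(fcst, obs))]:
    print(f"{name:13s} RMSE: {np.sqrt(np.mean((f - obs) ** 2)):.2f}")
```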

Cristina L. Archer, Brian A. Colle, Luca Delle Monache, Michael J. Dvorak, Julie Lundquist, Bruce H. Bailey, Philippe Beaucage, Matthew J. Churchfield, Anna C. Fitch, Branko Kosovic, Sang Lee, Patrick J. Moriarty, Hugo Simao, Richard J. A. M. Stevens, Dana Veron, and John Zack