Journal of Applied Meteorology and Climatology
Christopher P. Loughner, Benjamin Fasoli, Ariel F. Stein, and John C. Lin

Abstract

The Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT) is a state-of-the-science atmospheric dispersion model that is developed and maintained at the National Oceanic and Atmospheric Administration’s Air Resources Laboratory. In the early 2000s, HYSPLIT served as the starting point for development of the Stochastic Time-Inverted Lagrangian Transport (STILT) model, which emphasizes backward-in-time dispersion simulations to determine the source regions of receptors. STILT continued its separate development and gained a wide user base. Because STILT was built on a now outdated version of HYSPLIT and lacks long-term institutional support to maintain the model, incorporating STILT features into HYSPLIT allows these features to stay up to date. This paper describes the STILT features incorporated into HYSPLIT, which include a new vertical interpolation algorithm for WRF-derived meteorological input files, a detailed algorithm for estimating boundary layer height, a new turbulence parameterization, a vertical Lagrangian time scale that varies in time and space, a complex dispersion algorithm, and two new convection schemes. An evaluation of these new features was performed using tracer release data from the Cross Appalachian Tracer Experiment and the Across North America Tracer Experiment. Results show that the dispersion module from STILT, which takes up to twice as long to run, is less dispersive in the vertical direction and is in better agreement with observations than the existing HYSPLIT option. The other new modeling features from STILT were not consistently statistically different from existing HYSPLIT options. Forward-time simulations from the new model were also compared with their backward-in-time equivalents and were found to be statistically comparable to one another.
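To make the dispersion machinery concrete, the following is a minimal sketch (not HYSPLIT's or STILT's actual code) of the kind of stochastic vertical-velocity update a Lagrangian particle dispersion model performs, where a Lagrangian time scale, here allowed to be passed per step and therefore to vary in time and space, controls the autocorrelation of a particle's turbulent velocity. All parameter values are illustrative assumptions.

```python
import math
import random

def vertical_step(w_prev, dt, t_lagrangian, sigma_w):
    """One Markov-chain update of a particle's turbulent vertical
    velocity w (m/s). r is the velocity autocorrelation over time
    step dt; the random increment keeps the variance at sigma_w**2."""
    r = math.exp(-dt / t_lagrangian)
    return r * w_prev + math.sqrt(1.0 - r * r) * random.gauss(0.0, sigma_w)

# Illustrative: advance one particle for an hour with a 10 s time step.
random.seed(0)
z, w = 500.0, 0.0                  # height (m), turbulent velocity (m/s)
for _ in range(360):
    w = vertical_step(w, dt=10.0, t_lagrangian=100.0, sigma_w=0.5)
    z = max(0.0, z + w * 10.0)     # crude lower bound at the ground
```

A real model would add reflection at the surface, inhomogeneity corrections, and horizontal components; this shows only the role the Lagrangian time scale plays.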

Open access
Brittany N. Carson-Marquis, Jianglong Zhang, Peng Xian, Jeffrey S. Reid, and Jared Marquis

Abstract

When unaccounted for in numerical weather prediction (NWP) models, heavy aerosol events can cause significant unrealized biases in forecasted meteorological parameters such as surface temperature. To improve near-surface forecasting accuracy during heavy aerosol loadings, we demonstrate the feasibility of incorporating aerosol fields from a global chemical transport model as initial and boundary conditions into a higher-resolution NWP model with aerosol–meteorological coupling. This concept is tested for a major biomass burning smoke event over the Northern Great Plains region of the United States that occurred during the summer of 2015. Aerosol analyses from the global Navy Aerosol Analysis and Prediction System (NAAPS) are used as initial and boundary conditions for Weather Research and Forecasting with Chemistry (WRF-Chem) simulations. By incorporating more realistic aerosol direct effects into the WRF-Chem simulations, errors in WRF-Chem-simulated surface downward shortwave radiative fluxes and near-surface temperature are reduced compared with surface-based observations. This study confirms the ability to decrease biases induced by the aerosol direct effect for regional NWP forecasts during high-impact aerosol episodes through the incorporation of analyses and forecasts from a global aerosol transport model.

Open access
Maike F. Holthuijzen, Brian Beckage, Patrick J. Clemins, Dave Higdon, and Jonathan M. Winter

Abstract

High-resolution, bias-corrected climate data are necessary for climate impact studies at local scales. Gridded historical data are convenient for bias correction but may contain biases resulting from interpolation. Long-term, quality-controlled station data are generally superior climatological measurements, but because the distribution of climate stations is irregular, station data are challenging to incorporate into downscaling and bias-correction approaches. Here, we compared six novel methods for constructing full-coverage, high-resolution, bias-corrected climate products using daily maximum temperature simulations from a regional climate model (RCM). Only station data were used for bias correction. We quantified the performance of the six methods with the root-mean-square error (RMSE) and Perkins skill score (PSS) and used two ANOVA models to analyze how performance varied among methods. We validated the six methods using two calibration periods of observed data (1980–89 and 1980–2014) and two testing sets of RCM data (1990–2014 and 1980–2014). RMSE for all methods varied throughout the year and was larger in cold months, whereas PSS was more consistent. Quantile-mapping bias-correction techniques substantially improved PSS, while simple linear transfer functions performed best in improving RMSE. For the 1980–89 calibration period, simple quantile-mapping techniques outperformed empirical quantile mapping (EQM) in improving PSS. When calibration and testing time periods were equivalent, EQM resulted in the largest improvements in PSS. No one method performed best in both RMSE and PSS. Our results indicate that simple quantile-mapping techniques are less prone to overfitting than EQM and are suitable for processing future climate model output, whereas EQM is ideal for bias-correcting historical climate model output.
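As a concrete illustration of the bias-correction family compared here, below is a bare-bones empirical quantile mapping (EQM): each model value is mapped through the calibration-period model distribution onto the observed quantile function. This is a generic sketch with synthetic data, not the authors' implementation.

```python
import numpy as np

def eqm_correct(model_cal, obs_cal, model_new, n_q=101):
    """Empirical quantile mapping: find each new model value's
    probability in the calibration-period model distribution, then
    return the observed value at that same probability."""
    q = np.linspace(0.0, 1.0, n_q)
    mq = np.quantile(model_cal, q)          # model quantile function
    oq = np.quantile(obs_cal, q)            # observed quantile function
    ranks = np.interp(model_new, mq, q)     # model value -> probability
    return np.interp(ranks, q, oq)          # probability -> obs value

rng = np.random.default_rng(1)
obs = rng.normal(10.0, 3.0, 5000)           # synthetic "observed" Tmax (deg C)
mod = rng.normal(12.0, 4.0, 5000)           # synthetic biased "model" Tmax
corrected = eqm_correct(mod, obs, mod)      # warm bias and excess spread removed
```

The overfitting concern noted above arises because EQM memorizes the full empirical CDF of the calibration period; simpler parametric transfer functions impose fewer degrees of freedom.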

Open access
Cristian Muñoz and David M. Schultz

Abstract

A study of 500-hPa cutoff lows in central Chile during 1979–2017 was conducted to contrast cutoff lows associated with the lowest quartile of daily precipitation amounts (LOW25) with cutoff lows associated with the highest quartile (HIGH25). To understand the differences between low- and high-precipitation cutoff lows, daily precipitation records, radiosonde observations, and reanalyses were used to analyze the three ingredients necessary for deep moist convection (instability, lift, and moisture) at the eastern and equatorial edge of these lows. Instability was generally small, when present at all, and showed no major differences between LOW25 and HIGH25 events. Synoptic-scale ascent associated with Q-vector convergence also showed little difference between LOW25 and HIGH25 events. In contrast, the moisture distribution around LOW25 and HIGH25 cutoff lows was different, with a moisture plume that was more defined and more intense equatorward of HIGH25 cutoff lows, whereas for LOW25 cutoff lows the moisture plume occurred poleward of the cutoff low. The presence of the moisture plume equatorward of HIGH25 cutoff lows may have contributed to the shorter persistence of HIGH25 events by providing a source for latent-heat release when the moisture plume reached the windward side of the Andes. Indeed, whereas 48% of LOW25 cutoff lows persisted for longer than 72 h, only 25% of HIGH25 cutoff lows did, despite both systems occurring mostly during the rainy season (May–September). The occurrence of a moisture plume on the eastern and equatorial edge of cutoff lows is fairly common during high-impact precipitation events, and this mechanism could help to explain high-impact precipitation where the occurrence of cutoff lows and moisture plumes is frequent.

Open access
Joseph Sedlar, Laura D. Riihimaki, Kathleen Lantz, and David D. Turner

Abstract

Various methods have been developed to characterize cloud type, otherwise referred to as cloud regime. These include manual sky observations, combining radiative and cloud vertical properties observed from satellite, surface-based remote sensing, and digital processing of sky imagers. While each method has inherent advantages and disadvantages, none of these cloud-typing methods actually includes measurements of surface shortwave or longwave radiative fluxes. Here, a method that relies upon detailed, surface-based radiation and cloud measurements and derived data products to train a random-forest machine-learning cloud classification model is introduced. Measurements from five years of data from the ARM Southern Great Plains site were compiled to train and independently evaluate the model classification performance. A cloud-type accuracy of approximately 80% using the random-forest classifier reveals that the model is well suited to predict climatological cloud properties. Furthermore, an analysis of the cloud-type misclassifications is performed. While physical cloud types may be misreported, the shortwave radiative signatures are similar between misclassified cloud types. From this, we assert that the cloud-regime model has the capacity to successfully differentiate clouds with comparable cloud–radiative interactions. Therefore, we conclude that the model can provide useful cloud-property information for fundamental cloud studies, inform renewable energy studies, and be a tool for numerical model evaluation and parameterization improvement, among many other applications.
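A toy version of the classification setup described: surface radiative features feeding a random-forest cloud-type classifier. The feature names, the synthetic data, and the label-generating rule below are invented purely to make the example runnable; they stand in for the ARM radiation and cloud measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: SW transmissivity, LW cloud effect, SW variability.
X = rng.uniform(size=(n, 3))
# Hypothetical rule tying a "cloud type" (0, 1, 2) to the features, so the
# labels are learnable; real labels would come from independent cloud typing.
y = np.digitize(X[:, 0] - 0.3 * X[:, 1], [0.3, 0.6])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # held-out accuracy, analogous to the ~80% quoted
```

The misclassification analysis described above corresponds to inspecting the off-diagonal entries of the confusion matrix and asking whether the confused classes share similar shortwave signatures.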

Open access
William A. Gough

Abstract

A newly developed precipitation phase metric is used to detect the impact of urbanization on the nature of precipitation at Toronto, Ontario, Canada, by contrasting the relative amounts of rain and snow. A total of 162 years of observed precipitation data were analyzed to classify the nature of winter-season precipitation for the city of Toronto. In addition, shorter records were examined for nearby climate stations in less-urbanized areas in and near Toronto. For Toronto, all winters from 1849 to 2010 as well as three climate normal periods (1961–90, 1971–2000, and 1981–2010) were thus categorized for the Toronto climate record. The results show that Toronto winters have become increasingly “rainy” across these time periods in a statistically significant fashion, consistent with a warming climate. Toronto was compared with the other less urban sites to tease out the impacts of the urban heat island from larger-scale warming. This yielded an estimate of 19%–27% of the Toronto shift in precipitation type (from snow to rain) that can be attributed to urbanization for coincident time periods. Other regions characterized by similar climates and urbanization with temperatures near the freezing point are likely to experience similar climatic changes expressed as a change in the phase of winter-season precipitation.
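The phase contrast at the heart of this analysis reduces to a simple ratio; here is a hedged sketch that computes the rain share of winter precipitation from daily totals, assuming each day is already flagged rain or snow (the paper's actual metric may be defined differently).

```python
def rain_fraction(daily):
    """daily: iterable of (amount_mm, phase) pairs with phase 'rain' or
    'snow' (snow as water equivalent). Returns the rain share of total
    winter precipitation; an increase over decades indicates 'rainier'
    winters."""
    rain = sum(amount for amount, phase in daily if phase == "rain")
    total = sum(amount for amount, _ in daily)
    return rain / total if total else float("nan")

# Illustrative winter: 10 mm as rain out of 25 mm total.
winter = [(5.0, "snow"), (2.5, "rain"), (10.0, "snow"), (7.5, "rain")]
frac = rain_fraction(winter)   # 10.0 / 25.0 = 0.4
```

Comparing this fraction across climate-normal periods, and between urban and nearby rural stations, mirrors the urban-versus-regional attribution described above.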

Open access
Dario Ruggiu, Francesco Viola, and Andreas Langousis

Abstract

We develop a nonparametric procedure to assess the accuracy of the normality assumption for annual rainfall totals (ART), based on the marginal statistics of daily rainfall. The procedure is addressed to practitioners and hydrologists who operate in data-poor regions. To do so we use 1) goodness-of-fit metrics to conclude on the approximate convergence of the empirical distribution of annual rainfall totals to a normal shape and classify 3007 daily rainfall time series from the NOAA/NCDC Global Historical Climatology Network database, with at least 30 years of recordings, into Gaussian (G) and non-Gaussian (NG) groups; 2) logistic regression analysis to identify the statistics of daily rainfall that are most descriptive of the G/NG classification; and 3) a random-search algorithm to conclude on a set of constraints that allows classification of ART samples on the basis of the marginal statistics of daily rain rates. The analysis shows that the Anderson–Darling (AD) test statistic is the most conservative one in determining approximate Gaussianity of ART samples (followed by Cramér–von Mises and Lilliefors’s version of Kolmogorov–Smirnov) and that daily rainfall time series with fraction of wet days f_wd < 0.1 and daily skewness coefficient of positive rain rates sk_wd > 5.92 deviate significantly from the normal shape. In addition, we find that continental climate (type D) exhibits the highest fraction of Gaussian-distributed ART samples (i.e., 74.45%; AD test at α = 5% significance level), followed by warm temperate (type C; 72.80%), equatorial (type A; 68.83%), polar (type E; 62.96%), and arid (type B; 60.29%) climates.
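The screening rule quoted above can be sketched directly: flag a daily series as likely non-Gaussian in its annual totals when the wet-day fraction is below 0.1 and the skewness of positive daily rain rates exceeds 5.92. The thresholds are the ones stated in the abstract; the data below are invented for illustration.

```python
import numpy as np

def likely_non_gaussian(daily_mm, wet_thresh=0.1, skew_thresh=5.92):
    """Apply the quoted constraints: f_wd < wet_thresh AND skewness of
    wet-day rain rates > skew_thresh flags the series as non-Gaussian."""
    daily = np.asarray(daily_mm, dtype=float)
    wet = daily[daily > 0.0]
    if wet.size < 3:
        return True                       # too few wet days to say otherwise
    f_wd = wet.size / daily.size
    m, s = wet.mean(), wet.std()
    skew = ((wet - m) ** 3).mean() / s**3 if s > 0 else 0.0
    return f_wd < wet_thresh and skew > skew_thresh

# A humid record (many moderately skewed wet days) vs. an arid one
# (rare wet days dominated by a few extreme events).
rng = np.random.default_rng(2)
humid = np.where(rng.uniform(size=3650) < 0.4,
                 rng.gamma(2.0, 5.0, size=3650), 0.0)
arid = [0.0] * 3350 + [1.0] * 295 + [300.0] * 5
```

This matches the climate-type ranking reported above: arid (type B) records, with low f_wd and heavy-tailed wet-day amounts, are the least likely to yield Gaussian annual totals.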

Open access
Sybille Y. Schoger, Dmitri Moisseev, Annakaisa von Lerber, Susanne Crewell, and Kerstin Ebell

Abstract

Two power-law relations linking equivalent radar reflectivity factor Z_e and snowfall rate S are derived for a K-band Micro Rain Radar (MRR) and for a W-band cloud radar. For the development of these Z_e–S relationships, a dataset of calculated and measured variables is used. Surface-based video-disdrometer measurements were collected during snowfall events over five winters at the high-latitude site in Hyytiälä, Finland. The data from 2014 to 2018 include particle size distributions (PSD) and their fall velocities, from which snowflake masses were derived. The K- and W-band Z_e values are computed using these surface-based observations and snowflake scattering properties as provided by T-matrix and single-particle scattering tables, respectively. The uncertainty analysis shows that the K-band snowfall-rate estimation is significantly improved by including the intercept parameter N_0 of the PSD calculated from concurrent disdrometer measurements. If N_0 is used to adjust the prefactor of the Z_e–S relationship, the RMSE of the snowfall-rate estimate can be reduced from 0.37 to around 0.11 mm h⁻¹. For W-band radar, a Z_e–S relationship with constant parameters for all available snow events shows a similar uncertainty when compared with the method that includes the PSD intercept parameter. To demonstrate the performance of the proposed Z_e–S relationships, they are applied to measurements of the MRR and the W-band microwave radar for Arctic clouds at the Arctic research base operated by the German Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) and the French Polar Institute Paul Emile Victor (IPEV) (AWIPEV) in Ny-Ålesund, Svalbard, Norway. The resulting snowfall-rate estimates show good agreement with in situ snowfall observations, while other Z_e–S relationships from the literature reveal larger differences.
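The power-law link can be sketched as S = a · Z_e^b in linear reflectivity units, with the prefactor a optionally rescaled by the PSD intercept parameter N_0 as described above. The coefficient values and the exact form of the N_0 rescaling below are placeholders, not the paper's fitted relationships.

```python
def dbz_to_linear(dbz):
    """Convert reflectivity from dBZ to linear Z_e (mm^6 m^-3)."""
    return 10.0 ** (dbz / 10.0)

def snowfall_rate(dbz, a=0.05, b=0.6, n0=None, n0_ref=None):
    """S = a * Z_e**b (mm/h). If the PSD intercept N_0 from concurrent
    disdrometer data is supplied, rescale the prefactor by
    (n0 / n0_ref)**(1 - b); this generic exponent is an assumption,
    standing in for the paper's actual adjustment."""
    if n0 is not None and n0_ref is not None:
        a = a * (n0 / n0_ref) ** (1.0 - b)
    return a * dbz_to_linear(dbz) ** b

s_fixed = snowfall_rate(15.0)                        # fixed-coefficient estimate
s_adj = snowfall_rate(15.0, n0=2.0e4, n0_ref=1.0e4)  # N_0-adjusted estimate
```

The contrast between `s_fixed` and `s_adj` mirrors the finding that the K-band retrieval benefits from the N_0 adjustment while the W-band retrieval performs similarly with constant parameters.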

Open access
Heather MacDonald, Daniel W. McKenney, Xiaolan L. Wang, John Pedlar, Pia Papadopol, Kevin Lawrence, Yang Feng, and Michael F. Hutchinson

Abstract

This study presents spatial models (i.e., thin-plate spatially continuous spline surfaces) of adjusted precipitation for Canada at daily, pentad (5 day), and monthly time scales from 1900 to 2015. The input data include manual observations from 3346 stations that were adjusted previously to correct for snow water equivalent (SWE) conversion and various gauge-related issues. In addition to the 42 331 models for daily total precipitation and 1392 monthly total precipitation models, 8395 pentad models were developed for the first time, depicting mean precipitation for 73 pentads annually. For much of Canada, mapped precipitation values from this study were higher than those from the corresponding unadjusted models (i.e., models fitted to the unadjusted data), reflecting predominantly the effects of the adjustments to the input data. Error estimates compared favorably to the corresponding unadjusted models. For example, the root generalized cross-validation (GCV) estimate (a measure of predictive error) at the daily time scale was 3.6 mm on average for the 1960–2003 period, as compared with 3.7 mm for the unadjusted models over the same period. There was a dry bias in the predictions relative to recorded values of between 1% and 6.7% of the average precipitation amounts for all time scales. Mean absolute predictive errors of the daily, pentad, and monthly models were 2.5 mm (52.7%), 0.9 mm (37.4%), and 11.2 mm (19.3%), respectively. In general, the model skill was closely tied to the density of the station network. The current adjusted models are available in grid form at ~2–10-km resolutions.
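A minimal analogue of the surface-fitting step, using SciPy's RBF interpolator with a thin-plate spline kernel on synthetic "station" data. This makes no attempt to reproduce the study's actual trivariate, elevation-dependent splines or its GCV-based smoothing selection.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
stations = rng.uniform(0.0, 100.0, size=(60, 2))      # station x, y (km)
# Synthetic precipitation with a gentle west-east gradient plus noise.
precip = 5.0 + 0.04 * stations[:, 0] + rng.normal(0.0, 0.1, 60)

# Thin-plate spline surface through the station values (no smoothing,
# so it interpolates the stations exactly).
surface = RBFInterpolator(stations, precip, kernel="thin_plate_spline")

# Evaluate on a coarse grid, analogous to the gridded products described.
gx, gy = np.meshgrid(np.linspace(0.0, 100.0, 11), np.linspace(0.0, 100.0, 11))
grid_pred = surface(np.column_stack([gx.ravel(), gy.ravel()]))
```

The observation above that skill tracks station density follows directly from this construction: the surface is anchored only at station locations, so predictive error grows in data-sparse regions.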

Open access
Sarah D. Bang and Daniel J. Cecil

Abstract

Several studies in the literature have developed approaches to diagnose hailstorms from satellite-borne passive microwave imagery and build nearly global climatologies of hail. This paper uses spaceborne Ku-band radar measurements from the Global Precipitation Measurement (GPM) mission Dual-Frequency Precipitation Radar (DPR) to validate several passive microwave approaches. We assess the retrievals on the basis of how tightly they constrain the radar reflectivity at −20°C and how this measured radar reflectivity aloft varies geographically. The algorithm that combines minimum 19-GHz polarization-corrected temperature (PCT) with a 37-GHz PCT depression normalized by tropopause height constrains the radar reflectivity most tightly and gives the least appearance of regional biases. A retrieval that is based on a 19-GHz PCT threshold of 261 K also produces tightly clustered profiles of radar reflectivity, with little regional bias. An approach using regionally adjusted minimum 37-GHz PCT performs relatively well, but our results indicate it may overestimate hail in some subtropical and midlatitude regions. A threshold applied to the minimum 37-GHz PCT (≤230 K), without any scaling by region or probability of hail, overestimates hail in the tropics and underestimates it beyond the tropics. For all retrieval approaches, storms identified as having hail tended to have radar reflectivity profiles that are consistent with general expectations for hailstorms (reflectivity > 50 dBZ below the 0°C level, and > 40 dBZ extending far above 0°C). Profiles from oceanic regions tended to have more rapidly decreasing reflectivity with height than profiles from other regions. Subtropical, high-latitude, and high-terrain land profiles had the slowest decreases of reflectivity with height.
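Polarization-corrected temperature combines vertically and horizontally polarized brightness temperatures to suppress the surface-water signal, PCT = (1 + β)·Tb_V − β·Tb_H; the β value below is a commonly used 37-GHz choice but should be treated as an assumption here. The fixed-threshold retrieval discussed above then reduces to a single comparison.

```python
def pct(tb_v, tb_h, beta=1.2):
    """Polarization-corrected temperature (K):
    PCT = (1 + beta)*Tb_V - beta*Tb_H.
    beta ~ 1.2 is a typical 37-GHz value, assumed for illustration."""
    return (1.0 + beta) * tb_v - beta * tb_h

def hail_flag_fixed_threshold(tb_v37, tb_h37, threshold_k=230.0):
    """The simple retrieval critiqued above: flag hail whenever the
    37-GHz PCT falls at or below 230 K, with no regional scaling."""
    return pct(tb_v37, tb_h37) <= threshold_k

deep_storm = hail_flag_fixed_threshold(210.0, 200.0)  # PCT = 222 K -> True
warm_scene = hail_flag_fixed_threshold(270.0, 260.0)  # PCT = 282 K -> False
```

The regional biases reported above motivate the better-performing variants: normalizing the PCT depression by tropopause height, or adjusting the threshold regionally, rather than applying one global cutoff.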

Open access