Search Results
You are looking at 1–10 of 28 items for
- Author or Editor: Christopher J. Anderson
Abstract
Large, long-lived convective systems over the United States in 1992 and 1993 have been classified according to physical characteristics observed in satellite imagery as quasi-circular [mesoscale convective complex (MCC)] or elongated [persistent elongated convective system (PECS)] and cataloged. The catalog includes the time of initiation, maximum extent, termination, duration, area of the −52°C cloud shield at the time of maximum extent, significant weather associated with each occurrence, and tracks of the −52°C cloud-shield centroid.
Both MCC and PECS favored nocturnal development and on average lasted about 12 h. In both 1992 and 1993, PECS produced −52°C cloud-shield areas of greater extent and occurred more frequently than MCCs. The mean position of initiation for PECS in 1992 and 1993 followed a seasonal shift similar to the climatological seasonal shift for MCC occurrences but was displaced eastward of the mean position of MCC initiation in 1992 and 1993. The spatial distribution of MCC and PECS occurrences contains a period of persistent development near 40°N in July 1992 and July 1993 that contributed to the extreme wetness experienced in the Midwest during these two months.
Both MCC and PECS initiated in environments characterized by deep, synoptic-scale ascent associated with continental-scale baroclinic waves. PECS occurrences initiated more often as vigorous waves exited the intermountain region, whereas MCCs initiated more often within a high-amplitude wave with a trough positioned over the northwestern United States and a ridge positioned over the Great Plains. The low-level jet transported moisture into the region of initiation for both MCC and PECS occurrences. The areal extent of convective initiation was limited by the orientation of low-level features for MCC occurrences.
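The MCC/PECS distinction above rests on the size, lifetime, and shape of the −52°C cloud shield. A minimal sketch of such a classification rule follows, assuming commonly cited Maddox-style thresholds (shield area of at least 50,000 km², duration of at least 6 h, and an eccentricity of 0.7 separating quasi-circular from elongated shields); the thresholds and the ellipse-fit inputs are assumptions, not values taken from the catalog itself.

```python
# Hypothetical sketch: classify a long-lived convective system from its
# -52 degC IR cloud shield, using commonly cited (assumed) thresholds.
from dataclasses import dataclass

@dataclass
class CloudShield:
    area_km2: float        # area of the -52 degC cloud shield at maximum extent
    eccentricity: float    # minor-axis/major-axis ratio of a fitted ellipse (0-1)
    duration_h: float      # hours the shield met the size criterion

def classify_system(shield: CloudShield,
                    min_area_km2: float = 50_000.0,    # assumed size threshold
                    min_duration_h: float = 6.0,       # assumed duration threshold
                    circular_ecc: float = 0.7) -> str: # assumed shape threshold
    """Return 'MCC', 'PECS', or 'neither' for a candidate system."""
    if shield.area_km2 < min_area_km2 or shield.duration_h < min_duration_h:
        return "neither"
    # Quasi-circular shields are cataloged as MCCs, elongated ones as PECS.
    return "MCC" if shield.eccentricity >= circular_ecc else "PECS"

print(classify_system(CloudShield(area_km2=120_000, eccentricity=0.45, duration_h=12)))
# -> PECS
```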
Abstract
Reanalysis datasets that are produced by assimilating observations into numerical forecast models may contain unrealistic features owing to the influence of the underlying model. The authors have evaluated the potential for such errors to affect the depiction of summertime low-level jets (LLJs) in the NCEP–NCAR reanalysis by comparing the incidence of LLJs over 7 yr (1992–98) in the reanalysis to hourly observations obtained from the NOAA Wind Profiler Network. The profiler observations are not included in the reanalysis, thereby providing an independent evaluation of the ability of the reanalysis to represent LLJs.
LLJs in the NCEP–NCAR reanalysis exhibit realistic spatial structure, but strong LLJs are infrequent in the lee of the Rocky Mountains, causing substantial bias in LLJ frequency. In this region the forecast by the reanalysis model diminishes the ageostrophic wind, forcing the analysis scheme to restore the ageostrophic wind. The authors recommend sensitivity tests of LLJ simulations by GCMs in which terrain resolution and horizontal grid spacing are varied independently.
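Comparing LLJ incidence between the reanalysis and the profiler observations requires an objective jet criterion applied to each wind profile. The sketch below is a minimal illustration, assuming a Bonner-style criterion (a wind maximum in the lowest few kilometers exceeding a speed threshold, with a required decrease above the maximum); the thresholds shown are assumptions, not necessarily those used in the study.

```python
import numpy as np

def detect_llj(speed_ms, height_m,
               max_height_m=3000.0,   # assumed: search for the jet nose below 3 km
               min_speed_ms=12.0,     # assumed: Bonner category-1 style speed threshold
               min_falloff_ms=6.0):   # assumed: required decrease above the maximum
    """Return True if the wind profile contains a low-level jet.

    speed_ms, height_m: 1-D arrays of wind speed and height above ground,
    ordered from the lowest to the highest level.
    """
    speed = np.asarray(speed_ms, dtype=float)
    height = np.asarray(height_m, dtype=float)
    low = height <= max_height_m
    if not low.any():
        return False
    k = int(np.argmax(np.where(low, speed, -np.inf)))  # index of the low-level maximum
    if speed[k] < min_speed_ms:
        return False
    # Require the speed to fall off by min_falloff_ms somewhere above the nose.
    above = speed[k + 1:]
    return above.size > 0 and (speed[k] - above.min()) >= min_falloff_ms

# Example: a nocturnal profile with a 16 m/s jet nose near 750 m
heights = np.array([250, 500, 750, 1000, 1500, 2000, 3000, 4000])
speeds  = np.array([10., 15., 16., 14., 11., 9., 8., 12.])
print(detect_llj(speeds, heights))  # -> True
```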
Abstract
Large, long-lived mesoscale convective systems (MCSs) over the United States during the 1997–98 El Niño are documented. Two periods of abnormal MCS activity are identified in 1998: from March to mid-April an unusually large number of quasi-linear MCSs were observed in the Midwest, while quasi-circular MCSs in June–August of 1998 were concentrated near 37°N rather than following a seasonal shift similar to that observed in the climatological distribution. Episodic surges of northerly low-level flow were infrequent in March 1998, thereby leading to an unusually high incidence of quasi-linear MCSs and to precipitation anomalies in the central United States.
Abstract
The authors have evaluated the performance of operational hourly data from a NOAA Wind Profiler Network 404-MHz radar profiler for detecting low-level jet (LLJ) events in the central United States. Independent, collocated rawinsonde and radar profiler data were time matched, producing 2614 paired observations over a 2-yr period. These observations were used to determine the impacts of the height of the first profiler range gate (500 m) and contamination of the hourly data by migrating birds on the ability of the profiler to accurately diagnose LLJ events. The profilers tend to underrepresent both the strength and frequency of occurrence of the LLJ. It was found that about 50% of LLJ events with wind speed maxima below 500 m were detected, increasing to 70%–80% for events having their wind speed maxima above 500 m. To reduce contamination by migrating birds when using profilers to detect the LLJ, a second-moment filtering technique with a threshold of approximately 2–2.5 m² s⁻² is suggested as an effective compromise between maximizing threat score and probability of detection while maintaining a low false alarm rate.
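The filtering step described above screens hourly profiler winds by the second moment (spectral width) of the Doppler spectrum before any LLJ test is applied, and the skill measures are standard contingency-table scores computed against the collocated rawinsonde observations. The sketch below illustrates both pieces under those assumptions; the threshold comes from the range suggested in the abstract, while the data and the LLJ yes/no calls are placeholders.

```python
import numpy as np

def filter_by_second_moment(speeds, second_moments, threshold=2.25):
    """Mask profiler wind speeds whose spectral second moment exceeds the
    threshold (m^2 s^-2), a simple screen for migrating-bird contamination."""
    speeds = np.asarray(speeds, dtype=float)
    bad = np.asarray(second_moments, dtype=float) > threshold
    return np.where(bad, np.nan, speeds)

def contingency_scores(profiler_llj, raob_llj):
    """Probability of detection, false alarm ratio, and threat score for
    paired profiler/rawinsonde LLJ yes-no calls (boolean arrays)."""
    p = np.asarray(profiler_llj, dtype=bool)
    r = np.asarray(raob_llj, dtype=bool)
    hits = np.sum(p & r)
    misses = np.sum(~p & r)
    false_alarms = np.sum(p & ~r)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    ts = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
    return pod, far, ts

# Screen a sample of hourly profiler speeds before any LLJ test:
clean = filter_by_second_moment([14.0, 18.0, 9.0], [0.8, 3.1, 1.2])

# Evaluate one candidate threshold against rawinsonde "truth":
profiler_calls = np.array([True, True, False, True, False, False])
raob_calls     = np.array([True, False, False, True, True, False])
print(contingency_scores(profiler_calls, raob_calls))  # (POD, FAR, TS)
```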
Abstract
The development and propagation of mesoscale convective systems (MCSs) were examined within the Weather Research and Forecasting (WRF) model using the Kain–Fritsch (KF) cumulus parameterization scheme and a modified version of this scheme. Mechanisms that lead to propagation in the parameterized MCSs are evaluated and compared between the two versions of the KF scheme. Sensitivity to the convective time step is identified and explored for its role in scheme behavior. The sensitivity of parameterized convection propagation to microphysical feedback and to the shape and magnitude of the convective heating profile is also explored.
Each version of the KF scheme has a favored calling frequency that alters the scheme’s initiation frequency despite using the same convective trigger function. The authors propose that this behavior results in part from interaction with computational damping in WRF. A propagating convective system develops in simulations with both versions, but the typical flow structures are distorted (elevated ascending rear inflow as opposed to a descending rear inflow jet as is typically observed). The shape and magnitude of the heating profile is found to alter the propagation speed appreciably, even more so than the microphysical feedback. Microphysical feedback has a secondary role in producing realistic flow features via the resolvable-scale model microphysics. Deficiencies associated with the schemes are discussed and improvements are proposed.
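The heating-profile sensitivity mentioned above can be illustrated with an idealized profile whose shape and magnitude are varied independently. The sketch below is purely illustrative, assuming a simple half-sine shape on normalized vertical levels with an ad hoc "top heaviness" knob; it is not the KF or WRF implementation.

```python
import numpy as np

def heating_profile(sigma, shape="half_sine", top_heaviness=0.0):
    """Idealized normalized convective heating profile on sigma levels
    (1 at the surface, 0 at cloud top). shape and top_heaviness are
    illustrative knobs of the kind varied in a sensitivity experiment."""
    sigma = np.asarray(sigma, dtype=float)
    base = np.sin(np.pi * (1.0 - sigma))               # half-sine, maximum at mid-levels
    if shape == "top_heavy":
        base = base * (1.0 - sigma) ** top_heaviness   # push the maximum upward
    return base / base.max()                           # normalize the shape to 1

def scale_heating(profile, rain_rate_mm_h, magnitude=1.0):
    """Rescale the normalized profile so its magnitude tracks the scheme's
    rain rate; 'magnitude' is the factor varied in the experiments."""
    return magnitude * rain_rate_mm_h * profile

sig = np.linspace(1.0, 0.0, 21)
q_mid = scale_heating(heating_profile(sig), rain_rate_mm_h=5.0)
q_top = scale_heating(heating_profile(sig, "top_heavy", 1.0), 5.0, magnitude=1.5)
```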
Abstract
The authors have altered the vertical profile of updraft mass flux detrainment in an implementation of the Kain–Fritsch2 (KF2) convective parameterization within the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (Penn State–NCAR) Mesoscale Model (MM5). The effect of this modification was to alter the vertical profile of convective parameterization cloud mass (including cloud water and ice) supplied to the host model for explicit simulation by the grid-resolved dynamical equations and parameterized microphysical processes. These modifications, and their sensitivity to horizontal resolution, were tested in a matrix of experimental simulations of the June–July 1993 flood in the central United States.
The KF2 modifications impacted the diurnal cycle of precipitation by reducing precipitation from the convective parameterization and increasing precipitation from more slowly evolving mesoscale processes. The modified KF2 reduced an afternoon bias of high precipitation rate in both low- and high-resolution simulations but affected mesoscale precipitation processes only in high-resolution simulations. The combination of high-resolution and modified KF2 resulted in more frequent and more realistically clustered propagating, nocturnal mesoscale precipitation events and agreed best with observations of the nocturnal precipitation rate.
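The diurnal-cycle comparison described above amounts to partitioning hourly precipitation into convective-scheme and grid-resolved contributions and compositing each by hour of day. A minimal sketch of that diagnostic follows, with stand-in random arrays in place of MM5 output; the variable names and the partitioning are assumptions for illustration.

```python
import numpy as np

def diurnal_cycle(hourly_precip_mm, hours_utc):
    """Composite hourly precipitation into a 24-bin mean diurnal cycle."""
    precip = np.asarray(hourly_precip_mm, dtype=float)
    hours = np.asarray(hours_utc) % 24
    return np.array([precip[hours == h].mean() for h in range(24)])

# Hypothetical arrays: convective-scheme and grid-resolved precipitation
# accumulated each hour over the analysis region for June-July 1993.
n_hours = 24 * 61
hours = np.arange(n_hours) % 24
conv_precip = np.random.gamma(1.0, 0.2, n_hours)   # stand-in model output
grid_precip = np.random.gamma(1.0, 0.1, n_hours)   # stand-in model output

cycle_conv = diurnal_cycle(conv_precip, hours)
cycle_grid = diurnal_cycle(grid_precip, hours)
# Comparing cycle_conv + cycle_grid against an observed diurnal cycle would
# expose an afternoon bias in the convective-scheme contribution.
```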
Abstract
The number of tornadoes reported in the United States is believed to be less than the actual incidence of tornadoes, especially prior to the 1990s, because tornadoes may be undetectable by human witnesses in sparsely populated areas and in areas where obstructions limit the line of sight. A hierarchical Bayesian model is used to simultaneously correct for population-based sampling bias and estimate tornado density using historical tornado report data. The expected result is that F2–F5 tornado reports would vary less with population density than F0–F1 reports. The results agree with this hypothesis for the following population centers: Atlanta, Georgia; Champaign, Illinois; and Des Moines, Iowa. However, the results indicate just the opposite in Oklahoma. It is hypothesized that this result is explained by the misclassification of tornadoes that warranted an F2–F5 rating but were classified as F0–F1, thereby artificially decreasing the number of F2–F5 reports and increasing the number of F0–F1 reports in rural Oklahoma.
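The core idea above is that observed report counts are a thinned version of the true tornado counts, with a detection probability that rises with population density. The sketch below is a deliberately simplified, single-level stand-in for the hierarchical Bayesian model: a Poisson likelihood with an assumed exponential detection curve and weak priors, evaluated on a coarse parameter grid with placeholder data.

```python
import numpy as np

# Hypothetical cell data: observed tornado reports, cell area (km^2), and
# population density (people per km^2) for a handful of grid cells.
reports = np.array([3, 7, 1, 0, 12])
area    = np.array([2500., 2500., 2500., 2500., 2500.])
popden  = np.array([5., 40., 2., 1., 120.])

def detection_prob(popden, alpha):
    """Assumed detection model: the probability that a tornado is reported
    rises toward 1 as population density increases."""
    return 1.0 - np.exp(-alpha * popden)

def log_posterior(lam, alpha):
    """Poisson likelihood for thinned counts plus weak (assumed) priors."""
    if lam <= 0 or alpha <= 0:
        return -np.inf
    mu = lam * area * detection_prob(popden, alpha)  # expected reports per cell
    loglik = np.sum(reports * np.log(mu) - mu)
    logprior = -lam - alpha                          # exponential(1) priors
    return loglik + logprior

# Coarse grid approximation to the posterior of the true tornado rate (lam,
# tornadoes per km^2 per period) and the detection parameter alpha.
lams   = np.linspace(1e-4, 5e-3, 200)
alphas = np.linspace(1e-3, 0.2, 200)
logp = np.array([[log_posterior(l, a) for a in alphas] for l in lams])
i, j = np.unravel_index(np.argmax(logp), logp.shape)
print(f"posterior mode: rate={lams[i]:.4f} per km^2, alpha={alphas[j]:.3f}")
```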
Abstract
Parameterizations in numerical models account for unresolved processes. These parameterizations are inherently difficult to construct and as such typically have notable imperfections. One approach to account for this uncertainty is through stochastic parameterizations. This paper describes a methodological approach whereby existing parameterizations provide the basis for a simple stochastic approach. More importantly, this paper describes systematically how one can “train” such parameterizations with observations. In particular, a stochastic trigger function has been implemented for convective initiation in the Kain–Fritsch (KF) convective parameterization scheme within the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (Penn State–NCAR) Mesoscale Model (MM5). In this approach, convective initiation within MM5 is modeled by a binary random process. The probability of initiation is then modeled through a transformation in terms of the standard KF trigger variables, but with random parameters. The distribution of these random parameters is obtained through a Bayesian Monte Carlo procedure informed by radar reflectivities. Estimates of these distributions are then incorporated into the KF trigger function, giving a meaningful stochastic (distributional) parameterization. The approach is applied to cases from the International H2O project (IHOP). The results suggest the stochastic parameterization/Bayesian learning approach has potential to improve forecasts of convective precipitation in mesoscale models.
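The stochastic trigger described above replaces a deterministic yes/no initiation decision with a Bernoulli draw whose probability is a transformation of the usual trigger variables, with the transformation's parameters drawn from a trained posterior. The sketch below illustrates that structure; the logistic form, the single trigger variable, and the posterior samples are assumptions for illustration, not the scheme's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of the trigger parameters (intercept and
# slope), as might be produced by a Bayesian Monte Carlo training step that
# compares trigger behavior with radar reflectivities.
posterior_samples = rng.normal(loc=[-1.0, 2.0], scale=[0.3, 0.5], size=(500, 2))

def initiation_probability(dtheta, params):
    """Logistic transform of a standard trigger variable: here dtheta is an
    assumed parcel temperature perturbation relative to the environment (K)."""
    a, b = params
    return 1.0 / (1.0 + np.exp(-(a + b * dtheta)))

def stochastic_trigger(dtheta):
    """Draw one parameter set from the posterior, then make a Bernoulli
    initiation decision for this grid column and convective call."""
    params = posterior_samples[rng.integers(len(posterior_samples))]
    p = initiation_probability(dtheta, params)
    return rng.random() < p, p

fired, p = stochastic_trigger(dtheta=0.8)
print(f"initiate convection: {fired} (p = {p:.2f})")
```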
Abstract
The most significant precipitation events in California occur during the winter and are often related to synoptic-scale storms from the Pacific Ocean. Because of the terrain characteristics and because urban and infrastructure expansion is concentrated in the lower-elevation areas of the California Central Valley, a high risk of flooding is usually associated with these events. In the present study, the area of interest was the American River basin (ARB). The main focus was to investigate methods for quantitative precipitation forecast (QPF) improvement by estimating the impact that various microphysical schemes, planetary boundary layer (PBL) schemes, and initialization methods have on cold-season precipitation, primarily orographically induced. For this purpose, 3-km grid spacing Weather Research and Forecasting (WRF) model simulations of four Hydrometeorological Testbed (HMT) events were used. For each event, four different microphysical schemes and two different PBL schemes were used. All runs were initialized with both a diabatic Local Analysis and Prediction System (LAPS) “hot” start and 40-km Eta analyses.
To quantify the impact of physical schemes, their interactions, and initial conditions upon simulated rain volume, the factor separation methodology was used. The results showed that simulated rain volume was particularly affected by changes in microphysical schemes for both initializations. When the initialization was changed from the LAPS to the Eta analysis, the change in the PBL scheme and the corresponding synergistic terms (the interactions between different microphysical and PBL schemes) had a statistically significant impact on rain volume. In addition, combining model runs according to their impact on simulated rain volume, as determined through the factor separation methodology, reduced the bias in simulated rain volume.
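Factor separation for two factors uses four runs that toggle each factor on and off to isolate each pure contribution and their interaction. A minimal sketch in the Stein–Alpert form follows; the rain-volume values are placeholders, not results from the study.

```python
# Stein-Alpert style factor separation for two factors (e.g., a change of
# microphysics scheme and a change of PBL scheme), applied to rain volume.
# The four rain-volume values below are placeholders, not results from the study.
f00 = 100.0  # control run: neither factor changed
f10 = 130.0  # only the microphysics scheme changed
f01 = 110.0  # only the PBL scheme changed
f11 = 155.0  # both changed

pure_microphysics = f10 - f00
pure_pbl          = f01 - f00
synergy           = f11 - f10 - f01 + f00  # interaction of the two changes

print(f"microphysics contribution: {pure_microphysics:+.1f}")
print(f"PBL contribution:          {pure_pbl:+.1f}")
print(f"synergistic term:          {synergy:+.1f}")
```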
Abstract
High-resolution (3 km) time-lagged (initialized every 3 h) multimodel ensembles were produced in support of the Hydrometeorological Testbed (HMT)-West-2006 campaign in northern California, covering the American River basin (ARB). Multiple mesoscale models were used, including the Weather Research and Forecasting (WRF) model, Regional Atmospheric Modeling System (RAMS), and fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5). Short-range (6 h) quantitative precipitation forecasts (QPFs) and probabilistic QPFs (PQPFs) were compared to the 4-km NCEP stage IV precipitation analyses for archived intensive operation periods (IOPs). The two sets of ensemble runs (operational and rerun forecasts) were examined to evaluate the quality of high-resolution QPFs produced by time-lagged multimodel ensembles and to investigate the impacts of ensemble configurations on forecast skill. Uncertainties in precipitation forecasts were associated with different models, model physics, and initial and boundary conditions. The diabatic initialization by the Local Analysis and Prediction System (LAPS) helped precipitation forecasts, while the selection of microphysics was critical in ensemble design. Probability biases in the ensemble products were addressed by calibrating PQPFs. Using artificial neural network (ANN) and linear regression (LR) methods, the bias correction of PQPFs and a cross-validation procedure were applied to three operational IOPs and four rerun IOPs. Both the ANN and LR methods effectively improved PQPFs, especially for lower thresholds. The LR method outperformed the ANN method in bias correction, in particular for a smaller training data size. More training data (e.g., one-season forecasts) are desirable to test the robustness of both calibration methods.
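The linear-regression calibration amounts to regressing observed exceedances on the raw ensemble probabilities and applying the fit to held-out cases, which is what the cross-validation procedure evaluates. The sketch below illustrates that idea with a leave-one-out loop and a Brier-score check on synthetic data; the functional form and data are assumptions, not the study's configuration.

```python
import numpy as np

def fit_linear_calibration(raw_prob, observed):
    """Least-squares fit of observed exceedance (0/1) on raw ensemble
    probability; returns (intercept, slope)."""
    A = np.column_stack([np.ones_like(raw_prob), raw_prob])
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coeffs

def apply_calibration(raw_prob, coeffs):
    """Apply the fit and clip to valid probabilities."""
    return np.clip(coeffs[0] + coeffs[1] * np.asarray(raw_prob), 0.0, 1.0)

def leave_one_out_brier(raw_prob, observed):
    """Cross-validated Brier score for the calibrated probabilities."""
    raw_prob = np.asarray(raw_prob, float)
    observed = np.asarray(observed, float)
    errs = []
    for k in range(len(raw_prob)):
        train = np.arange(len(raw_prob)) != k
        coeffs = fit_linear_calibration(raw_prob[train], observed[train])
        p = apply_calibration(raw_prob[k:k + 1], coeffs)[0]
        errs.append((p - observed[k]) ** 2)
    return float(np.mean(errs))

# Synthetic example: overconfident raw PQPFs for a low threshold.
rng = np.random.default_rng(1)
raw = rng.uniform(0, 1, 60)
obs = (rng.uniform(0, 1, 60) < 0.6 * raw + 0.1).astype(float)
print("LOO Brier score:", round(leave_one_out_brier(raw, obs), 3))
```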