Search Results
Showing 1–10 of 192 items for Author or Editor: Ming Xue
Abstract
High-order numerical diffusion is commonly used in numerical models to provide scale-selective control over small-scale noise. Conventional high-order schemes have an undesirable side effect, however: they can introduce noise themselves. Two types of monotonic high-order diffusion schemes are proposed. One is based on flux correction/limiting applied to the corrective fluxes, defined as the difference between the fluxes of a high-order (fourth order and above) diffusion scheme and those of a lower-order (typically second order) one. The overshooting and undershooting found near sharp gradients in the solutions of high-order diffusion schemes are prevented, while the highly scale-selective damping property is retained.
The second, simpler (flux-limited) scheme ensures that the diffusive fluxes are always downgradient; otherwise, the fluxes are set to zero. This scheme yields solutions as good as those of the first, more elaborate flux limiter in 1D cases and better solutions in 2D. It also preserves monotonicity in the solutions and is computationally much more efficient.
The simple flux-limited fourth- and sixth-order diffusion schemes are also applied to thermal bubble convection. Overshooting and undershooting are shown to be consistently smaller when the flux-limited version of the high-order diffusion is used, whether or not the advection scheme is monotonic. This conclusion applies to both scalar and momentum fields. Higher-order monotonic diffusion works better, and even more so when used together with monotonic advection.
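To make the simple limiter concrete: pure fourth-order diffusion, ∂u/∂t = −k₄ ∂⁴u/∂x⁴, can be written in flux form as ∂u/∂t = −∂F/∂x with F = k₄ ∂³u/∂x³, so the limiting reduces to a sign test at each cell interface. Below is a minimal 1D sketch in Python/NumPy; the function name and the coefficient `k4` are illustrative assumptions, not code from the paper, and boundary points are simply left untouched.

```python
import numpy as np

def limited_fourth_order_diffusion_step(u, k4, dx, dt):
    """One explicit step of flux-limited fourth-order diffusion in 1D,
    following the 'simple' limiter described above: any diffusive flux
    that is not downgradient is set to zero. Illustrative sketch only."""
    d1 = np.diff(u) / dx              # du/dx at cell interfaces
    d2 = np.diff(d1) / dx             # d2u/dx2 at cell centers
    d3 = np.diff(d2) / dx             # d3u/dx3 at interior interfaces
    flux = k4 * d3                    # fourth-order diffusive flux F = k4 * u_xxx
    # A downgradient flux opposes the local gradient (F * du/dx <= 0);
    # zero any flux that would instead transport upgradient.
    flux = np.where(flux * d1[1:-1] > 0.0, 0.0, flux)
    dudt = np.zeros_like(u)
    dudt[2:-2] = -np.diff(flux) / dx  # tendency du/dt = -dF/dx, interior points
    return u + dt * dudt
```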
Abstract
Various configurations of the intermittent data assimilation procedure for Level-II Weather Surveillance Radar-1988 Doppler (WSR-88D) data are examined for the analysis and prediction of a tornadic thunderstorm that occurred on 8 May 2003 near Oklahoma City, Oklahoma. Several tornadoes were produced by this thunderstorm, causing extensive damage in the south Oklahoma City area. Within the rapidly cycled assimilation system, the Advanced Regional Prediction System three-dimensional variational data assimilation (ARPS 3DVAR) is employed to analyze conventional and radar radial velocity data, while the ARPS complex cloud analysis procedure is used to analyze cloud and hydrometeor fields and to adjust in-cloud temperature and moisture fields based on reflectivity observations and the preliminary analysis of the atmosphere. Forecasts of up to 2.5 h are made from the assimilated initial conditions. Two one-way nested grids at 9- and 3-km grid spacings are employed; the assimilation configuration experiments are conducted on the 3-km grid only, with the 9-km grid configuration kept the same. Data from the Oklahoma City radar are used. Different combinations of the assimilation frequency, the in-cloud temperature adjustment scheme, and the length and coverage of the assimilation window are tested, and the results are discussed with respect to the length and evolution stage of the thunderstorm life cycle. It is found that even though the general assimilation method remains the same, the assimilation settings can significantly impact the results of the assimilation and the subsequent forecast. For this case, a 1-h-long assimilation window covering the entire initial stage of the storm, together with a 10-min spinup period before storm initiation, works best. The assimilation frequency and the in-cloud temperature adjustment scheme should be set carefully to add suitable amounts of potential energy during assimilation. A high assimilation frequency does not necessarily lead to a better result, because of the significant adjustment during the initial forecast period. When a short assimilation window covering only the later part of the storm's initial stage is used, a high assimilation frequency together with a temperature adjustment scheme based on latent heat release can quickly build up the storm and produce a reasonable analysis and forecast. The results also show that when data from a single Doppler radar are assimilated with properly chosen assimilation configurations, the model is able to predict the evolution of the 8 May 2003 Oklahoma City tornadic thunderstorm well for up to 2.5 h. The implications of the choice of assimilation settings for real-time applications are discussed.
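As a rough illustration of the cycling structure being varied in these experiments (window length and coverage, cycle frequency), a schematic sketch follows. The injected callables are hypothetical stand-ins for the ARPS 3DVAR analysis, the complex cloud analysis, and the ARPS forecast model, not their actual interfaces.

```python
from datetime import timedelta

def cycled_assimilation(background, fetch_obs, analyze, cloud_adjust, forecast,
                        window_start, window_end, cycle_minutes=10):
    """Schematic of an intermittent assimilation window: analyze at each
    cycle time, then forecast to the next one. All callables are
    user-supplied stand-ins; this is an illustration, not ARPS code."""
    state, t = background, window_start
    while t <= window_end:
        obs = fetch_obs(t)                          # conventional + radar data valid at t
        state = analyze(state, obs)                 # 3DVAR-style analysis step
        state = cloud_adjust(state, obs)            # reflectivity-based cloud/temperature adjustment
        if t < window_end:
            state = forecast(state, cycle_minutes)  # advance to the next analysis time
        t += timedelta(minutes=cycle_minutes)
    return state                                    # initial condition for the free forecast
```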
Abstract
When assessed using the difference between urban and rural air temperatures, the urban heat island (UHI) is most prominent during the nighttime. Typically, nocturnal UHI intensity is maintained throughout the night. The UHI intensity over Dallas–Fort Worth (DFW), Texas, however, experienced frequent “collapses” (sudden decreases) around midnight during August 2011, while the region was experiencing an intense heat wave. Observational and modeling studies were conducted to understand this unique phenomenon. Sea-breeze passage was found to be ultimately responsible for the collapses of the nocturnal UHI. A sea-breeze circulation developed along the coast of the Gulf of Mexico during the daytime. During the nighttime, the sea-breeze circulation was advected inland (as far as ~400 km) by the low-level-jet-enhanced southerly flow, maintaining the characteristics of sea-breeze fronts, including the enhanced wind shear and vertical mixing. Ahead of the front, surface radiative cooling under calm winds enhanced the near-surface temperature inversion in rural areas through the night. During the frontal passage (around midnight at DFW), the enhanced vertical mixing at the leading edge of the fronts brought warmer air to the surface, leading to rural surface warming events. In contrast, urban effects led to a nearly neutral urban boundary layer, so the enhanced mechanical mixing associated with sea-breeze fronts did not increase urban surface temperature. The different responses to the sea-breeze frontal passages between rural areas (warming) and urban areas (no warming) led to the collapse of the UHI. The inland penetration of sea-breeze fronts at such large distances from the coast, and their effects on the UHI, have not previously been documented in the literature.
Abstract
A Doppler radar data assimilation system is developed based on an ensemble Kalman filter (EnKF) method and tested with simulated radar data from a supercell storm. As a first implementation, it is assumed that the forward models are perfect and that the radar data are sampled at the analysis grid points. A general-purpose nonhydrostatic compressible model is used with the inclusion of complex multiclass ice microphysics. New aspects of this study compared with previous work include the demonstration of the ability of the EnKF method to retrieve multiple microphysical species associated with a multiclass ice microphysics scheme and to accurately retrieve the wind and thermodynamic variables. Also new are the inclusion of reflectivity observations and the determination of the relative roles of the radial velocity and reflectivity data, as well as their spatial coverage, in recovering the full flow and cloud fields. In general, the system is able to reestablish the model storm extremely well after a number of assimilation cycles, and the best results are obtained when both radial velocity and reflectivity data, including reflectivity information outside of the precipitation regions, are used. A significant positive impact of the reflectivity assimilation is found even though the observation operator involved is nonlinear. The results also show that a compressible model that contains acoustic modes, and hence the associated error growth, performs at least as well as the anelastic model used in previous EnKF studies at the cloud scale.
Flow-dependent and dynamically consistent background error covariances estimated from the forecast ensemble play a critical role in successful assimilation and retrieval. When the assimilation cycles start from random initial perturbations, better results are obtained when the updating of the fields not directly related to radar reflectivity is withheld during the first few cycles. In fact, during the first few cycles, updating the variables indirectly related to reflectivity hurts the analysis, because the estimated background covariances are unreliable at this stage of the data assimilation process, which in turn is related to the way the forecast ensemble is initialized. Forecasts of the supercell storm starting from the best-assimilated initial conditions are shown to remain very good for at least 2 h.
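For reference, the core of an ensemble-filter update of this kind can be written compactly. The sketch below implements a standard serial ensemble square root update for a single scalar observation (the Whitaker–Hamill form); it shows how the flow-dependent covariances discussed above enter through the ensemble perturbations. It is a generic illustration, not the authors' code.

```python
import numpy as np

def ensrf_update_scalar_ob(ens, hx, y, r):
    """Serial EnSRF update for one scalar observation. ens is
    (n_state, n_members); hx is the observation operator applied to each
    member; y is the observed value; r the observation-error variance."""
    n = ens.shape[1]
    xbar = ens.mean(axis=1, keepdims=True)
    xp = ens - xbar                                # background perturbations
    hxbar = hx.mean()
    hxp = hx - hxbar                               # perturbations in observation space
    phT = xp @ hxp / (n - 1)                       # cov(x, Hx): flow-dependent covariance
    hphT = hxp @ hxp / (n - 1)                     # var(Hx)
    k = phT / (hphT + r)                           # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(r / (hphT + r)))  # square root reduction factor
    xbar_a = xbar[:, 0] + k * (y - hxbar)          # update the mean with the full gain
    xp_a = xp - np.outer(alpha * k, hxp)           # update perturbations with reduced gain
    return xbar_a[:, None] + xp_a
```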
Abstract
The ensemble Kalman filter method is applied to correct errors in five fundamental microphysical parameters that are closely involved in the definition of the drop/particle size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, for a model-simulated supercell storm, using radar data. The five parameters are the intercept parameters for rain, snow, and hail/graupel and the bulk densities of hail/graupel and snow. The ensemble square root filter (EnSRF) is employed for simultaneous state and parameter estimation.
The five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. A data selection procedure based on correlation information is introduced, which, combined with variance inflation, effectively avoids the collapse of the spread of the parameter ensemble and hence filter divergence. The parameter estimation results demonstrate, for the first time, that the ensemble-based method can be used to correct model errors in microphysical parameters through simultaneous state and parameter estimation using radar reflectivity observations. When error exists in only one of the microphysical parameters, the parameter is successfully estimated without exception. The estimation of multiple parameters is less reliable, mainly because the identifiability of the parameters becomes weaker and the problem might have no unique solution. The parameter estimation results are found to be very sensitive to the realization of the initial parameter ensemble, which is mainly related to the use of relatively small ensemble sizes; increasing the ensemble size generally improves the parameter estimation. The quality of parameter estimation also depends on the quality of the observation data. It is also found that the results of state estimation are generally improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not very accurate.
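Simultaneous state and parameter estimation of this kind is typically implemented through state augmentation: the uncertain parameters are appended to each member's state vector so that a single filter update adjusts both, with the parameters corrected via their ensemble correlations with forecast reflectivity. A minimal sketch follows, assuming a log transform (the intercept parameters and bulk densities are positive and span orders of magnitude); it illustrates the general technique, not the paper's implementation.

```python
import numpy as np

def augment_state_with_params(ens_state, ens_params):
    """Append each member's uncertain parameters (log-transformed) to its
    state vector so one filter update estimates both. Sketch only."""
    return np.vstack([ens_state, np.log(ens_params)])

def split_state_and_params(ens_aug, n_params):
    """Recover the state block and the back-transformed parameter block."""
    return ens_aug[:-n_params], np.exp(ens_aug[-n_params:])
```

Each reflectivity observation would then be assimilated with an update like the EnSRF sketch shown earlier, applied to the augmented ensemble.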
Abstract
The idealized supercell simulations in a previous study by Roberts et al. are further analyzed to clarify the physical mechanisms leading to differences in mesocyclone intensification between an experiment with surface friction applied to the full wind (FWFRIC) and an experiment with friction applied to the environmental wind only (EnvFRIC). The low-level mesocyclone intensifies rapidly during the 3 min preceding tornadogenesis in FWFRIC, while the intensification during the same period is much weaker in EnvFRIC, which fails to produce a tornado. To quantify the mechanisms responsible for this discrepancy in mesocyclone evolution, material circuits enclosing the low-level mesocyclone are initialized and traced back in time, and circulation budgets for these circuits are analyzed. The results show that in FWFRIC, surface drag directly generates a substantial proportion of the final circulation around the mesocyclone, especially below 1 km AGL; in EnvFRIC, circulation budgets indicate the mesocyclone circulation is overwhelmingly barotropic. It is proposed that the import of near-ground, frictionally generated vorticity into the low-level mesocyclone in FWFRIC is a key factor causing the intensification and lowering of the mesocyclone toward the ground, creating a large upward vertical pressure gradient force that leads to tornadogenesis. Similar circulation analyses are also performed for circuits enclosing the tornado at its genesis stage. The frictionally generated circulation component is found to contribute more than half of the final circulation for circuits enclosing the tornado vortex below 400 m AGL, and the frictional contribution decreases monotonically with the height of the final circuit.
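The circulation about a material circuit, central to the budgets described above, is the closed line integral C = ∮ v · dl; discretized over a ring of parcels it is a short computation. A sketch of the generic diagnostic (not the authors' analysis code):

```python
import numpy as np

def circulation(positions, velocities):
    """Circulation C = closed line integral of v · dl around a material
    circuit, approximated by a midpoint sum over parcel segments.
    positions and velocities are (n_parcels, 3) arrays ordered around
    the closed circuit; the last parcel connects back to the first."""
    dl = np.roll(positions, -1, axis=0) - positions           # segment vectors
    v_mid = 0.5 * (velocities + np.roll(velocities, -1, axis=0))
    return np.sum(v_mid * dl)                                 # sum of v · dl
```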
Abstract
The 12–13 June 2002 convective initiation case from the International H2O Project (IHOP_2002) field experiment over the central Great Plains of the United States is simulated numerically with the Advanced Regional Prediction System (ARPS) at 3-km horizontal resolution. The case involves a developing mesoscale cyclone; a dryline extending southwestward from a low center, with a cold front closely behind that intercepts the midsection of the dryline; and an outflow boundary stretching eastward from the low center, the result of earlier mesoscale convection. Convective initiation occurred in the afternoon at several locations along and near the dryline or near the outflow boundary, but it was not captured by the most intensive deployment of observational instruments during the field experiment, which focused instead on the dryline–outflow boundary intersection point.
Standard and special surface and upper-air observations collected during the field experiment are assimilated into the ARPS at hourly intervals during a 6-h preforecast period in the control experiment. This experiment captured the initiation of four groups of convective cells rather well, with timing errors of 10 to 100 min and location errors of 5 to 60 km. The general processes of convective initiation are discussed. Interestingly, a secondary initiation of cells, due to the collision between the main outflow boundary and the gust fronts developing out of earlier model-predicted convection, is also captured accurately about 7 h into the prediction. The organization of the cells into a squall line after 7 h is reproduced less well.
A set of sensitivity experiments is performed in which the impact of assimilating nonstandard data gathered during IHOP_2002, as well as the length and interval of the data assimilation, is examined. Overall, the control experiment that assimilated the most data produced the best forecast, although some of the other experiments did better in some aspects, including the timing and location of the initiation of some of the cell groups. Possible reasons for the latter results are suggested. The lateral boundary locations are also found to have significant impacts on the initiation and subsequent evolution of convection, by affecting the interior flow response and/or by feeding more accurate observational information through the boundary, as available gridded analyses from an operational mesoscale model were used as the boundary condition. Another experiment examines the impact of the vertical correlation scale in the analysis scheme on the cold pool analysis and the subsequent forecast. A companion paper will analyze in more detail the process and mechanism of convective initiation, based on the results of a nested 1-km forecast.
Abstract
The possibility of estimating fundamental parameters common in single-moment ice microphysics schemes using radar observations is investigated for a model-simulated supercell storm by examining parameter sensitivity and identifiability. The parameters include the intercept parameters for rain, snow, and hail/graupel and the bulk densities of snow and hail/graupel. These parameters are closely involved in the definition of the drop/particle size distributions of microphysical species but are often assigned specified values that are highly uncertain.
The sensitivity of the model forecast within data assimilation cycles to the parameter values, and the issue of solution uniqueness of the estimation problem, are examined. The ensemble square root filter (EnSRF) is employed for model state estimation. Sensitivity experiments show that errors in the microphysical parameters have a larger impact on the model microphysical fields than on the wind fields; radar reflectivity observations are therefore preferred over radial velocity for microphysical parameter estimation. The model response time to errors in individual parameters is also investigated; the results suggest that radar data should be used at about 5-min intervals for parameter estimation. The response functions calculated from ensemble mean forecasts for all five individual parameters are bowl shaped (concave upward), with unique minima occurring at or very close to the true values; the true values of these parameters can therefore be retrieved, at least in cases where only one parameter contains error.
The identifiability of multiple parameters estimated together is evaluated from their correlations with the forecast reflectivity. Significant levels of correlation are found that can be interpreted physically. As the number of uncertain parameters increases, both the level and the areal coverage of significant correlations decrease, implying increased difficulty with multiple-parameter estimation. The details of the estimation procedure and the results of a complete set of estimation experiments are presented in Part II of this paper.
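The response functions described above can be thought of as a scalar misfit evaluated over trial parameter values; a unique minimum at the true value is what makes a parameter identifiable. A hedged sketch, where `run_ensemble_forecast` is a user-supplied, hypothetical callable returning the ensemble mean forecast reflectivity for a trial parameter value:

```python
import numpy as np

def reflectivity_response(run_ensemble_forecast, obs_reflectivity, param_values):
    """Response function J(p): RMS difference between the ensemble mean
    forecast reflectivity obtained with trial parameter value p and the
    reference (truth-run) reflectivity. A bowl shape of J over param_values,
    with its minimum at the true value, indicates identifiability."""
    return np.array([
        np.sqrt(np.mean((run_ensemble_forecast(p) - obs_reflectivity) ** 2))
        for p in param_values
    ])
```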
Abstract
A new efficient dual-resolution (DR) data assimilation algorithm is developed based on the ensemble Kalman filter (EnKF) method and tested using simulated radar radial velocity data for a supercell storm. Radar observations are assimilated on both high-resolution and lower-resolution grids using the EnKF algorithm with flow-dependent background error covariances estimated from the lower-resolution ensemble. It is shown that the flow-dependent and dynamically evolved background error covariances thus estimated are effective in producing quality analyses on the high-resolution grid.
The DR method has the advantage of significantly reducing the computational cost of the EnKF analysis. In this system, the lower-resolution ensemble provides the flow-dependent background error covariance, while the single high-resolution forecast and analysis provide the benefit of higher resolution, which is important for resolving the internal structures of thunderstorms. The relative smoothness of the covariance obtained from the lower-resolution (4 km) ensemble does not appear to significantly degrade the quality of the analysis, because the cross covariance among different variables is of first-order importance for “retrieving” unobserved variables from the radar radial velocity data.
For the DR analysis, an ensemble size of 40 appears to be a reasonable choice with a 4-km horizontal resolution for the ensemble and a 1-km resolution for the high-resolution analysis. Several sensitivity tests show that the DR EnKF system is quite robust to different observation errors. A thinned data spacing of 4 km is an acceptable compromise under the constraints of real-time applications, while a data spacing of 8 km leads to significant degradation of the analysis.
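The essence of the DR update is that the gain is built from coarse-ensemble perturbations interpolated to the fine grid, while the increment is applied to the single high-resolution state. A sketch for one scalar observation follows; `interp_to_hi` is a user-supplied coarse-to-fine interpolation operator, and the function illustrates the idea rather than the paper's implementation.

```python
import numpy as np

def dual_resolution_update(x_hi, ens_lo, interp_to_hi, hx_members, hx_hi, y, r):
    """One dual-resolution EnKF update for a scalar observation: the
    background error covariance comes from the coarse ensemble (interpolated
    to the fine grid), and the increment is applied to the high-resolution
    state. hx_members: coarse members in observation space; hx_hi: the
    high-resolution background in observation space."""
    n = ens_lo.shape[1]
    xp_hi = interp_to_hi(ens_lo - ens_lo.mean(axis=1, keepdims=True))
    hxp = hx_members - hx_members.mean()
    gain = (xp_hi @ hxp) / (hxp @ hxp + (n - 1) * r)  # cov(x,Hx)/(var(Hx)+R)
    return x_hi + gain * (y - hx_hi)                  # update the fine-grid state
```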
Abstract
To clarify the definition of the temperature toward which the soil skin temperature is restored, the prediction equations in the commonly used force–restore model for soil temperature are rederived from the heat conduction equation. The derivation leads to a deep-layer temperature, commonly denoted T2, that is defined as the soil temperature at depth πd plus a transient term, where d is the e-folding damping depth of the diurnal soil temperature oscillations. The corresponding prediction equation for T2 has the same form as the commonly used one except for an additional term involving the lapse rate of the “seasonal mean” soil temperature and the damping depth d. A term involving the same quantities also appears in the skin temperature prediction equation, which likewise includes a transient term. In the literature, T2 was initially defined as the short-term (over several days) mean of the skin temperature, but in practice it is often used as the deep-layer temperature. Such inconsistent use can lead to drift in the T2 prediction over a several-day period, as is documented in this paper. When T2 is properly defined and initialized, large drift in the T2 prediction is avoided and the surface temperature prediction is usually improved. This is confirmed by four sets of experiments, one for a period in each season of 2000, initialized using and verified against measurements from the Oklahoma Atmospheric Surface-Layer Instrumentation System (OASIS) project.
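For context, a standard rendering of the force–restore pair being rederived here (the Deardorff/Noilhan–Planton form; the paper's additional terms involving the seasonal-mean lapse rate and the transient components are omitted). The notation is a common convention, not a quotation from the paper:

\frac{\partial T_s}{\partial t} = C_T G - \frac{2\pi}{\tau}\left(T_s - T_2\right), \qquad \frac{\partial T_2}{\partial t} = \frac{1}{\tau}\left(T_s - T_2\right), \qquad d = \sqrt{2\kappa/\omega}, \quad \omega = \frac{2\pi}{\tau},

where T_s is the skin temperature, G the net surface energy flux, C_T a surface thermal coefficient, τ the diurnal period (86 400 s), κ the soil thermal diffusivity, and d the e-folding damping depth of the diurnal temperature wave.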