Search Results
atmosphere’s scale-dependence behavior appropriately, shortcomings in the numerics or parameterizations are likely. In the case of kinetic energy, the evaluation of scaling exponents has provided valuable insights into model performance (Skamarock 2004; Hamilton et al. 2008; Bierdel et al. 2012; Fang and Kuo 2015). For water vapor, Schemann et al. (2013) investigated the scaling behaviors of a GCM, an NWP model, and a large-eddy simulation (LES) and the implications for cloud parameterizations
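The scaling-exponent evaluation mentioned above amounts to fitting a power law E(k) ~ k^β to a spectrum and comparing the fitted exponent with the observed one. A minimal sketch of that fit, using a synthetic idealized −5/3 mesoscale spectrum rather than data from any of the cited studies:

```python
import numpy as np

# Hypothetical sketch: estimate a spectral scaling exponent beta from
# an energy spectrum E(k) ~ k**beta via a log-log least-squares fit.
# The spectrum below is synthetic; wavenumber range is illustrative.
k = np.logspace(0, 3, 200)        # wavenumbers
E = k ** (-5.0 / 3.0)             # idealized -5/3 kinetic energy spectrum

# polyfit in log-log space: the degree-1 coefficient is the exponent
slope, intercept = np.polyfit(np.log(k), np.log(E), 1)
print(f"estimated scaling exponent: {slope:.3f}")
```

In practice the fit would be restricted to the wavenumber range where the model is believed to resolve the flow, since numerical dissipation steepens the spectrum near the grid scale.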
GPD was fitted to the subsets of extreme events (i.e., >95th percentile) in the RG and SREs datasets. The extremes in the SREs were obtained in a similar way to the RG extremes. To make the stations and all the rainfall products comparable, we normalized the modeled return values by the RG-modeled return values at the stations and then averaged over all stations for each dataset. The normalized return values of the RG data were taken as the reference for evaluating the SREs. The performance of
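The peaks-over-threshold step described here can be sketched as follows. This is a minimal illustration with SciPy on synthetic rainfall, not the authors' code; the threshold choice, the 20-yr return period, and all variable names are assumptions:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic daily rainfall standing in for an RG/SRE record
rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.8, scale=5.0, size=10_000)

# Exceedances above the 95th-percentile threshold
u = np.quantile(rain, 0.95)
excess = rain[rain > u] - u

# Fit a generalized Pareto distribution to the excesses (location fixed at 0)
shape, loc, scale = genpareto.fit(excess, floc=0.0)

# Return value for an illustrative 20-yr return period:
# the quantile of the GPD at probability 1 - 1/(T * P[X > u])
p_exceed_u = (rain > u).mean()
T = 20 * 365.25                    # return period in days
ret = u + genpareto.ppf(1 - 1 / (T * p_exceed_u), shape, loc=0.0, scale=scale)
print(f"threshold u = {u:.2f} mm, 20-yr return value = {ret:.2f} mm")
```

Normalizing such a return value by the RG-derived value at the same station, as the text describes, then reduces each product to a dimensionless ratio that can be averaged across stations.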
boosting. However, initial tests indicated slightly worse predictive performance; we thus focus on maximum likelihood-based methods instead. To account for the intertwined choice of scoring rules for model estimation and evaluation (Gebetsberger et al. 2017), we have also evaluated the models using LogS. However, as the results are very similar to those reported here and the computation of LogS for the raw ensemble and QRF forecasts is problematic (Krüger et al. 2016), we focus on CRPS
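For a raw ensemble, the CRPS mentioned here is typically computed directly from the members via the standard sample estimator CRPS = mean|x_i − y| − ½ mean|x_i − x_j|. A small self-contained sketch (the ensemble values and function name are illustrative):

```python
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Sample CRPS of an m-member ensemble against a scalar observation:
    mean absolute error of the members minus half the mean pairwise
    absolute difference between members."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Toy wind gust ensemble (m/s) against an observed gust of 10.8 m/s
ens = np.array([9.5, 10.0, 10.4, 11.1, 12.3])
print(f"CRPS = {crps_ensemble(ens, 10.8):.3f}")
```

Unlike LogS, this estimator is well defined for a discrete ensemble, which is one reason the text can score the raw ensemble and QRF forecasts with CRPS but not LogS.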
statistical postprocessing methods, whose predictive performance is evaluated in section 4. A meteorological interpretation of what the models have learned is presented in section 5. Section 6 concludes with a discussion. R code (R Core Team 2021) with implementations of all methods is available online (https://github.com/benediktschulz/paper_pp_wind_gusts). 2. Data and notation a. Forecast and observation data Our study is based on the same dataset as Pantillon et al. (2018) and we
performance of current operational systems with respect to tropical rainfall calls for alternative approaches, ranging from convection-permitting resolution (Pante and Knippertz 2019) to methods from statistics and machine learning (Shi et al. 2015; Rasp et al. 2020; Vogel et al. 2021). Before developing and evaluating new models and approaches, it is essential to establish benchmark forecasts in order to systematically assess forecast improvement. Rasp et al. (2020) recently proposed
). Conventional observations such as surface stations and weather balloons are scarce at low latitudes, particularly over the vast tropical oceans. Consequently, the observing system is dominated by satellite data, which are heavily skewed toward measuring atmospheric mass variables rather than wind (e.g., Baker et al. 2014). However, data denial experiments for periods with a much enhanced radiosonde network during field campaigns over West Africa have shown a relatively small impact on model performance
–STV relationship over the Asian–Pacific–American region is still unclear. In addition, phase 6 of the Coupled Model Intercomparison Project (CMIP6; Eyring et al. 2016) has recently been released. Whether the models of the new generation can produce a more realistic ENSO–STV simulation than the last generation (CMIP5) also needs to be evaluated. In this study, we first aim to examine the relationship between ENSO and STV over the Asian–Pacific–American region with CMIP5/6 models in a historical simulation and
regression and the limitations of this approach. In section 4 we evaluate the performance of the models during Northern Hemisphere winter and demonstrate their applicability to an operational ECMWF ensemble forecast of a WCB event during January 2011. The study ends with concluding remarks and an outlook in section 5. 2. Data a. Predictor dataset The predictor selection as well as the development and evaluation of the logistic regression models is based on ECMWF’s interim reanalysis data (ERA
the perturbation method is applicable in any atmospheric model that allows for calculation of the relevant physical process information. The observational data used to evaluate the forecasts and the selected case studies in which the parameterization is tested will be introduced briefly, as will the analysis strategy for the suggested method. a. Physically based stochastic perturbations in the boundary layer We propose a concept of process-based model error representation in terms of a
NWP forecasts for TC activity in many ocean basins (e.g., Vitart 2009; Belanger et al. 2010; Camp et al. 2018). Several studies have systematically evaluated these models in terms of predictive skill for different TC occurrence measures (Lee et al. 2018, 2020; Gregory et al. 2019). Lee et al. (2018) found that the Subseasonal to Seasonal (S2S; Vitart et al. 2017) models generally have little to no skill in predicting TC occurrence from week 2 onward for all basins relative to