A global verification of temperature, dewpoint temperature, and wind speed for the new Nonhydrostatic Multiscale Model on the B Grid (NMMB) is computed for a 3-yr period (2010–12) using over 9000 weather stations. The raw model forecasts, as well as bias-removed MOS forecasts, are analyzed and compared to NOAA’s operational GFS. In comparison to the GFS, the NMMB forecasts of temperature, dewpoint temperature, and wind speed are about 10% better, even though the NMMB is run at much coarser resolution and does not yet have its own data assimilation system. However, as a result of several changes in the GFS during the 3-yr period, the MOS computations for GFS are not optimal. Using unbiased MOS forecasts, the global distribution of spatial predictability can be analyzed. Clear spatial patterns emerge, which are partly dependent on the variable. For temperature, the best forecasts can be made for small islands and coastlines, and a clear gradient of decreasing skill with increasing distance from the sea is visible on the continents. For wind speed, this pattern is almost reversed. Dewpoint temperature shows the largest-scale patterns, mainly controlled by the humidity of the climate. Combining temperature, wind speed, and dewpoint temperature in a gross predictability index reveals a clear large-scale pattern. Remarkably, smaller-scale features like mountain ranges are not readily apparent in the bias-free predictability pattern, indicating that the spatial pattern of the gross predictability is controlled at the very large scales.
A new forecast model, applicable for all scales ranging from global to local, has been developed at NOAA’s National Centers for Environmental Prediction (NCEP): the Nonhydrostatic Multiscale Model on the B Grid (NMMB; Janjic 2005, 2010; Janjic and Black 2007; Janjic et al. 2011; Janjic and Gall 2012). The NMMB represents the second generation of nonhydrostatic models developed at NCEP. The model formulation follows the general modeling philosophy of its predecessor, the WRF-NMM (Janjic et al. 2001, 2010; Janjic 2003, 2004, 2010). The nonhydrostatic dynamics were formulated by relaxing the hydrostatic approximation in the hydrostatic NWP formulation based on modeling principles proven in practice. These principles were applied in several generations of models preceding the WRF-NMM and NMMB (Janjic 1977, 1979, 1984a,b), and have been thoroughly tested in NWP and regional climate applications, although the specific numerical schemes employed have evolved significantly over time, and over about two orders of magnitude in resolution. The “isotropic” quadratic conservative horizontal differencing employed in the model conserves a variety of basic and derived dynamical and quadratic quantities and preserves some important properties of differential operators. Among these, the conservation of energy and enstrophy improves the accuracy of the nonlinear dynamics of the model on all scales (Arakawa 1966; Janjic 1984b). Currently, the NMMB uses the regular latitude–longitude grid for the global domain, and a rotated latitude–longitude grid in regional applications. In the vertical, the hybrid pressure–sigma coordinate has been chosen as the primary option. In the global limit, boundary conditions are specified across the pole, and the polar filter, acting on tendencies, selectively slows down those wave components of the basic dynamical variables that would otherwise propagate faster in the zonal direction than the fastest wave propagating in the meridional direction.
The number of available regional high-resolution weather forecasts is increasing. The decrease in computing cost and the development of publicly available modeling frameworks are helping this development. Weather forecasts, new developments of model physics, and parameterizations or postprocessing techniques are often evaluated using limited-area models on smaller or larger regional domains. Unfortunately, verification results are dependent on the time range over which the verification is carried out and to a large degree, as will be shown here, on the geographical region. The dependence on the time range can be reduced by considering several years, but the regional dependence remains. Here, we want to present a global verification together with the spatial pattern of global predictability. The latter can be used to see how regional studies integrate into the global picture and thus allow a better understanding of limited-area model forecast performance.
2. Methods and data
a. Observational data

In an attempt to compile a comprehensive global dataset of surface observations, data from official WMO weather stations were combined with additional observations taken by other public institutions [Swiss Federal Institute for Forest, Snow and Landscape Research (SLF); Agroscope Switzerland; and the national weather services of Germany, Spain, Switzerland, Brazil, and Australia] and data archives from the private sector (meteoblue.com and Pessl Instruments). WMO data were obtained from NCEP [NCEP Automated Data Processing (ADP) Global Surface Observational Weather Data, October 1999–continuing (dataset ds461.0 published by the Computational and Information Systems Laboratory (CISL) Data Support Section at the National Center for Atmospheric Research; available online at http://rda.ucar.edu/datasets/ds461.0/)].
In total, over 40 000 weather stations reported data; however, the majority did not fulfill the representativity constraints, leaving just over 9000 stations for verification. In order for a weather station to be considered, it had to report at least once every 3 hours, and provide data for at least 2 years during all seasons. To have spatially representative results, we use temperature, wind speed, and dewpoint temperature, which are observed at almost all stations. Unfortunately, it was not possible to compile a global dataset of hourly precipitation, as observed precipitation data are mostly accumulated for a 24-h period or not shared among different national weather services.
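The station selection described above can be sketched as a simple availability check. This is an illustrative reconstruction, not the code used in the study; in particular, the strict no-gap rule below is a simplification of the reporting-frequency requirement, and the function name is ours:

```python
from datetime import datetime, timedelta

def keep_station(report_times, min_span_days=730, max_gap_hours=3):
    """Apply simplified availability criteria to one station's report times:
    no gap longer than 3 h and at least 2 years of data."""
    times = sorted(report_times)
    if len(times) < 2:
        return False
    # Require roughly two full years of data (all seasons covered).
    if times[-1] - times[0] < timedelta(days=min_span_days):
        return False
    # Require reports at least once every 3 hours.
    max_gap = max(b - a for a, b in zip(times, times[1:]))
    return max_gap <= timedelta(hours=max_gap_hours)
```

A production version would additionally check seasonal coverage and plausibility of the reported values.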
b. Forecast data
The new NMMB was run in a global configuration at a resolution of 0.34° with 60 vertical layers. As the model is very new, it does not have its own data assimilation system yet. Therefore, the NMMB was initialized with the 0.5°-resolution GFS analyses. For a 3-yr period from 2010 to 2012, the NMMB computed a 36-h forecast starting at 0000 UTC for every single day. Note that the model is global, so no boundary conditions are needed. The relatively short forecast range was chosen as this study focuses on verifying the spatial pattern of surface parameters, which adjusts quickly to local conditions within the first forecast hours. Furthermore, the short forecast period reduces problems arising from the suboptimal initialization with GFS analyses and keeps computational costs at a reasonable level.
The global NMMB was run at a configuration closely matching the operational settings of NCEP for the regional NMMB run as the North American Mesoscale Forecast System (NAM). This configuration is also believed to perform well on the global scale, at least in relatively short runs such as those discussed here.
In addition to our own NMMB forecasts, the GFS forecasts of the first 36 h are used for comparison. NOAA’s operational GFS (Sela 1980; Yang et al. 2006) gridded forecasts at full resolution of 0.2° are taken for this purpose. The main differences in the model physics are the use of GFDL radiation, Betts–Miller–Janjic convection, and a Mellor–Yamada–Janjic 2.5-level turbulence closure in the NMMB, and RRTM radiation, simplified Arakawa–Schubert convection, and a first-order nonlocal Pan–Mahrt turbulence closure in the GFS. Unfortunately, the GFS output has a lower time resolution of only 3 h.
To allow the model to develop features at full model resolution, a spinup period of 12 h is used. The 24-h period based on forecast hours 12–35 is used to construct a continuous 3-yr time series from all daily runs. As GFS data are available only every 3 h, the forecast hours 12, 15, 18, 21, 24, 27, 30, and 33 are used in this comparison for GFS, as well as for NMMB.
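The construction of the continuous 3-hourly verification series from the daily 0000 UTC runs can be illustrated as follows (a sketch; the function names are ours, not from the study):

```python
from datetime import datetime, timedelta

def verification_times(init_time):
    """For one 0000 UTC run, return the valid times actually verified:
    forecast hours 12, 15, ..., 33 (3-hourly, to match the GFS output)."""
    return [init_time + timedelta(hours=h) for h in range(12, 34, 3)]

def stitch_series(first_init, n_days):
    """Concatenate consecutive daily runs into one continuous 3-hourly series:
    hour 33 of one run is followed by hour 12 of the next day's run."""
    series = []
    for d in range(n_days):
        series.extend(verification_times(first_init + timedelta(days=d)))
    return series
```

Because hour 33 of a run valid at 0900 UTC is followed by hour 12 of the next run valid at 1200 UTC, the stitched series has a uniform 3-h spacing.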
c. Bias removal
Raw model forecasts, especially at coarse resolution, are subject to relatively large systematic errors. These biases are caused by errors in the model itself and by poor representation of an observation location by the grid point. To compare the nonsystematic (nonlinear) part of the forecast error, we remove systematic forecast errors using model output statistics (MOS). The MOS technique (Glahn and Lowry 1972) develops a statistical relationship between observed and model forecast weather elements and applies these relationships to raw model output. A multiple linear regression is used to express the predictand $\hat{y}$ as a linear combination of predictors $x_i$:
$$\hat{y} = a + \sum_{i=1}^{k} b_i x_i,$$
where $a$ and $b_i$ are the regression constant and coefficients, respectively.
MOS equations were derived for each individual station for the NMMB as well as for the GFS model. Thus, each model has different MOS equations, but they were derived with exactly the same algorithm, allowing direct model comparison. For temporal consistency of the MOS prediction, the statistical relation was based on all seasons of the year and all forecast hours of the day. By not splitting the training set into different seasons and by not computing different MOS equations for different forecast hours, the statistical sample becomes very large and problems of statistical overfitting are reduced to a minimum. Hence, the full 3-yr time series can be used for training and verification. While this MOS performs equally well on an independent dataset, it is not quite as optimal in an RMSE sense as an MOS using separate equations for different seasons and lead times. However, we do not need the best possible forecast but, rather, a very stable and reliable bias-removal technique to study spatial predictability patterns. Predictors offered to the multiple linear regression included all the surface variables and upper-air variables up to 500 mb that were available in both models, as well as derived variables such as thicknesses and climatological predictors like sines and cosines of the day of the year, to account for seasonal biases. Each predictor was offered only at the lead time corresponding to the forecast hour. In total, around 40 predictors were offered to the forward–backward selection, and around 8–12 predictors were chosen on average. Importantly, no observations were offered as predictors, as the pure model performance is crucial for this work.
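The core of such a predictor-selection scheme can be sketched as follows. This is a simplified illustration (forward steps only, with training RMSE as the selection criterion), whereas the study used a forward–backward selection; it is not the operational MOS code:

```python
import numpy as np

def fit_mos(X, y, max_predictors=12):
    """Greedy forward selection of predictors for a MOS regression (sketch).

    X: (n_samples, n_predictors) array of model-forecast predictors.
    y: (n_samples,) array of observations.
    Returns the chosen predictor indices and the regression coefficients,
    where coef[0] is the constant a and coef[1:] are the b_i.
    """
    n, p = X.shape
    chosen, best_rmse = [], np.inf
    while len(chosen) < max_predictors:
        best_j = None
        for j in range(p):
            if j in chosen:
                continue
            # Try adding predictor j and refit the full regression.
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
            if rmse < best_rmse - 1e-9:
                best_rmse, best_j = rmse, j
        if best_j is None:  # no predictor improves the fit anymore
            break
        chosen.append(best_j)
    A = np.column_stack([np.ones(n), X[:, chosen]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return chosen, coef
```

In practice, selection would be scored on held-out data rather than the training sample, and a backward step would remove predictors that become redundant.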
d. Statistical analysis
To better understand model performance, statistics resolving the stations but integrating over time, and statistics resolving time but integrating over stations, are carried out. For all analyses, the forecast of the closest model grid point is taken, without performing horizontal interpolation to the exact location of a station. All errors are computed on the hourly or 3-hourly raw data, and no spatial or temporal averaging is done prior to the computation of errors. Thus, if $f_{t,i}$ represents a modeled temperature at time $t$ and station $i$, and $o_{t,i}$ the corresponding observation, the root-mean-square error and absolute error are computed as
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum \left( f_{t,i} - o_{t,i} \right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum \left| f_{t,i} - o_{t,i} \right|,$$
where $n$ is the number of considered data pairs and the summation is done over $t$ or $i$, respectively.
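In code, the two error measures defined above are straightforward:

```python
import math

def rmse(forecasts, observations):
    """Root-mean-square error over n forecast-observation pairs."""
    n = len(forecasts)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n)

def mae(forecasts, observations):
    """Mean absolute error over n forecast-observation pairs."""
    n = len(forecasts)
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / n
```

Depending on whether the pairs are gathered over time at one station or over stations at one time, the same functions yield the station-resolving or time-resolving statistics.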
3. Results

The following sections discuss the performance of the NMMB global model for 2-m temperature, 10-m wind speed, and 2-m dewpoint temperature, respectively. For better judgment of the forecast quality of the new NMMB, a direct comparison with the GFS is carried out. For each individual variable, we first look at the time series of forecast errors, that is, the time series of daily errors integrated over all stations for forecast hours 12–35. Second, we integrate the hourly forecast errors over the full 3-yr period, resolving every individual station. Displaying these errors on a map reveals clear spatial patterns of global predictability. As a last step, we derive an experimental multivariate global predictability pattern. The 3-yr time series has missing data in 2011, as the 0.2° GFS data could not be restored from the archive, and thus the NMMB data are also skipped for this period.
a. Temperature at 2 m
1) Temperature verification
In Fig. 1, the yearly courses of the root-mean-square and absolute temperature errors are shown. The positive impact of bias removal is clearly visible, reducing the RMSE by around 1 K and the absolute error by approximately 0.8 K. This improvement is fairly constant throughout the entire time series, resulting in a seasonal error pattern equal to that of the raw model forecasts. The seasonal pattern is governed by the Northern Hemisphere, which contains the majority of observational sites. A fairly constant performance is achieved during the summer, while larger and more variable errors arise in winter. A closer look at the raw forecast errors reveals a superior performance by the global NMMB during the summer and about equal skill in winter months. However, the bias-removed MOS forecast of the NMMB is also superior in wintertime.
Examining the mean 2-m temperature error (Fig. 2), we notice a cold bias of the GFS raw forecasts of approximately 0.5 K, which reaches almost 1 K in winter and spring of the years 2010 and 2011. In 2012, we see a reduction in the mean error in GFS caused by model improvements. Note that the GFS forecasts evaluated here are taken from the operational archive and thus are not reruns using a frozen model, as was the case for the NMMB. The most significant changes to GFS were performed in May 2011, when a new thermal roughness length reduced the land surface skin temperature cold bias and the low-level summer warm bias over arid land areas. Furthermore, in September 2012, a lookup table used in the land surface scheme to control minimum canopy resistance and root depth number was updated to reduce excessive evaporation. This update was aimed at mitigating GFS cold and moist biases found in the late afternoon over the central United States when drought conditions existed.
Similar to GFS, the NMMB raw forecast does have a cold bias throughout the year, but it is significantly smaller (0.2 K) and without a clear seasonal pattern. The MOS forecasts of both models are virtually unbiased.
To clearly position the performance of the global NMMB in relation to the GFS, we consider Fig. 3, where the error of every one of the 9000 stations is shown. Note that for each curve the errors are ordered according to size, so the stations do not correspond between curves, and the error at any given station cannot be compared between models. However, we clearly notice smaller root-mean-square and absolute errors in the NMMB results in comparison to those of the GFS. Interestingly, the difference between the NMMB and GFS even increases if systematic biases are removed. This might, however, be caused by changes made to the operational GFS model, reducing the accuracy of the MOS. The good performance of the NMMB is even more surprising considering that the GFS is run at a significantly higher resolution of 0.2°, as opposed to 0.34° in the case of the NMMB. As shown by Müller (2011), resolution has a significant impact on the forecast errors of temperature, dewpoint temperature, and wind speed, as long as the resolution is coarser than 0.1°. Thus, we could expect even better results from the NMMB if the model were run at the resolution of the GFS. Furthermore, Müller (2011) showed that, for example, 3- and 12-km-resolution models have differences in their raw forecasts, but achieve equal skill after bias removal for the above-mentioned parameters. It is thus a very good sign if skill differences persist or even amplify after bias removal using MOS. Finally, we have to consider the fact that the NMMB does not have its own data assimilation and, hence, is initialized with GFS analyses, which are not in proper balance with the NMMB physics. The results presented in this study are thus expected to represent a worst case of global NMMB performance.
2) Global patterns of temperature errors and predictability
We will first examine model characteristics by looking at the absolute and mean error patterns of the global NMMB raw forecasts and then use the unbiased MOS forecasts for the predictability pattern. The latter emerges in very similar form from the GFS and the NMMB. Thus, we only present maps based on NMMB data. Figure 4 shows the absolute error of the raw shelter temperature forecast. A couple of distinct patterns can be identified. The smallest errors are found on small islands, where temperature is basically without daily variation and almost entirely regulated by the sea surface temperature. The smallest errors on the mainland are found in the United Kingdom and in the northern parts of France and Germany up to Denmark. Large errors are found along the U.S. coastlines, especially along the Pacific and in the Gulf of Mexico. On the mainland, the more continental climates of eastern Russia, Alaska, and northern Canada are most difficult to predict. Not surprisingly, high mountain ranges such as the Himalayas, the Rocky Mountains, the Alps, and the central Andes are also subject to larger errors.
Patterns of the mean error (Fig. 5) in the raw forecasts are less pronounced. Cold biases are found around the coastlines of Greenland, in southern Alaska, over the eastern half of the Black Sea, in Norway, and in Iceland. A trend for underestimation is found in Central America, the eastern parts of Brazil, Indonesia, and the Philippines. Cold biases are also observed in the central Andes and in the Himalayas. The largest warm biases are found in eastern Russia, along the Persian Gulf, on the southern border of the Sahara, and along the Chilean coast. Figure 6 shows the absolute error of the MOS forecasts as a surrogate of global temperature predictability. Note that the color scale has been rescaled from what was used in Fig. 4; because of the large improvements made by MOS, the map would look entirely blue otherwise. In comparison to the raw forecast, the global pattern is much smoother and more clearly defined. Errors of representativeness, mainly caused by the coarse resolution, as well as other model-specific systematic errors, have disappeared. Over the entire globe, the forecasts for smaller islands and all ice-free coastlines have the highest predictability, as the temperature is influenced by the ocean. This is quite visible along the U.S. coast, where the absolute error was largest, but predictability is highest. In addition, clear gradients of decreasing predictability with increasing distance from the sea are present globally. In some regions this gradient is large, as along the North American west coast because of the presence of complex topography. In other regions of simpler terrain, such as in central Europe or Brazil, the gradient is weaker and higher predictability reaches farther inland. Alaska, northeastern Canada, and the central part of the western United States, as well as eastern Russia, have the lowest values of predictability. The more continentally influenced northern part of Sweden is another region of lower predictability. 
Interestingly, mountain ranges, which were clearly visible in the raw model forecast, are much less evident and have larger-scale predictability values similar to those of their surroundings.
b. Wind speed at 10 m
1) Wind speed verification
Figure 7 shows the root-mean-square and absolute errors of the GFS and NMMB wind speed forecasts at 10 m above ground. The NMMB raw forecasts are subject to a large seasonal trend, with errors smaller than those of the GFS in the Northern Hemispheric summer, but much larger errors in winter. The wintertime deficiency of the NMMB is, however, of a systematic nature, as it can be removed with MOS postprocessing. In contrast to the temperature, there is no noticeable improvement in the GFS performance in 2012. According to the yearly course of the mean wind speed error (Fig. 8), the wintertime skill decrease is caused by a positive bias of the NMMB, which does not exist in summer months. After the completion of the 3-yr runs, we noticed that, inadvertently, the NMMB code included a technique aimed at adjusting the predicted gridbox values of wind speed to observations. We suspect that inadequate application of this technique caused the larger NMMB errors in winter. Note that no such adjustment was done for temperature and dewpoint temperature. However, again it can be seen that the postprocessed MOS forecasts are virtually bias free, which will be important for the analysis of predictability.
Looking at the station plot (Fig. 9), we clearly see the better performance of the NMMB in a nonlinear sense in the MOS forecasts, but also the deficiency in the raw forecasts. As the wind speed at 10-m height is not a true prognostic variable, but is diagnosed from the wind on the first model layer using stability (Monin–Obukhov similarity), the large systematic error, which can be removed, indicates that the derivation of this variable needs better tuning. As for temperature, the postprocessed NMMB MOS forecasts outperform the GFS, despite the lack of a data assimilation system and the much coarser resolution.
2) Global patterns of 10-m wind speed errors and predictability
Figure 10 shows the absolute error of the raw model wind speed forecasts. Similar to temperature, the North American coastlines have large errors. Inland, the largest errors are found in the western half of Russia and Scandinavia. The smallest errors are found throughout the tropics, even though the pattern is a bit noisy. Other patches of small errors are seen in the southern United States, France, Spain, Australia, and South Africa.
Wind speed has a positive bias in the eastern United States, Canada, northern Europe, and Russia (Fig. 11). Negative biases are found along North American and European coastlines, in the Mediterranean, and, in a noisy pattern, at most locations in the Southern Hemisphere.
In the bias-removed predictability image for wind speed (Fig. 12), we again have a much smoother picture with well-defined features. In contrast to temperature, the wind speed is very difficult to predict on islands and along coastlines. In fact, most patterns seem to be opposite to those of temperature. Predictability increases with distance from the sea, as can be seen, for example, in Europe, the eastern United States, or Australia. Rather difficult to predict are latitudes south of 30°S and the more continental climate of the western United States. Tropical climates, if not too exposed to the sea, have very high predictability, as do Russia and the continental part of Scandinavia.
c. Shelter dewpoint temperature
1) Dewpoint verification
The yearly course of the 2-m dewpoint temperature errors (Fig. 13) shows a seasonal pattern with the largest errors in the Northern Hemispheric winter. Whereas the GFS and NMMB have about the same error in wintertime, the NMMB significantly outperforms the GFS from late spring to fall. In contrast to wind speed, this seasonal pattern is only partly systematic and also has a nonlinear component, which cannot be removed by MOS postprocessing. The seasonal trend is equal in both models after postprocessing.
The seasonal trend is readily apparent in the mean error of the raw forecasts (Fig. 14). The amplitudes of both models are reduced in the second half of the time series. This is related to an improvement in data assimilation, as it is also seen in the NMMB, which was unmodified during the entire time period. In the first 18 months, the GFS amplitude was considerably larger than the NMMB amplitude. In the last 18 months, the amplitudes are about equal but are shifted relative to each other: the NMMB is too moist and the GFS too dry. After MOS postprocessing, the biases are almost entirely removed.
The error distribution among stations looks similar to that of temperature (Fig. 15). The NMMB has smaller errors than GFS in the raw forecasts, as well as after bias removal with MOS postprocessing.
2) Global patterns of dewpoint errors and predictability
The spatial distribution of the absolute error of dewpoint temperature looks almost identical to the unbiased predictability pattern (Fig. 17, described in greater detail below) and hence is not shown. The NMMB raw forecasts have a wet bias in the central United States, in Russia, in Mongolia, and in central Europe, as well as in drier climates across the Southern Hemisphere (Fig. 16). A dry bias is found in the tropics and in the zone of the Sahara as far east as India. The global predictability distribution of dewpoint temperature in Fig. 17 shows the clearest and largest-scale pattern of all the variables considered. In North America and Europe, the pattern is quite similar to that for temperature. Again, features like mountain ranges are no longer as distinct. In summary, the dewpoint predictability increases with the increasing wetness of the climate, and this seems to be true for the entire planet. It has to be noted that, again, the more continental western United States belongs to the group of the most difficult areas to predict, whereas the United Kingdom is the easiest to predict outside the tropics.
d. Multivariate global predictability
By considering temperature, wind speed, and dewpoint temperature with equal weight, a multivariate predictability index is derived. The absolute errors of the MOS forecasts are scaled linearly between 0 and 1. The scaling bounds for each variable are set, such that the 10% with the largest error, as well as the 10% with the smallest error, have index values of 0 and 1, respectively. Note that this combination of three different variables is somewhat arbitrary and experimental, but it can provide a more integrative view on the spatial pattern of forecast quality. The results can be seen in Fig. 18. Tropical climates in Southeast Asia, southern India, and Brazil, together with the strongly cyclonically influenced midlatitude region of the United Kingdom, as well as the northern parts of France and Germany up to Denmark, have the highest predictability index. The lowest values and thus the most inaccurate weather forecasts can be expected in the northern parts of Canada, the more continental part of the western United States, Argentina, and Mongolia. Remarkably, this pattern is very clear, having virtually no outliers disturbing the pattern. Furthermore, smaller-scale features like mountain ranges are not readily apparent. This indicates that the spatial pattern of gross predictability is controlled at the very large scales.
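The index construction described above can be sketched as follows. This is our reconstruction of the scaling (percentile bounds at 10% and 90%, linear scaling with clipping, equal-weight average); details of the original implementation may differ:

```python
import numpy as np

def predictability_index(temp_err, wind_err, dewp_err):
    """Combine per-station MOS absolute errors into a 0-1 predictability index.

    For each variable, errors are scaled linearly between the 10th and 90th
    percentiles, so the best 10% of stations map to 1 and the worst 10% map
    to 0; the three scaled values are then averaged with equal weight.
    """
    def scale(err):
        err = np.asarray(err, dtype=float)
        lo, hi = np.percentile(err, 10), np.percentile(err, 90)
        scaled = (hi - err) / (hi - lo)  # small error -> high index
        return np.clip(scaled, 0.0, 1.0)

    return (scale(temp_err) + scale(wind_err) + scale(dewp_err)) / 3.0
```

The sketch assumes the errors of each variable actually vary across stations (so the percentile bounds differ); a degenerate, constant error field would require a guard against division by zero.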
4. Conclusions

A verification of shelter temperature and dewpoint temperature, along with 10-m wind speed, for the new NMMB global model was carried out. For a 3-yr period (2010–12), a database with over 9000 weather stations reporting data at least every 3 h was compiled to cover the entire globe. The raw model forecasts, as well as bias-removed MOS forecasts, were analyzed and compared to NOAA’s operational GFS model. For the raw model temperature, the absolute and root-mean-square errors are around 2.2 and 3.2 K, respectively. Using MOS postprocessing, these errors can be significantly reduced, to 1.5 K for the absolute error and 2 K for the RMSE. The errors for the NMMB MOS and raw forecasts are about 10% smaller than those for the GFS. However, because of several changes made to the GFS during the 3-yr period, the MOS computations for GFS are not optimal. A yearly course in root-mean-square and absolute errors of about 0.5 K is seen in both models, with the worst model performance in the Northern Hemispheric winter and the smallest errors in late summer. The models have small cold biases: approximately 0.5 K for the GFS and 0.2 K for the NMMB. The MOS forecasts are virtually unbiased.
The GFS wind speed at 10-m height has an RMSE of 2.5 m s−1 and an absolute error of around 1.8 m s−1. The raw NMMB forecasts improve slightly on the GFS during the summer, but have larger errors (by about 1 m s−1 in RMSE and 0.5 m s−1 in absolute error) in the Northern Hemispheric winter. The NMMB overestimates the wind speed by about 1.5 m s−1 in winter, but this error is purely systematic, as it is completely removed by the MOS postprocessing. This indicates that the parameterization used to diagnose the 10-m wind speed in the NMMB should be improved.
Predicting the dewpoint temperature is much more difficult than predicting the air temperature. The yearly course of the RMSE peaks in the Northern Hemispheric winter at over 4 K for both models. In summer, the errors decrease to 3 K (GFS) and 2.5 K (NMMB). The absolute errors are about 1 K smaller than the RMSE. The MOS postprocessing proves to be very effective, reducing the RMSE to between 2 and 3 K, and the absolute error to between 1.4 and 2 K over the year. Both models have about the same amplitude (1.5 K) in the yearly course of the mean error. However, the amplitude is vertically shifted between the two models: the NMMB has no bias in the summer and is too wet in the winter, while the GFS has a very weak wet bias in the late winter and a strong dry bias during the summer and early winter. For both the raw and MOS forecasts, the NMMB reduces the GFS errors by about 10%.
The performance of NMMB is remarkable, since the model is run at much coarser resolution than the operational GFS. Note that as long as the resolution is coarser than about 10 km, which is the case here, an increase in resolution improves the forecast quality, as shown in previous work by the author (Müller 2011). In addition, the NMMB does not have a data assimilation system yet; instead, it was initialized with GFS data, which is physically not well balanced for the dynamics and physics used in the NMMB. Hence, the current results represent the lower bound of the global NMMB’s forecast skill.
Using the unbiased MOS forecasts, a global pattern of spatial predictability can be generated. Clear spatial patterns emerge, which are partly different depending on the variable. For temperature, the best forecasts can be made for small islands and coastlines, and a clear gradient of decreasing skill with increasing distance from the sea is visible over the continents. For the wind speed, this pattern is almost reversed. The dewpoint temperature shows the largest-scale patterns, mainly controlled by the humidity of the climate. Combining the temperature, wind speed, and dewpoint temperature in a gross predictability index reveals a clear and large-scale pattern. The tropical climates in Southeast Asia, southern India, and Brazil, together with the strongly cyclonically influenced midlatitude region of the United Kingdom, as well as the northern parts of France and Germany up to Denmark, have the highest predictability index. The lowest values and thus the most inaccurate weather forecasts can be expected in the northern parts of Canada, the more continental part of the western United States, Argentina, and Mongolia. Remarkably, smaller-scale features like mountain ranges are not readily apparent. This indicates that the spatial pattern of gross predictability is controlled at the very large scales.
Acknowledgments

We would like to express our gratitude to SLF Switzerland; Agroscope Switzerland; the national weather services of Germany, Spain, Switzerland, Brazil, and Australia; meteoblue.com; and Pessl Instruments for providing observational data. Furthermore, we would like to express our gratitude to Glenn H. White from NOAA and another anonymous reviewer for their valuable input to improve the manuscript.