Comparison of Linear Predictability of Surface Wind Components from Observations with Simulations from RCMs and Reanalysis

Yiwen Mao School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada

and
Adam Monahan School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada



Abstract

This study compares the predictability of surface wind components by linear statistical downscaling using data from both observations and comprehensive models [regional climate models (RCMs) and NCEP-2 reanalysis] in three domains: North America (NAM), Europe–Mediterranean Basin (EMB), and East Asia (EAS). A particular emphasis is placed on predictive anisotropy, a phenomenon referring to unequal predictability of surface wind components in different directions. Simulated predictability by comprehensive models is generally close to that found in observations in flat regions of NAM and EMB, but it is overestimated relative to observations in mountainous terrain. Simulated predictability in EAS shows different structures. In particular, there are regions in EAS where predictability simulated by RCMs is lower than that in observations. Overestimation of predictability by comprehensive models tends to occur in regions of low predictability in observations and can be attributed to small-scale physical processes not resolved by comprehensive models. An idealized mathematical model is used to characterize the predictability of wind components. It is found that the signal strength along the direction of minimum predictability is the dominant control on the strength of predictive anisotropy. The biases in the model representation of the statistical relationship between free-tropospheric circulation and surface winds are interpreted in terms of inadequate simulation of small-scale processes in regional and global models, and the primary cause of predictive anisotropy is attributed to such small-scale processes.

Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JAMC-D-17-0283.s1.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Yiwen Mao, ymaopanda@gmail.com


1. Introduction

Surface winds are a climatic field of importance in economic and societal sectors including air quality, agriculture, and transport. Global climate models (GCMs) can effectively model large-scale processes (e.g., from synoptic to planetary scales). However, their coarse resolution (typically on the order of 100 km; Church et al. 2013) limits their skill in modeling surface winds as they do not resolve the smaller microscale to mesoscale processes that also influence surface winds. One approach to predicting surface winds is through statistical downscaling (SD) in which a transfer function (TF) is built based on statistical relationships between station-based surface data and large-scale climate fields in the free troposphere. This study focuses on statistical downscaling of surface wind components because, besides wind speed, the direction of wind is also important. For example, studying the transport of airborne substances requires knowledge of the wind vector.

A few previous studies (van der Kamp et al. 2012; Monahan 2012; Culver and Monahan 2013; Sun and Monahan 2013) have shown that SD predictions of wind components are generally better than those of wind speed. These studies have also shown that the predictability of surface wind components by SD with linear TFs is often characterized by predictive anisotropy (i.e., variation of predictability of surface wind components with directions of projection). Salameh et al. (2009) found that only one of the zonal u and meridional υ wind components at stations located in valleys of the French Alps was predicted well using statistical prediction with a generalized additive model as the transfer function. Other studies used linear SD to predict surface wind components projected onto compass directions from 0° to 360° at 10° intervals. For example, van der Kamp et al. (2012) and Culver and Monahan (2013) applied linear SD to predict surface wind components in western and central Canada; Monahan (2012) and Sun and Monahan (2013) studied the prediction of sea surface winds by linear SD. These studies found that the predictability of surface wind components by linear SD generally varies with directions of projection in the regions they considered. Mao and Monahan (2017) further investigated the predictability of surface wind components by linear SD at a large number of land stations across the globe and found that predictive anisotropy is a common feature. Mao and Monahan (2018) showed that predictive anisotropy is not an artifact of the use of linear SD but is also found using nonlinear regression models.

In general, previous studies have shown that the best or worst predicted wind component is not always the conventional zonal or meridional wind component, and knowledge of the predictability of these two components alone is not sufficient to assess the potential utility of statistical downscaling at a station. It is necessary to know the predictability of surface wind components projected onto directions from 0° to 180° (as the projection along θ is the negative of that along 180° + θ). A question of interest is what limits predictability by SD along certain directions of projections. Salameh et al. (2009) attributed the unequal predictability of zonal and meridional wind components at the location they considered to the orientation of the mountain valley, as the across-valley wind component is characterized by local variability unexplained by large-scale climate fields. Van der Kamp et al. (2012) and Culver and Monahan (2013) found similar topographically oriented predictive anisotropy in the western cordillera of North America, and Mao and Monahan (2017) showed that low predictability and strong predictive anisotropy tend to be associated with regions marked by surface heterogeneity, such as mountainous and coastal regions. These previous studies suggest that topography plays a role in limiting predictability of surface wind components along certain directions. However, topography does not seem to be the only factor determining predictive anisotropy of surface wind components, as some locations are found with maximum predictability aligned across valley rather than along valley (van der Kamp et al. 2012; Culver and Monahan 2013), and evident predictive anisotropy can also occur in regions with relatively flat terrain, as shown in Mao and Monahan (2017), and over the oceans (Monahan 2012; Sun and Monahan 2013). Mao and Monahan (2017) showed that the surface wind components of highest predictability tend to be those of highest variability and with distributions closest to Gaussian and that poor predictability is generally associated with wind components characterized by relatively weak variability and non-Gaussian distribution. It appears that no single factor determines predictive anisotropy.

The goal of this study is to further investigate factors determining predictive anisotropy in order to further develop insight regarding the relationship between large-scale free-tropospheric flow and surface winds as a basis for improving physically based prediction methods, such as regional climate models (RCMs). To this end, we compare how well the predictability of surface wind components can be simulated by a range of different physically based comprehensive models (i.e., different regional climate models and a global reanalysis) in three regions: North America (NAM), Europe–Mediterranean Basin (EMB), and East Asia (EAS), with an emphasis on predictive anisotropy. RCMs are a form of dynamical downscaling in which physical processes are simulated at finer scales than by GCMs. Dynamical downscaling is an alternative to statistical downscaling, and one of the motivations of this study is to assess how well RCMs can represent the observed characteristics of the relationship between free-tropospheric flow and surface winds. In this regard, the accuracy of simulated predictability metrics, such as predictive anisotropy, can be used as an indication of how well RCMs can model physical processes related to the relationship between the large-scale free-tropospheric flow and surface winds. In this way, simulations by RCMs can be used to provide some understanding of the origin of predictive anisotropy. Determining the circumstances in which comprehensive models can or cannot reproduce this statistical relationship provides further understanding of its physical controls.

The simulation accuracy of an RCM depends on a number of factors, including the accurate representation of boundary conditions, the accuracy of driving data, the size of the domain, and the proper parameterization of physical processes (Rummukainen 2010). We only consider simulations from RCMs driven by observationally constrained reanalysis boundary conditions. Although local-scale dynamics near the land surface generally cannot be modeled with good skill by reanalysis (He et al. 2010), large-scale features in the free troposphere are well represented by reanalysis products. Therefore, free-tropospheric climate fields from reanalysis are generally considered reliable boundary conditions (although they do have limitations due to observational constraints and the accuracy of the assimilation models; e.g., Parker 2016). By using reanalysis-driven RCMs, we can focus the discussion of the simulation of predictive anisotropy on physical processes described by regional models rather than considering the potential systematic bias inherent in the driving GCMs.

Neither RCMs nor reanalyses can be regarded as perfect representations of point observations. The difference between point measurements (as in observations) and averages over the scale of a grid box (as in models) is another reason for differences between simulated and observed statistical relationships between free-tropospheric flow and surface winds. Our analysis is not able to distinguish between model biases and the difference between point and spatially averaged quantities. Irrespective of the source of difference, this characterization is useful from the perspective of determining the utility of the RCMs as tools for dynamical downscaling.

We also further elaborate a mathematical model of directional predictability introduced in Mao and Monahan (2017) based on an idealized partitioning of surface wind components into large-scale “signal” and small-scale “noise.” The idealized model provides a conceptually organizing perspective on the controls of the statistical predictability of surface wind components. We only consider linear statistical prediction in this study because the results of Mao and Monahan (2018) show that the predictive skill resulting from nonlinear regression–based TFs is not very different from linear TFs. Finally, while Mao and Monahan (2018) considered statistical prediction of both daily and monthly averaged surface winds, we focus on daily averaged data quantities in this study, as the time period of the RCM simulations considered in this study may not be long enough to give robust results of statistical prediction using monthly averaged data.

This paper is organized as follows: Section 2 presents the data considered and methods used in the comparison of observed and modeled features of statistical prediction. Section 3 introduces the idealized mathematical model used as a conceptual framework for understanding controls on surface predictability. Section 4 compares the structures of statistical prediction in observations and comprehensive models. Inferences based on this comparison are discussed in section 5, and conclusions are given in section 6.

2. Data and methods

Mao and Monahan (2017) studied the characteristics of the linear predictability of surface wind components at 2109 land stations across the globe, most of which are concentrated in the middle latitudes of the Northern Hemisphere. In this study, we consider statistical predictions of surface wind components at a subset of these stations consisting of 557 stations in NAM, 595 stations in EMB, and 715 stations in EAS. These regions are chosen based on the availability of RCM simulations and because they have higher station densities than other areas of the Northern Hemisphere (Fig. 1). To assess the connection between surface heterogeneity and characteristics of predictability of surface wind components, we classify these stations according to two categories: 1) whether the station is in a mountainous region or in flat terrain (denoted "Mt" or "plain") and 2) whether the station is adjacent to water or inland (denoted "coast" or "land"). These two categories result in the four groups of stations illustrated in Fig. 1. The classification of station locations as mountain or plain is based on the maximum elevation within 0.2° of the station location: if the maximum elevation exceeds 1000 m, the station is classified as a mountain station; otherwise, it is a plain station. The radius of 0.2° is chosen to ensure that the classification is based on local terrain. The elevation data used for the classification are 1 arc-min global relief data from the ETOPO1 Global Relief Model (Amante and Eakins 2009; downloaded from https://www.ngdc.noaa.gov/mgg/global/global.html). Coastal stations are classified using the coastline data provided by the Mapping Toolbox of MATLAB (MathWorks 2016): if a station lies within 30 km of the nearest coastline, it is classified as coastal. In such locations, the surface winds are likely to be influenced by the land–water contrast, since sea breezes commonly extend inland as far as 30 km (Oke 2002).
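As a concrete illustration of this two-way classification, the short Python sketch below encodes the thresholds quoted above (1000 m within 0.2° for mountain stations; 30 km to the nearest coastline for coastal stations). It is schematic rather than the study's actual code: the input elevation maxima and coastline distances are hypothetical values, and in practice they would be computed from the ETOPO1 relief data and a coastline dataset.

```python
# Schematic station classification; inputs are hypothetical precomputed values.
def classify_terrain(max_elev_within_0p2deg_m):
    """Mountain ("Mt") if the maximum elevation within 0.2 deg of the station exceeds 1000 m."""
    return "Mt" if max_elev_within_0p2deg_m > 1000.0 else "plain"

def classify_coast(distance_to_coast_km):
    """Coastal if within 30 km of the nearest coastline (typical inland reach of sea breezes)."""
    return "coast" if distance_to_coast_km <= 30.0 else "land"

# Example with hypothetical station values: (max elevation in m, distance to coast in km).
stations = {"A": (1450.0, 80.0), "B": (220.0, 12.0)}
for name, (elev, dist) in stations.items():
    print(name, classify_terrain(elev), classify_coast(dist))
```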

Fig. 1. The locations of the 2109 land stations used for statistical prediction of surface winds. The domains of NAM (557 stations), EMB (595 stations), and EAS (715 stations) are outlined. Stations in the three domains are classified into four groups according to local topography and proximity to water.

The RCM simulations in this study are chosen according to available time coverage and resolution from existing runs in the Coordinated Regional Climate Downscaling Experiment (CORDEX) project as well as the North American Regional Climate Change Assessment Program (NARCCAP). One of the purposes of the CORDEX project is to provide a quality-controlled dataset of downscaled information as a model evaluation framework (Giorgi et al. 2009). The NARCCAP project is currently the most comprehensive regional climate-modeling project for climate change impact studies in North America (Mearns et al. 2009). Longer-duration simulations can lead to more robust statistical results, and higher resolution can potentially contribute to modeling the local-scale physical processes with higher accuracy. All RCM simulations used in this study are at least 19 years long. The horizontal resolution of all RCMs in this study is 50 km or finer. Table 1 summarizes the basic information for all datasets used in this study. More detailed information specific to the RCMs is shown in Table 2.

Table 1. Summary information for each data type considered in this study.

Table 2. Details of RCMs considered in this study.

a. Predictands and predictors

The statistical predictands in this study are surface wind components (from observations, a reanalysis product, and RCM simulations) projected onto directions varying from 0° to 360° at a 10° interval. The wind component projected onto direction θ is expressed as
$$u(\theta) = u\sin\theta + \upsilon\cos\theta, \qquad (1)$$
where u and υ are the zonal and meridional wind components, respectively, and θ is measured clockwise from north. There are a total of 36 surface wind components at each location of interest, but only 18 of these are distinct because u(θ + 180°) = −u(θ).
For observation-based prediction analysis, zonal and meridional wind components are derived from the original wind speed w and direction ϕ, measured hourly at 10 m above the ground during a 2-min period ending at the beginning of the hour:
$$u = -w\sin\phi, \qquad (2)$$
$$\upsilon = -w\cos\phi. \qquad (3)$$
Wind directions in Eqs. (2) and (3) refer to the direction the wind comes from, measured clockwise from north. Observational data of w and ϕ from global weather stations, from 1 January 1980 to 31 December 2012, are obtained using the WeatherData function of Mathematica 9.0 (Wolfram 2016). This dataset includes a wide range of data sources. Chief among them are the National Weather Service of the National Oceanic and Atmospheric Administration (NOAA), the U.S. National Climatic Data Center [NCDC; now the National Centers for Environmental Information (NCEI)], and the Citizen Weather Observer Program. All stations used in this study have network membership in the NCDC, and around 85% of the stations also belong to the climate observation network of the World Meteorological Organization. Only stations with fewer than 10% missing data for the period under consideration are considered. For prediction using output from RCMs, u and υ are the model output fields of eastward near-surface wind and northward near-surface wind, respectively. For prediction using reanalysis surface winds, u and υ are the eastward and northward winds at 10 m from the NCEP-2 reanalysis (Kanamitsu et al. 2002). A range of different reanalysis products exist. We choose to analyze the somewhat older NCEP-2 reanalysis rather than a more recent product because it (or a comparable product) is used to drive the RCM simulations being considered.
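The conversion and projection steps of Eqs. (1)–(3) can be summarized in a few lines; the sketch below is a minimal illustration with made-up sample values, and the array names are assumptions rather than the study's code.

```python
# Minimal sketch of Eqs. (1)-(3): speed/direction to (u, v), then projection onto bearings.
import numpy as np

def uv_from_speed_dir(w, phi_deg):
    phi = np.deg2rad(phi_deg)          # meteorological direction: where the wind comes from
    u = -w * np.sin(phi)               # Eq. (2): zonal (eastward) component
    v = -w * np.cos(phi)               # Eq. (3): meridional (northward) component
    return u, v

def project(u, v, theta_deg):
    """Eq. (1): component of (u, v) along bearing theta (clockwise from north)."""
    theta = np.deg2rad(theta_deg)
    return u * np.sin(theta) + v * np.cos(theta)

u, v = uv_from_speed_dir(np.array([5.0, 3.0]), np.array([270.0, 180.0]))
theta = np.arange(0, 360, 10)                          # 36 directions, 18 distinct
components = project(u[:, None], v[:, None], theta[None, :])
print(components.shape)                                 # (2 observations, 36 directions)
```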
To compare prediction results between observations and model products, we wish to obtain values of model fields at locations nearest to the observational stations considered in this study. Since the location of an observational station generally will not correspond exactly to a point on the RCM or reanalysis grids, we need to estimate values of u and υ at each station location from the model output. Two methods of estimating u and υ at station locations from gridpoint values are considered. The first simply takes u and υ at the grid point nearest to the station. The second is based on inverse distance weighting. Specifically, a group of neighboring grid points surrounding a station is identified, and the representation of u and υ at the station is the weighted average of u and υ at these grid points:
$$x = \frac{\sum_{k=1}^{N} w_k x_k}{\sum_{k=1}^{N} w_k}, \qquad (4)$$
where x stands for u or υ, $x_k$ is the value at neighboring grid point k, N is the number of neighboring grid points, and the weight $w_k$ is the inverse square of the distance between grid point k and the station location. In this study, we use weighted averages with a fixed number of neighboring grid points, chosen on the basis of empirical tests showing no evident difference in the values of u and υ obtained from Eq. (4) over the range of N considered. Moreover, the difference in results between the first and second representations is small and does not change the final results (not shown). Note that both representations still potentially suffer from biases resulting from the fact that gridpoint values represent spatial averages on the order of the grid resolution, while station data are point measurements. In particular, the model fields will not include local variability on scales smaller than the grid resolution. Besides the problem of grid resolution, variability of surface winds caused by changes of the local environment over time (e.g., vegetation growth) is not accounted for by comprehensive models.
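The inverse distance weighting of Eq. (4) might be implemented as in the following sketch. The number of neighbors (set here to 4), the use of planar rather than great-circle distances, and the function name are illustrative assumptions.

```python
# Sketch of Eq. (4): inverse-distance-weighted estimate of a model field at a station location.
import numpy as np

def idw_estimate(station_xy, grid_xy, grid_values, n_neighbors=4):
    """Weighted average of the n nearest grid values with weights 1/distance^2."""
    d = np.linalg.norm(grid_xy - station_xy, axis=1)
    nearest = np.argsort(d)[:n_neighbors]
    w = 1.0 / d[nearest] ** 2
    return np.sum(w * grid_values[nearest]) / np.sum(w)

grid_xy = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])   # hypothetical grid
u_grid = np.array([2.1, 2.4, 1.9, 2.2])                                 # hypothetical u values
print(idw_estimate(np.array([0.2, 0.1]), grid_xy, u_grid))
```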

The predictors in the study consist of free-tropospheric meteorological fields: temperature T, geopotential height Z, zonal wind U, and meridional wind V at 500 hPa. Following the approach used in Mao and Monahan (2017), the four predictor fields are chosen from a domain of 40° × 40° centered on each station. Previous studies (Monahan 2012; Culver and Monahan 2013) have shown that the correlation structures between surface wind vectors and large-scale free-tropospheric climate variables are often spread across a large area surrounding a station, such that the grid points with high correlation aloft are often not directly above the surface station.

For the prediction of observational data and reanalysis surface fields, the four predictor fields are from the NCEP-2 reanalysis product. Previous studies have shown that the difference among different reanalyses is generally not substantial for the large-scale, free-tropospheric flow (e.g., Culver and Monahan 2013). Since the resolution at which tropospheric variables are available from the NCEP-2 reanalysis is 2.5° × 2.5°, each predictor domain contains 256 grid points. For prediction of RCM surface fields, the four predictor fields are taken from the output of the corresponding RCMs in a domain of 40° × 40° centered on the station. Since the resolution of RCMs is generally much finer than that of the NCEP-2 reanalysis, we subsample the RCM fields to keep those points that are closest to each of the 256 grid points in the domain of the NCEP-2 reanalysis.
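A simplified sketch of this subsampling step is shown below; the idealized regular longitude–latitude grids and the planar nearest-neighbor search are assumptions made only for illustration.

```python
# Sketch: for each of the 256 NCEP-2 predictor grid points, keep the nearest RCM grid point.
import numpy as np

def nearest_indices(coarse_lonlat, fine_lonlat):
    """Index into the fine (RCM) grid of the point nearest each coarse (NCEP-2) point."""
    d = np.linalg.norm(coarse_lonlat[:, None, :] - fine_lonlat[None, :, :], axis=-1)
    return d.argmin(axis=1)

coarse = np.stack(np.meshgrid(np.arange(0, 40, 2.5), np.arange(0, 40, 2.5)), -1).reshape(-1, 2)
fine = np.stack(np.meshgrid(np.arange(0, 40, 0.5), np.arange(0, 40, 0.5)), -1).reshape(-1, 2)
idx = nearest_indices(coarse, fine)
print(coarse.shape[0], idx.shape)       # 256 coarse points, one RCM index per point
```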

b. Prediction of surface wind components

The time period of the observations and the reanalysis product is from 1 January 1980 to 31 December 2012. The available time period for RCMs is generally shorter: approximately 20 years for most of the RCMs used in this study (Table 1). We divide data into seasons of June–August (JJA) and December–February (DJF). The minimum sample size for multiple linear regression based on a comprehensive study by Green (1991) is $N \geq 50 + 8m$ (where m is the number of predictors). Accordingly, we should have at least 82 data points for each regression model, as m = 4 in this study. While the sample size of daily averaged RCM output for a given season with 20 years of data is much larger than the threshold of 82 data points (approximately 90 × 20 = 1800), the sample size for monthly averaged RCM output for a given season (3 × 20 = 60) falls below the threshold. As a robust statistical fit may not be achieved for monthly data, we only consider daily averaged prediction for the analysis in this study.
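As a quick arithmetic check of the sample-size argument above (assuming roughly 90 days per three-month season):

```python
# Green (1991) rule of thumb as applied here: m = 4 predictors, 20 years of data.
m = 4
threshold = 50 + 8 * m                                        # 82
print(threshold, 90 * 20 >= threshold, 3 * 20 >= threshold)   # 82 True False
```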

Statistical prediction presented in this study follows the approach of Mao and Monahan (2017). There is no a priori way to determine the locations of grid points with high correlation in the predictor domain; structures of predictability vary from station to station. To determine predictability in a straightforward way that can be generalized for all stations, we fit a regression model using the four predictors (T_ij, Z_ij, U_ij, and V_ij) at each grid point (i, j) in the domain and compute the average of the top 2% of the squared correlation coefficient values. Predictability obtained in this way decreases as a larger number of grid points is included, but empirical experiments show that the results are not strongly sensitive to including up to 5% of the grid points in the average.

Before carrying out the regression fits, we remove the individual seasonal cycles of the predictands and the predictors (T_ij, Z_ij, U_ij, and V_ij) at each grid point in the domain of prediction, using least squares to estimate the coefficients of the harmonic fit
$$\hat{y}(t) = a_0 + \sum_{k=1}^{K}\left[a_k\cos(k\omega t) + b_k\sin(k\omega t)\right], \qquad (5)$$
where ω = 2π/τ with τ = 365 days for daily averaged time series (after removing data for 29 February for convenience), and ŷ(t) is the estimated seasonal cycle for the variable under consideration. The deseasonalized time series of T_ij, Z_ij, U_ij, and V_ij are then scaled by their individual standard deviations in order to obtain standardized predictors. Including a larger number of harmonics in the seasonal cycle has essentially no effect on the resulting regression models. To minimize the influence of any remaining seasonality on the statistical relationship, regression models are fit separately for the DJF and JJA seasons. The vector of standardized predictors at each grid point [denoted X_ij(t)] is used to predict the deseasonalized wind component u(θ, t) by a multiple linear regression model:
$$u(\theta, t) = \boldsymbol{\beta}^{\mathrm{T}}\mathbf{X}_{ij}(t) + \epsilon(t), \qquad (6)$$
where β is the vector of model parameters and ε is the regression model error. Statistical predictability is assessed using leave-one-year-out cross validation. Specifically, for each year of data, the regression model parameters are estimated using data from the rest of the years available. The resulting predictability at each grid point is measured by the squared correlation between the wind component and its cross-validated prediction û_ij(θ, t),
$$\rho_{ij}^{2}(\theta) = \operatorname{corr}^{2}\left[u(\theta, t),\, \hat{u}_{ij}(\theta, t)\right]. \qquad (7)$$
A single measure of predictability across the predictor domain is then computed, denoted Π(θ):
$$\Pi(\theta) = \left\langle \rho_{ij}^{2}(\theta) \right\rangle. \qquad (8)$$
The average calculated in Eq. (8) is taken over the four grid points with the largest values of ρ²_ij(θ) within the predictor domain (corresponding to approximately 2% of the grid points in the domain). Predictive anisotropy is then measured by
$$\alpha_{\Pi} = \frac{\Pi_{\min}}{\Pi_{\max}}, \qquad (9)$$
where Π_min and Π_max are respectively the minimum and maximum of Π(θ) over the 36 values of θ. In other words, Π_min and Π_max represent respectively the worst and best predictability of the surface wind components projected onto directions from 0° to 360°. Values of α_Π range between 0 and 1, such that smaller α_Π indicates a stronger degree of anisotropy of predictability, and α_Π = 1 indicates perfectly isotropic predictability. The quantities Π_min, Π_max, and α_Π are the metrics of predictability of surface wind components considered in this study.
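The following Python sketch strings together the steps of Eqs. (5)–(7) (deseasonalization by harmonic fit, standardization, multiple linear regression, and leave-one-year-out cross validation) for a single synthetic grid point and wind component. It is a minimal illustration under assumed sample sizes and variable names, not the code used in the study; Π(θ) and α_Π would then follow from Eqs. (8) and (9) applied over the predictor domain and the 36 directions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_days = 20, 90                       # ~90 JJA days per year (assumed)
t = np.tile(np.arange(n_days), n_years)        # day index within the season
years = np.repeat(np.arange(n_years), n_days)

# Synthetic predictand (one wind component) and four predictors at one grid point.
X = rng.standard_normal((n_years * n_days, 4))                     # T, Z, U, V
y = X @ np.array([0.6, -0.3, 0.4, 0.1]) + 0.8 * rng.standard_normal(n_years * n_days)

def remove_seasonal_cycle(series, t, n_harmonics=2, period=365.0):
    """Least-squares harmonic fit (Eq. 5) removed from the series."""
    omega = 2 * np.pi / period
    cols = [np.ones_like(series)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
    G = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(G, series, rcond=None)
    return series - G @ coeffs

y = remove_seasonal_cycle(y, t)
X = np.column_stack([remove_seasonal_cycle(X[:, j], t) for j in range(4)])
X /= X.std(axis=0)                              # standardized predictors

# Leave-one-year-out cross-validated prediction (Eqs. 6-7).
y_hat = np.empty_like(y)
for yr in range(n_years):
    train, test = years != yr, years == yr
    A = np.column_stack([np.ones(train.sum()), X[train]])
    beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    y_hat[test] = np.column_stack([np.ones(test.sum()), X[test]]) @ beta

rho2 = np.corrcoef(y, y_hat)[0, 1] ** 2         # rho_ij^2 for this grid point
print(f"cross-validated rho^2 = {rho2:.2f}")
```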

3. Idealized model of predictability

We use an idealized model to provide a conceptual framework for the controls on the characteristics of predictability of surface wind components. The idealized model, which extends a similar model in Mao and Monahan (2017), is based on the assumption that surface wind variability can be partitioned into distinct predictive signal and noise contributions when large-scale free-tropospheric climate variables are used for statistical prediction. By definition, the signal refers to the part of the surface winds that is perfectly predicted by the large-scale flow, and the noise refers to the part of the surface winds that originates from small-scale processes and cannot be explained by the large-scale predictors. The decomposition implies that the signal and noise are uncorrelated. The least squares linear regression prediction and residual are uncorrelated by construction; this model assigns specific physical interpretations to the linear regression prediction (i.e., the large-scale signal) and residual (i.e., the local noise). A wind vector can be expressed in terms of the components $u_1$ and $u_2$ in an arbitrary orthogonal basis (ê1, ê2):
$$\mathbf{u} = (u_{1s} + u_{1n})\,\hat{\mathbf{e}}_1 + (u_{2s} + u_{2n})\,\hat{\mathbf{e}}_2, \qquad (10)$$
where the subscripts s and n respectively denote signal and noise. There is complete freedom to choose the orientation of the basis vectors in this decomposition; they do not need to align with the zonal and meridional directions. The wind component projected onto direction θ (with θ measured clockwise away from ê1) is
$$u(\theta) = u_1\cos\theta + u_2\sin\theta, \qquad (11)$$
where $u_1 = u_{1s} + u_{1n}$ and $u_2 = u_{2s} + u_{2n}$. The predictability of surface wind components is then measured by the squared correlation coefficient between u(θ) and its signal part $u_s(\theta) = u_{1s}\cos\theta + u_{2s}\sin\theta$,
$$R^2(\theta) = \operatorname{corr}^2\left[u(\theta),\, u_s(\theta)\right]. \qquad (12)$$
It follows that
$$R^2(\theta) = \frac{\operatorname{var}\left[u_s(\theta)\right]}{\operatorname{var}\left[u(\theta)\right]} = \frac{\cos^2\theta\,\operatorname{var}(u_{1s}) + 2\sin\theta\cos\theta\,\operatorname{cov}(u_{1s}, u_{2s}) + \sin^2\theta\,\operatorname{var}(u_{2s})}{\cos^2\theta\,\operatorname{var}(u_1) + 2\sin\theta\cos\theta\,\operatorname{cov}(u_1, u_2) + \sin^2\theta\,\operatorname{var}(u_2)}.$$
We define
$$\zeta = \frac{\operatorname{var}(u_{1s})}{\operatorname{var}(u_1)} \qquad (13)$$
and
$$\eta = \frac{\operatorname{var}(u_{2s})}{\operatorname{var}(u_2)}. \qquad (14)$$
The quantities ζ and η represent the fractions of variance of the surface wind components projected onto ê1 and ê2, respectively, that can be explained by the large-scale predictors; in other words, they represent the fractions of predictive signal strength of $u_1$ and $u_2$. As the signal and noise contributions are uncorrelated by construction, $\operatorname{cov}(u_1, u_2) = \operatorname{cov}(u_{1s}, u_{2s}) + \operatorname{cov}(u_{1n}, u_{2n})$, and using the definition of the correlation coefficient $\rho_s = \operatorname{corr}(u_{1s}, u_{2s})$, we obtain
$$\operatorname{cov}(u_{1s}, u_{2s}) = \rho_s\sqrt{\zeta\eta}\;\sigma_1\sigma_2, \qquad (15)$$
where $\sigma_1^2 = \operatorname{var}(u_1)$ and $\sigma_2^2 = \operatorname{var}(u_2)$. It follows that we can express R²(θ) in any direction in terms of the signal strengths along ê1 and ê2 (i.e., ζ and η), the anisotropy of variability of the surface wind components projected onto the coordinate directions, γ = σ1/σ2, and the correlations ρ = corr($u_1$, $u_2$) and ρ_s:
$$R^2(\theta) = \frac{\zeta\gamma^2\cos^2\theta + 2\rho_s\sqrt{\zeta\eta}\,\gamma\sin\theta\cos\theta + \eta\sin^2\theta}{\gamma^2\cos^2\theta + 2\rho\gamma\sin\theta\cos\theta + \sin^2\theta}. \qquad (16)$$

A constraint on the admissible combinations of these parameters follows from the requirement that each of the squared correlations ζ, η, ρ², and R²(θ) fall between 0 and 1. This model reduces to the form of that in Mao and Monahan (2017) in the special case in which the correlations ρ and ρ_s vanish. While the parameters η, ζ, γ, ρ, and ρ_s are distinct in the model, their observed distributions are not necessarily independent.
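For reference, the directional predictability of Eq. (16), as reconstructed above, can be evaluated directly from the five parameters; the sketch below does so for illustrative parameter values (assumptions, not fitted values) and computes the corresponding anisotropy ratio.

```python
# Evaluate the idealized-model R^2(theta) of Eq. (16) and the resulting anisotropy.
import numpy as np

def r2_theta(theta, zeta, eta, gamma, rho, rho_s):
    """Squared correlation for the component along direction theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    signal = zeta * gamma**2 * c**2 + 2 * rho_s * np.sqrt(zeta * eta) * gamma * s * c + eta * s**2
    total = gamma**2 * c**2 + 2 * rho * gamma * s * c + s**2
    return signal / total

theta = np.deg2rad(np.arange(0, 180, 10))          # 18 distinct projection directions
r2 = r2_theta(theta, zeta=0.2, eta=0.7, gamma=1.5, rho=0.3, rho_s=0.3)
alpha = r2.min() / r2.max()                        # analog of alpha_Pi (Eq. 9)
print(f"R^2 range: {r2.min():.2f}-{r2.max():.2f}, anisotropy alpha = {alpha:.2f}")
```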

4. Results

In this section, we present findings from the comparison of the metrics of predictability (Π_min, Π_max, and α_Π) simulated by comprehensive models (i.e., RCMs and reanalysis) with those from observations in the NAM, EMB, and EAS regions.

a. Overview of comprehensive models (RCMs and reanalysis) versus observation

The relationship between metrics of predictability from comprehensive models and observations can be summarized in Taylor diagrams, a graphical tool for assessing how closely spatially distributed model results match observations by quantifying the spatial correlation between the model and observed fields, the centered root-mean-square difference, and the spatial standard deviations of the observed and model-based fields. To facilitate comparison among different regions, seasons, and terrain types using Taylor diagrams, all fields are normalized by the spatial standard deviation of the corresponding observational fields.
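The statistics summarized in a Taylor diagram can be computed as in the following sketch; the normalization by the observed spatial standard deviation follows the text, while the synthetic station values are purely illustrative.

```python
# Taylor-diagram statistics: normalized std, spatial correlation, centered RMS difference.
import numpy as np

def taylor_stats(obs, model):
    """Return (normalized model std, spatial correlation, normalized centered RMS difference)."""
    obs_a = obs - obs.mean()
    mod_a = model - model.mean()
    sigma_o = obs_a.std()
    corr = np.corrcoef(obs, model)[0, 1]
    crmsd = np.sqrt(np.mean((mod_a - obs_a) ** 2)) / sigma_o
    return mod_a.std() / sigma_o, corr, crmsd

# Example with synthetic station values of a predictability metric:
rng = np.random.default_rng(1)
obs = rng.uniform(0.2, 0.9, 500)
model = 0.8 * obs + 0.1 + 0.05 * rng.standard_normal(500)
print(taylor_stats(obs, model))
```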

Several overall patterns can be seen from Figs. 2–4. The difference between mountainous and plain terrain is more evident than the difference between coastal and inland regions in NAM and EMB, especially in terms of the comparison of modeled and observed values of the predictability metrics. In NAM and EMB, modeled predictive structures in the plain regions are closer to observations than in the mountainous regions. These systematic patterns between terrain types are less evident in EAS than in the other two domains. In general, the differences in predictive structure among models for the different terrain groups are not the same for all three predictability metrics. The pattern of seasonal differences is generally not clear; seasonal differences tend to be more evident in mountainous regions. Differences between the two RCMs in each domain are small in general, and no one RCM considered is systematically better than the other in any of the domains considered.

Fig. 2. Taylor diagrams showing the comparison of predictability of surface wind components obtained from observations with those simulated by models (i.e., RCMs NA1, NA2, and the NCEP-2 reanalysis; see Table 1) for groups of stations classified by terrain in NAM.

Fig. 3. As in Fig. 2, but in EMB.

Fig. 4. As in Fig. 2, but in EAS.

b. Models versus observations in NAM and EMB

Maps of the spatial distribution of the metrics of predictability for the JJA season are shown in Figs. 5–7 to highlight the regional differences. As the maps for DJF are very similar to those for JJA, similar conclusions can be drawn for DJF (see Figs. S1–S3 in the online supplemental material; hereinafter supplemental figures have a leading S in their number). The comparison of predictability metrics from comprehensive models and observations is quantified using the ratio M/O, where M refers to the values of Π_min, Π_max, and α_Π simulated by comprehensive models, and O refers to the corresponding values from observations.

Fig. 5. (top) Observed daily Π_min for JJA data in NAM, EMB, and EAS, respectively. The remaining rows show the comparison of predictability metrics derived from observations with those derived from the two RCMs and the NCEP-2 reanalysis in terms of M/O. The color scale is logarithmic. Stations with black outlines are in mountainous regions and those without are in plain regions.

Fig. 6. As in Fig. 5, but for Π_max.

Fig. 7. As in Fig. 5, but for α_Π.

The relationships between the metrics of predictability and topography are similar in NAM and EMB. In these two domains, stations in the mountainous regions (i.e., western NAM and southern EMB) are more likely to have low predictability and strong predictive anisotropy, whereas predictability for stations in the plain regions is generally good, with weak predictive anisotropy. The values of all three metrics simulated by both RCMs and reanalysis tend to be higher than observed in regions characterized by mountainous terrain, while the metrics simulated by comprehensive models are generally in reasonable agreement with observations in plain regions. The contrast between the plain and mountainous regions is more pronounced for the comparison of Π_min than of Π_max, which suggests that Π_min is more influenced by small-scale physical processes, such as those associated with complex terrain, that are not resolved by RCMs. Accordingly, the contrast between the good simulation of Π_min in plain regions and its overestimation in complex terrain suggests that small-scale physical processes not resolved by RCMs and reanalysis contribute to predictive anisotropy. Specifically, the predictability of surface wind components projected onto some directions is limited by small-scale physical processes, resulting in low Π_min and strong predictive anisotropy (i.e., small α_Π) in reality. As these small-scale physical processes are not captured by comprehensive models, we see higher Π_min and thereby weaker predictive anisotropy simulated by comprehensive models. However, these unresolved small-scale processes are not the only control on anisotropy. For example, the relatively strong observed predictive anisotropy and small predictability over the southeastern United States are captured in both the RCMs and the reanalysis.

c. Anomalous predictive structures in EAS

In general, the patterns of Π_min, Π_max, and α_Π simulated by the RCMs and reanalysis in EAS differ from those in NAM and EMB, although the EAS domain also shows some connection between the terrain and the observed metrics of predictability. For example, mountainous terrain is common in central Asia, and this region is generally characterized by low predictability and strong predictive anisotropy in observations. On the other hand, there are more stations with higher observed predictability and weaker predictive anisotropy in northeast China and most of Japan, where the terrain is relatively flat.

However, there is no systematic contrast in values of M/O between mountainous and plain regions in EAS as there is in NAM and EMB. There is a distinct contrast between the region west of roughly 90°E, where values of Π_min, Π_max, and α_Π simulated by the two RCMs are significantly lower than in observations, and the region east of 90°E, where large values of M/O are common. Among all three domains considered, the region west of 90°E in EAS is the only region where extensive underestimation of Π_min, Π_max, and α_Π by the RCMs is observed. The area north of 45°N and west of 90°E in EAS is particularly notable, as this area is characterized in observations and the reanalysis product by relatively high predictability and nearly isotropic directional predictability, while predictability simulated by the RCMs in this area is low and highly anisotropic. The contrast between underestimation west of 90°E and overestimation east of 90°E is a systematic bias of the regional dynamical downscaling models used in the two RCMs in this region. The reason for this bias is unclear, but its absence in the (global) reanalysis product suggests that one possible cause is that upstream information about the background flow outside the model domain is not properly represented by the RCMs.

5. Discussion

As in the previous section, we focus on JJA results, as the seasonal differences between predictability metrics simulated by comprehensive models and those from observations are not substantial. Furthermore, only the difference between mountainous and plain terrain is discussed, as there is no clear contrast between coastal and inland stations (section 4). Finally, we present the analysis only for NAM. The analyses for EMB and EAS (east of 90°E) are presented in the supplemental material, and conclusions similar to those for NAM can be drawn from these two regions, although some patterns in EAS deviate from NAM and EMB, possibly because of the anomalous predictive structures found in EAS discussed above. The anomalous region west of 90°E in EAS is not included in the following analysis, since the underestimation of simulated metrics of predictability by RCMs in this region is a regionally specific systematic bias of the RCMs.

a. Inferences from overestimation by RCMs and reanalysis

As shown in Figs. 5–7, overestimation of Π_min, Π_max, and α_Π is commonly observed in simulations by RCMs and reanalysis. Figure 8 further explores the overestimation of predictability metrics by comprehensive models by showing the relationships between M/O and the observed metrics of predictability, considering both directional and magnitude information.

Fig. 8. Estimated probability density functions of M/O conditioned on observed values of the predictability metrics [(left) Π_min, (second column) Π_max, and (third column) α_Π] for results obtained from the two RCMs and NCEP-2 reanalysis in NAM for JJA. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained using stations from mountainous and plain regions only, respectively. (right) Histogram of the dot product of the unit vectors in the directions of minimum predictability and maximum M/O for both mountainous and plain stations.

Figure 8 shows that the ratio M/O generally exceeds 1 for smaller values of the observed predictability metrics and tends to approach 1 as the values of the predictability metrics increase. That is, in general, overestimates of the simulated predictability metrics are found in regions where these quantities are small in observations. However, in EAS, when the corresponding observed predictability metrics are relatively large (Figs. S4 and S5), M/O is generally smaller than 1, indicating underestimation of the metrics simulated by the models. One possible reason for this anomalous structure is that the division at 90°E is approximate, and there are still some stations east of 90°E with relatively large underestimates of the predictability metrics. The last column in Fig. 8 shows that the predominant value of the dot product is 1 for both mountainous and plain terrain, indicating that the largest overestimation of predictability by comprehensive models tends to occur along the direction of minimum predictability at most stations. This pattern is evident in all three domains considered.

The general pattern shown in Fig. 8 indicates that when surface winds are influenced mainly by small-scale physical processes (resulting in low predictability of surface wind components), the surface wind predictability simulated by comprehensive models tends to be inflated. In contrast, when surface winds are dominated by large-scale processes, the predictability of observed and simulated surface wind components is approximately the same. Moreover, small values of observed Π_min are more likely to be overpredicted than small values of Π_max, indicating that Π_min is more influenced by small-scale physical processes than Π_max. Pronounced artificial weakening of predictive anisotropy in the comprehensive models generally occurs when the observed predictive anisotropy is strong, which is an indication of poor predictability in at least one direction of projection of the surface wind components. From the pattern of overestimation of predictability metrics, we can infer that 1) comprehensive models (i.e., both RCMs and reanalysis) do a poor job of resolving the small-scale physical processes influencing surface wind variability that are weakly connected to the free-tropospheric flow and 2) these small-scale physical processes contribute to predictive anisotropy. It should be noted that while these inferences are based on patterns shown by most stations, there are exceptions. We can find stations that are both characterized by strong observed predictive anisotropy and well represented by RCMs. The predictive structures at these locations (e.g., the southeastern United States) are apparently not associated with unresolved small-scale variability.
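The directional diagnostic in the right column of Fig. 8 can be illustrated as follows; the synthetic Π(θ) curves and the use of the bearing convention of Eq. (1) are assumptions made only for illustration.

```python
# Dot product of the unit vector along the direction of minimum observed predictability
# with that along the direction of largest M/O (1 means the two directions coincide).
import numpy as np

theta = np.deg2rad(np.arange(0, 360, 10))              # 36 projection directions

def unit_vector(angle):
    return np.array([np.sin(angle), np.cos(angle)])    # bearing convention of Eq. (1)

pi_obs = 0.5 + 0.3 * np.cos(2 * (theta - 0.7))         # synthetic observed Pi(theta)
pi_mod = 0.6 + 0.1 * np.cos(2 * (theta - 0.2))         # synthetic modeled Pi(theta)

dir_min_obs = theta[np.argmin(pi_obs)]                 # direction of minimum predictability
dir_max_ratio = theta[np.argmax(pi_mod / pi_obs)]      # direction of largest M/O
dot = abs(unit_vector(dir_min_obs) @ unit_vector(dir_max_ratio))   # abs: theta and theta+180 equivalent
print(f"|dot product| = {dot:.2f}")
```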

b. Inferences from the idealized mathematical model

Metrics of predictability can be related to the quantities η, ζ, γ, and ρ in the idealized mathematical model. Figures 9–11 show the estimated probability density functions of the predictability metrics conditioned on various quantities from the idealized model. Overall, the relationships are similar for both observations and comprehensive models, even though the physical processes unresolved by comprehensive models result in a different small-scale/large-scale decomposition than in observations, occupying different regions of the idealized model "parameter space."
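The conditional probability density estimates shown in Figs. 8–11 can be approximated by simple binning, as in the sketch below; the bin edges, the histogram-based estimator, and the synthetic ζ–α_Π relationship are illustrative assumptions.

```python
# Estimate p(metric | conditioning variable) by normalizing a 2D histogram within each bin.
import numpy as np

def conditional_pdf(x_cond, y_metric, x_edges, y_edges):
    """2D histogram of (x, y) normalized within each x bin, approximating p(y | x)."""
    H, _, _ = np.histogram2d(x_cond, y_metric, bins=[x_edges, y_edges], density=False)
    col_sums = H.sum(axis=1, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    dy = np.diff(y_edges)
    return H / col_sums / dy                 # rows: x bins; columns: density in y

rng = np.random.default_rng(2)
zeta = rng.uniform(0, 1, 1000)
alpha = np.clip(zeta + 0.2 * rng.standard_normal(1000), 0, 1)   # synthetic alpha_Pi
pdf = conditional_pdf(zeta, alpha, np.linspace(0, 1, 11), np.linspace(0, 1, 11))
print(pdf.shape)                             # (10 zeta bins, 10 alpha bins)
```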

Fig. 9. Estimated probability density functions of α_Π conditioned on ζ and η in the coordinate system defined by the direction of minimum predictability and the direction perpendicular to it, for observations and all comprehensive models in the NAM domain. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained using stations in mountainous and plain regions, respectively.

Fig. 10. Estimated probability density functions of the predictability metrics (Π_min, Π_max, and α_Π) conditioned on ln(γ) for observations and all comprehensive models in the NAM domain. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained using stations in mountainous and plain regions, respectively.

Fig. 11. As in Fig. 10, but conditioned on ρ.

We focus on the orthogonal coordinates aligned along and across the direction of minimum predictability at each station, taking ê1 along the direction of minimum predictability and ê2 perpendicular to it. While the directions of minimum and maximum predictability are rarely exactly perpendicular to each other, the direction of maximum predictability is often close to the direction perpendicular to that of minimum predictability (Mao and Monahan 2017). With this choice of basis, ζ and η [Eqs. (13) and (14)] represent the fractions of predictive signal strength along the directions of minimum and (approximately) maximum predictability, respectively.

Figure 9 shows that predictive anisotropy tends to be weakened with stronger predictive signal strength and that ζ, the fraction of predictive signal along the direction of minimum predictability, has a much stronger control on the strength of predictive anisotropy than η, the fraction along the direction of (approximately) maximum predictability. This result provides more evidence that the strength of predictive anisotropy is mostly controlled by the variation of Π_min.

Figure 10 shows that values of Π_min tend to be lower as γ increases or decreases away from a value of one and that the largest values of Π_min generally correspond to γ near one. The association between low Π_min and strongly anisotropic variability (γ far from one) suggests that low minimum directional predictability and strong anisotropy of surface wind component variability may be linked to each other by common small-scale physical processes that influence both. In contrast, there is generally no clear pattern between γ and Π_max. Finally, the pattern between α_Π and γ is similar to that between Π_min and γ, which is consistent with the finding of Mao and Monahan (2017) that the strength of predictive anisotropy is related to the anisotropy of variability of surface wind components.

Finally, Fig. 11 shows that there is a negative relationship between Π_min and ρ, consistent with the existence of common small-scale physical processes that influence both. As for the relationship between the predictability metrics and γ, there is no clear pattern associated with the relationship between Π_max and ρ, but the relationship between α_Π and ρ is similar to that between Π_min and ρ.

Among all the controls considered, the relationships between α_Π and η and between α_Π and ζ are the same across all three domains, but the relationships between the predictability metrics and the other two wind vector statistics (γ and ρ) are much weaker in EAS than in the other two regions (Figs. S17, S18, S22, and S23). The fact that ζ has the strongest relationship with predictive anisotropy is robust across all domains.

In general, the plots of probability density functions of the predictability metrics conditioned on statistical measures from the idealized model are similar in both plain and mountainous terrain, which suggests that the small-scale physical processes influencing the predictability of surface winds are also found at locations other than those characterized by topographic complexity. However, it should be noted that there are locations where RCMs are able to model relatively strong observed predictive anisotropy with good skill, which is a clear indication that small-scale physical processes are not responsible for predictive anisotropy at these locations. The results of this study cannot identify the origin of the small-scale physical processes that primarily limit the predictability of surface wind components. Such an investigation is an interesting direction for future research.

6. Conclusions

We have compared characteristics of predictability of surface wind components by linear statistical prediction using both station-based observational data and output from various comprehensive models (RCMs and NCEP-2 reanalysis) in three domains: North America (557 stations), Europe–Mediterranean Basin (595 stations), and East Asia (715 stations). Stations were divided into four groups according to two categories of terrain: 1) adjacent to large water bodies (coastal) or inland and 2) in mountainous or plain areas. In NAM and EMB, the characteristics of predictability from comprehensive models in plain regions are generally close to those of observations, while mountainous regions are dominated by overestimation of the predictability metrics in simulations. In contrast, the difference between mountainous and plain terrain is not obvious in EAS, where overestimation of predictability is commonly observed east of 90°E regardless of terrain, and the area west of 90°E is dominated by underestimates in the RCMs (a pattern not observed in the reanalysis). There is no systematic pattern of characteristics of predictability associated with inland versus coastal stations.

Comparison of the minimum and maximum directional predictability, as well as the predictive anisotropy, from observations and from simulations by comprehensive models indicates that RCMs cannot resolve the small-scale physical processes primarily responsible for limiting the predictability of surface wind components. However, there are exceptions: strong predictive anisotropy in observations is captured well by RCMs at some stations, indicating that unresolved small-scale processes are not responsible for predictive anisotropy at these locations. Interpreting the predictability metrics using an idealized mathematical model indicates that variability of Π_min is more influenced by small-scale physical processes than that of Π_max. Moreover, the anisotropy of fluctuations of surface wind components and the correlation between surface wind components appear to be linked to variability of Π_min by common small-scale physical processes.

The most important overall conclusion of this study is that the strength of predictive anisotropy is robustly controlled by the variation of Π_min. That is, anisotropic prediction occurs because some directions are particularly poorly predicted, not because some are particularly well predicted. Moreover, small-scale processes, which are weakly connected to the free-tropospheric flow, are the major contributing factor to predictive anisotropy, although they are not the only factor. The origin of these small-scale processes remains unclear.

Comprehensive models on the scale of the RCMs and the reanalysis used in this study do not capture the small-scale processes that can limit predictability and cause predictive anisotropy. One area of future study is to identify more precisely the scale of the physical processes missing from RCMs, which may enhance the utility of RCMs as tools for dynamical downscaling. Small-scale processes related to local terrain are generally not well represented in most comprehensive models because of oversimplified terrain. By simulating metrics of predictability of surface wind components using mesoscale models with varying spatial resolutions, we can determine how fine the model resolution needs to be in order to capture the physical processes related to local features. Special attention is evidently needed to study the physical processes related to local wind systems in EAS and their representation in RCMs.

Acknowledgments

This research was supported by the Discovery Grants program of the Natural Sciences and Engineering Research Council of Canada. We acknowledge the World Climate Research Programme’s Working Group on Regional Climate and the Working Group on Coupled Modelling, former coordinating body of CORDEX and responsible panel for CMIP5. We thank the climate modeling groups (listed in Table 2 of this paper) for producing and making available their model output. We also acknowledge the Earth System Grid Federation infrastructure, an international effort led by the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison, and other partners in the Global Organization for Earth System Science Portals (GO-ESSP). We also thank Alex Cannon, Bill Merryfield, Lucinda Leonard, Chris Fletcher, Katherine Klink, and two anonymous reviewers for their helpful comments.

REFERENCES

  • Amante, C., and B. W. Eakins, 2009: 1 arc-minute global relief model: Procedures, data sources and analysis. NOAA Tech. Memo. NESDIS NGDC-24, 19 pp.

  • Church, J. A., and Coauthors, 2013: Sea level change. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 1137–1216.

  • Culver, A. M., and A. H. Monahan, 2013: The statistical predictability of surface winds over western and central Canada. J. Climate, 26, 8305–8322, https://doi.org/10.1175/JCLI-D-12-00425.1.

  • Davies, T., M. J. Cullen, A. J. Malcolm, M. Mawson, A. Staniforth, A. White, and N. Wood, 2005: A new dynamical core for the Met Office's global and regional modelling of the atmosphere. Quart. J. Roy. Meteor. Soc., 131, 1759–1782, https://doi.org/10.1256/qj.04.101.

  • Giorgi, F., C. Jones, and G. R. Asrar, 2009: Addressing climate information needs at the regional level: The CORDEX framework. WMO Bull., 58, 175–183.

  • Green, S. B., 1991: How many subjects does it take to do a regression analysis. Multivar. Behav. Res., 26, 499–510, https://doi.org/10.1207/s15327906mbr2603_7.

  • He, Y., A. H. Monahan, C. G. Jones, A. Dai, S. Biner, D. Caya, and K. Winger, 2010: Probability distributions of land surface wind speeds over North America. J. Geophys. Res., 115, D04103, https://doi.org/10.1029/2008JD010708.

  • Kanamitsu, M., W. Ebisuzaki, J. Woollen, S.-K. Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter, 2002: NCEP–DOE AMIP-II Reanalysis (R-2). Bull. Amer. Meteor. Soc., 83, 1631–1643, https://doi.org/10.1175/BAMS-83-11-1631.

  • Mao, Y., and A. Monahan, 2017: Predictive anisotropy of surface winds by linear statistical prediction. J. Climate, 30, 6183–6201, https://doi.org/10.1175/JCLI-D-16-0507.1.

  • Mao, Y., and A. Monahan, 2018: Linear and nonlinear regression prediction of surface wind components. Climate Dyn., https://doi.org/10.1007/s00382-018-4079-5, in press.

  • MathWorks, 2016: Mapping Toolbox. https://www.mathworks.com/help/map/.

  • Mearns, L. O., and Coauthors, 2007: The North American Regional Climate Change Assessment Program dataset (updated 2014). National Center for Atmospheric Research Earth System Grid data portal, Boulder, CO, https://doi.org/10.5065/D6RN35ST.

  • Mearns, L. O., W. Gutowski, R. Jones, R. Leung, S. McGinnis, A. Nunes, and Y. Qian, 2009: A regional climate change assessment program for North America. Eos, Trans. Amer. Geophys. Union, 90, 311, https://doi.org/10.1029/2009EO360002.

  • Monahan, A. H., 2012: Can we see the wind? Statistical downscaling of historical sea surface winds in the subarctic northeast Pacific. J. Climate, 25, 1511–1528, https://doi.org/10.1175/2011JCLI4089.1.

  • Oke, T. R., 2002: Boundary Layer Climates. Routledge, 464 pp.

  • Parker, W. S., 2016: Reanalyses and observations: What's the difference? Bull. Amer. Meteor. Soc., 97, 1565–1572, https://doi.org/10.1175/BAMS-D-14-00226.1.

  • Rummukainen, M., 2010: State-of-the-art with regional climate models. Wiley Interdiscip. Rev.: Climate Change, 1, 82–96, https://doi.org/10.1002/wcc.8.

  • Salameh, T., P. Drobinski, M. Vrac, and P. Naveau, 2009: Statistical downscaling of near-surface wind over complex terrain in southern France. Meteor. Atmos. Phys., 103, 253–265, https://doi.org/10.1007/s00703-008-0330-7.

  • Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2005: A description of the Advanced Research WRF version 2. NCAR Tech. Note NCAR/TN-468+STR, 88 pp., http://dx.doi.org/10.5065/D6DZ069T.

  • Strandberg, G., and Coauthors, 2015: CORDEX scenarios for Europe from the Rossby Centre regional climate model RCA4. SMHI Rep. Meteorology and Climatology 116, 84 pp., https://www.smhi.se/polopoly_fs/1.90275!/Menu/general/extGroup/attachmentColHold/mainCol1/file/RMK_116.pdf.

  • Sun, C., and A. Monahan, 2013: Statistical downscaling prediction of sea surface winds over the global ocean. J. Climate, 26, 7938–7956, https://doi.org/10.1175/JCLI-D-12-00722.1.

  • van der Kamp, D., C. L. Curry, and A. H. Monahan, 2012: Statistical downscaling of historical monthly mean winds over a coastal region of complex terrain. II. Predicting wind components. Climate Dyn., 38, 1301–1311, https://doi.org/10.1007/s00382-011-1175-1.

  • von Salzen, K., and Coauthors, 2013: The Canadian Fourth Generation Atmospheric Global Climate Model (CanAM4). Part I: Representation of physical processes. Atmos.–Ocean, 51, 104–125, https://doi.org/10.1080/07055900.2012.755610.

  • Wolfram, 2016: WeatherData source information. Accessed 1 January 2016, http://reference.wolfram.com/language/note/WeatherDataSourceInformation.html.

  • Fig. 1.

    The locations of the 2109 land stations used for statistical prediction of surface winds. The domains of NAM (557 stations), EMB (595 stations), and EAS (715 stations) are outlined. Stations in the three domains are classified into four groups according to local topography and proximity to water.

  • Fig. 2.

    Taylor diagrams comparing the predictability of surface wind components obtained from observations with that simulated by the comprehensive models (the RCMs NA1 and NA2, and the NCEP-2 reanalysis; see Table 1), for groups of stations classified by terrain type in NAM. An illustrative computation of the statistics displayed on such diagrams is sketched after the figure captions.

  • Fig. 3.

    As in Fig. 2, but in EMB.

  • Fig. 4.

    As in Fig. 2, but in EAS.

  • Fig. 5.

    (top) Observed daily for JJA data in NAM, EMB, and EAS, respectively. The remaining rows compare predictability metrics derived from observations with those derived from the two RCMs and from the NCEP-2 reanalysis, expressed as the ratio of model-derived to observation-derived values (M/O). The color scale is logarithmic. Stations with black outlines lie in mountainous regions; those without lie in plain regions.

  • Fig. 6.

    As in Fig. 5, but for .

  • Fig. 7.

    As in Fig. 5, but for .

  • Fig. 8.

    Estimated probability density functions of conditioned on observed values of predictability metrics [(left) , (second column) , and (third column) ] for results obtained from the two RCMs and NCEP-2 reanalysis in NAM for JJA. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained by stations only from mountainous and plain regions, respectively. (right) Histogram of the dot product of the unit vectors in the directions of minimum predictability and maximum for both mountainous and plain stations.

  • Fig. 9.

    Estimated probability density functions of conditioned on ζ and η in the coordinate system defined by (, ) for observations and all comprehensive models in the NAM domain. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained using stations in mountainous and plain regions, respectively.

  • Fig. 10.

    Estimated probability density functions of predictability metrics [, , and ] conditioned on ln(γ) for observations and all comprehensive models in the NAM domain. Filled contours are obtained by using data from all stations in the region; red and green contours are obtained using stations in mountainous and plain regions, respectively.

  • Fig. 11.

    As in Fig. 10, but conditioned on .
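
The Taylor diagrams in Figs. 2–4 summarize how closely the model-derived predictability tracks the observation-derived predictability within each group of stations. As a rough, non-authoritative illustration (not the authors' analysis code), the Python sketch below computes the three quantities such a diagram encodes: the correlation between two matched samples, the ratio of their standard deviations, and the centered RMS difference. All variable names and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def taylor_stats(reference, comparison):
    """Statistics summarized on a Taylor diagram: correlation between the
    two samples, ratio of standard deviations (comparison/reference), and
    the centered root-mean-square difference."""
    r = np.asarray(reference, dtype=float)
    c = np.asarray(comparison, dtype=float)
    corr = np.corrcoef(r, c)[0, 1]
    std_ratio = c.std(ddof=1) / r.std(ddof=1)
    crmsd = np.sqrt(np.mean(((c - c.mean()) - (r - r.mean())) ** 2))
    return corr, std_ratio, crmsd

# Hypothetical usage: pred_obs and pred_rcm stand in for a station-wise
# predictability metric estimated from observations and from an RCM
# (synthetic values, for illustration only).
rng = np.random.default_rng(0)
pred_obs = rng.uniform(0.0, 1.0, size=200)
pred_rcm = np.clip(pred_obs + rng.normal(0.0, 0.1, 200), 0.0, 1.0)
print(taylor_stats(pred_obs, pred_rcm))
```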
