1. Introduction
Surface winds are a climate field of importance to economic and societal sectors including air quality, agriculture, and transport. Global climate models (GCMs) can effectively model large-scale processes (e.g., from synoptic to planetary scales). However, their coarse resolution (typically on the order of 100 km; Church et al. 2013) limits their skill in modeling surface winds, as they do not resolve the microscale to mesoscale processes that also influence surface winds. One approach to predicting surface winds is statistical downscaling (SD), in which a transfer function (TF) is built from statistical relationships between station-based surface data and large-scale climate fields in the free troposphere. This study focuses on statistical downscaling of surface wind components because, besides wind speed, wind direction is also important. For example, studying the transport of airborne substances requires knowledge of the wind vector.
A few previous studies (van der Kamp et al. 2012; Monahan 2012; Culver and Monahan 2013; Sun and Monahan 2013) have shown that SD predictions of wind components are generally better than those of wind speed. These studies have also shown that the predictability of surface wind components by SD with linear TFs is often characterized by predictive anisotropy (i.e., variation of the predictability of surface wind components with direction of projection). Salameh et al. (2009) found that only one of the zonal u and meridional υ wind components at stations located in valleys of the French Alps was predicted well using statistical prediction with a generalized additive model as the transfer function. Other studies used linear SD to predict surface wind components projected onto compass directions from 0° to 360° at 10° intervals. For example, van der Kamp et al. (2012) and Culver and Monahan (2013) applied linear SD to predict surface wind components in western and central Canada, and Monahan (2012) and Sun and Monahan (2013) studied the prediction of sea surface winds by linear SD. These studies found that the predictability of surface wind components by linear SD generally varies with direction of projection in the regions considered. Mao and Monahan (2017) further investigated the predictability of surface wind components by linear SD at a large number of land stations across the globe and found that predictive anisotropy is a common feature. Mao and Monahan (2018) showed that predictive anisotropy is not an artifact of the use of a linear SD but is also found using nonlinear regression models.
In general, previous studies have shown that the best or worst predicted wind component is not always the conventional zonal or meridional wind component, and knowledge of the predictability of these two components alone is not sufficient to assess the potential utility of statistical downscaling at a station. It is necessary to know the predictability of surface wind components projected onto directions from 0° to 180° (as the projection along θ is the negative of that along 180° + θ). A question of interest is what limits predictability by SD along certain directions of projection. Salameh et al. (2009) attributed the unequal predictability of zonal and meridional wind components at the location they considered to the orientation of the mountain valley, as the across-valley wind component is characterized by local variability unexplained by large-scale climate fields. Van der Kamp et al. (2012) and Culver and Monahan (2013) found similar topographically oriented predictive anisotropy in the western cordillera of North America, and Mao and Monahan (2017) showed that low predictability and strong predictive anisotropy tend to be associated with regions marked by surface heterogeneity, such as mountainous and coastal regions. These previous studies suggest that topography plays a role in limiting the predictability of surface wind components along certain directions. However, topography does not seem to be the only factor determining predictive anisotropy of surface wind components: some locations have maximum predictability aligned across valley rather than along valley (van der Kamp et al. 2012; Culver and Monahan 2013), and pronounced predictive anisotropy can also occur in regions of relatively flat terrain (Mao and Monahan 2017) and over the oceans (Monahan 2012; Sun and Monahan 2013). Mao and Monahan (2017) showed that the surface wind components of highest predictability tend to be those of highest variability with distributions closest to Gaussian, and that poor predictability is generally associated with wind components characterized by relatively weak variability and non-Gaussian distributions. It appears that no single factor determines predictive anisotropy.
The goal of this study is to further investigate the factors determining predictive anisotropy, in order to develop insight into the relationship between large-scale free-tropospheric flow and surface winds as a basis for improving physically based prediction methods such as regional climate models (RCMs). To this end, we compare how well the predictability of surface wind components can be simulated by a range of different physically based comprehensive models (i.e., different regional climate models and a global reanalysis) in three regions: North America (NAM), Europe–Mediterranean Basin (EMB), and East Asia (EAS), with an emphasis on predictive anisotropy. RCMs are a form of dynamical downscaling in which physical processes are simulated at finer scales than in GCMs. Dynamical downscaling is an alternative to statistical downscaling, and one motivation of this study is to assess how well RCMs can represent the observed characteristics of the relationship between free-tropospheric flow and surface winds. In this regard, the accuracy of simulated predictability metrics, such as predictive anisotropy, can be used as an indication of how well RCMs model the physical processes linking the large-scale free-tropospheric flow to surface winds. In this way, RCM simulations can be used to provide some understanding of the origin of predictive anisotropy. Determining the circumstances in which comprehensive models can or cannot reproduce this statistical relationship provides further understanding of its physical controls.
The simulation accuracy of an RCM depends on a number of factors, including the accurate representation of boundary conditions, the accuracy of driving data, the size of the domain, and the proper parameterization of physical processes (Rummukainen 2010). We only consider simulations from RCMs driven by observationally constrained reanalysis boundary conditions. Although local-scale dynamics near the land surface generally cannot be modeled with good skill by reanalysis (He et al. 2010), large-scale features in the free troposphere are well represented by reanalysis products. Therefore, free-tropospheric climate fields from reanalysis are generally considered reliable boundary conditions (although they do have limitations due to observational constraints and the accuracy of the assimilation models; e.g., Parker 2016). By using reanalysis-driven RCMs, we can focus the discussion of the simulation of predictive anisotropy on physical processes described by regional models rather than considering the potential systematic bias inherent in the driving GCMs.
Neither RCMs nor reanalyses can be regarded as perfect representations of point observations. The difference between point measurements (as in observations) and averages over the scale of a grid box (as in models) is another reason for differences between simulated and observed statistical relationships between free-tropospheric flow and surface winds. Our analysis is not able to distinguish between model biases and the difference between point and spatially averaged quantities. Irrespective of the source of difference, this characterization is useful from the perspective of determining the utility of the RCMs as tools for dynamical downscaling.
We also further elaborate a mathematical model of directional predictability introduced in Mao and Monahan (2017), based on an idealized partitioning of surface wind components into large-scale "signal" and small-scale "noise." The idealized model provides an organizing conceptual perspective on the controls of the statistical predictability of surface wind components. We only consider linear statistical prediction in this study because the results of Mao and Monahan (2018) show that the predictive skill of nonlinear regression-based TFs is not very different from that of linear TFs. Finally, while Mao and Monahan (2018) considered statistical prediction of both daily and monthly averaged surface winds, we focus on daily averaged quantities in this study, as the time period of the RCM simulations considered here may not be long enough to yield robust statistical predictions using monthly averaged data.
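The full idealized model is developed in section 3; as a loose schematic of the signal/noise partitioning idea only (not the formulation used in this paper, which involves additional parameters), one can write the wind component along a direction θ as the sum of a large-scale signal and a small-scale noise term and note that, if the noise is uncorrelated with the large-scale predictors, the squared linear predictability is bounded by the signal fraction of the variance:

```latex
% Schematic only; the notation here is illustrative, not that of section 3.
\[
  u_\theta = s_\theta + n_\theta , \qquad
  \rho^2(\theta) \;\le\; \frac{\operatorname{var}(s_\theta)}
     {\operatorname{var}(s_\theta) + \operatorname{var}(n_\theta)} ,
\]
% Directions along which small-scale noise dominates the variance cannot be
% well predicted from large-scale fields, however skillful the TF.
```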
This paper is organized as follows: Section 2 presents the data considered and methods used in the comparison of observed and modeled features of statistical prediction. Section 3 introduces the idealized mathematical model used as a conceptual framework for understanding controls on surface predictability. Section 4 compares the structures of statistical prediction in observations and comprehensive models. Inferences based on this comparison are discussed in section 5, and conclusions are given in section 6.
2. Data and methods
Mao and Monahan (2017) studied the characteristics of the linear predictability of surface wind components at 2109 land stations across the globe, most of which are concentrated in the middle latitudes of the Northern Hemisphere. In this study, we consider statistical predictions of surface wind components at a subset of these stations consisting of 557 stations in NAM, 595 stations in EMB, and 715 stations in EAS. These regions are chosen based on the availability of RCM simulations and because they have higher station densities than other areas of the Northern Hemisphere (Fig. 1). To assess the connection between surface heterogeneity and characteristics of predictability of surface wind components, we classify these stations according to two categories: 1) whether the station is in a mountainous region or in flat terrain (denoted "Mt" or "plain") and 2) whether the station is adjacent to water or inland (denoted "coast" or "land"). These two categories result in the four groups of stations illustrated in Fig. 1. The classification of station locations as mountain or plain is based on the maximum elevation within 0.2° of the station location: if the maximum elevation exceeds 1000 m, the station is classified as a mountain station; otherwise, it is a plain station. The radius of 0.2° is chosen to ensure that the classification is based on local terrain. The elevation data used for the classification are 1 arc-min global relief data from the ETOPO1 Global Relief Model (Amante and Eakins 2009; downloaded from https://www.ngdc.noaa.gov/mgg/global/global.html). Coastal stations are classified using the coastline data provided by the Mapping Toolbox of MATLAB (MathWorks 2016): if a station lies within 30 km of the nearest coastline, it is classified as coastal. In such locations, the surface winds are likely to be influenced by the land–water contrast, since sea breezes commonly extend inland as far as 30 km (Oke 2002).
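A minimal sketch of this classification logic is given below, assuming the relief data are available on a regular latitude-longitude grid and the coastline as an array of (lat, lon) points; the function and variable names are illustrative, and the square 0.2° window used here is a simplification of the stated 0.2° radius.

```python
import numpy as np

def classify_station(lat, lon, elev_lats, elev_lons, elev, coast_pts,
                     radius_deg=0.2, elev_thresh_m=1000.0, coast_km=30.0):
    """Classify a station as Mt/plain and coast/land (illustrative sketch).

    elev_lats, elev_lons : 1D coordinate arrays of the relief grid (e.g., ETOPO1)
    elev                 : 2D elevation array [lat, lon], in meters
    coast_pts            : (N, 2) array of coastline (lat, lon) points
    """
    # Mt vs plain: maximum elevation within ~0.2 deg of the station
    in_lat = np.abs(elev_lats - lat) <= radius_deg
    in_lon = np.abs(elev_lons - lon) <= radius_deg
    max_elev = elev[np.ix_(in_lat, in_lon)].max()
    terrain = "Mt" if max_elev > elev_thresh_m else "plain"

    # coast vs land: great-circle (haversine) distance to the nearest coastline point
    R = 6371.0  # mean Earth radius (km)
    lat0, lon0 = np.radians(lat), np.radians(lon)
    clat, clon = np.radians(coast_pts[:, 0]), np.radians(coast_pts[:, 1])
    d = 2.0 * R * np.arcsin(np.sqrt(
        np.sin((clat - lat0) / 2.0) ** 2
        + np.cos(lat0) * np.cos(clat) * np.sin((clon - lon0) / 2.0) ** 2))
    proximity = "coast" if d.min() <= coast_km else "land"

    return terrain, proximity
```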
The locations of the 2109 land stations used for statistical prediction of surface winds. The domains of NAM (557 stations), EMB (595 stations), and EAS (715 stations) are outlined. Stations in the three domains are classified into four groups according to local topography and proximity to water.
The RCM simulations in this study are chosen according to available time coverage and resolution from existing runs in the Coordinated Regional Climate Downscaling Experiment (CORDEX) project as well as the North American Regional Climate Change Assessment Program (NARCCAP). One of the purposes of the CORDEX project is to provide a quality-controlled dataset of downscaled information as a model evaluation framework (Giorgi et al. 2009). The NARCCAP project is currently the most comprehensive regional climate-modeling project for climate change impact studies in North America (Mearns et al. 2009). Longer-duration simulations can lead to more robust statistical results, and higher resolution can potentially contribute to modeling the local-scale physical processes with higher accuracy. All RCM simulations used in this study are at least 19 years long. The horizontal resolution of all RCMs in this study is 50 km or finer. Table 1 summarizes the basic information for all datasets used in this study. More detailed information specific to the RCMs is shown in Table 2.
Summary information for each data type considered in this study.
Details of RCMs considered in this study.
a. Predictands and predictors





The predictors in the study consist of free-tropospheric meteorological fields: temperature T, geopotential height Z, zonal wind U, and meridional wind V at 500 hPa. Following the approach used in Mao and Monahan (2017), the four predictor fields are chosen from a domain of 40° × 40° centered on each station. Previous studies (Monahan 2012; Culver and Monahan 2013) have shown that the correlation structures between surface wind vectors and large-scale free-tropospheric climate variables are often spread across a large area surrounding a station, such that the grid points with high correlation aloft are often not directly above the surface station.
For the prediction of observational data and reanalysis surface fields, the four predictor fields are taken from the NCEP-2 reanalysis product. Previous studies have shown that differences among reanalyses are generally not substantial for the large-scale free-tropospheric flow (e.g., Culver and Monahan 2013). Since the resolution at which tropospheric variables are available from the NCEP-2 reanalysis is 2.5° × 2.5°, each predictor domain contains 256 grid points. For prediction of RCM surface fields, the four predictor fields are taken from the output of the corresponding RCM in a 40° × 40° domain centered on the station. Since the resolution of the RCMs is generally much finer than that of the NCEP-2 reanalysis, we subsample the RCM fields, keeping those points that are closest to each of the 256 grid points in the NCEP-2 reanalysis domain.
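The following is a rough sketch of one way to carry out this nearest-point subsampling, assuming the RCM grid is given as 2D latitude and longitude arrays; the construction of the target grid (relative to the station rather than the fixed NCEP-2 grid) and all names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def subsample_to_ncep2(stn_lat, stn_lon, rcm_lat2d, rcm_lon2d, rcm_field):
    """Keep the RCM grid points nearest each of the 256 NCEP-2 grid points
    (16 x 16 at 2.5 deg spacing) in a 40 deg x 40 deg domain centered on
    the station.  Illustrative sketch only.

    rcm_lat2d, rcm_lon2d : 2D arrays of RCM grid coordinates
    rcm_field            : array (..., ny, nx) of one RCM predictor field
    returns              : array (..., 256) of subsampled values
    """
    # Target points: here constructed relative to the station; in practice
    # the fixed NCEP-2 grid points falling within the domain would be used.
    tgt_lat = stn_lat + np.arange(-18.75, 20.0, 2.5)   # 16 latitudes
    tgt_lon = stn_lon + np.arange(-18.75, 20.0, 2.5)   # 16 longitudes
    tgt_pts = np.array([(la, lo) for la in tgt_lat for lo in tgt_lon])

    # Nearest-neighbor lookup in (lat, lon) space; adequate for a sketch,
    # although great-circle distances would be more precise at high latitudes.
    rcm_pts = np.column_stack([rcm_lat2d.ravel(), rcm_lon2d.ravel()])
    _, idx = cKDTree(rcm_pts).query(tgt_pts)

    flat = rcm_field.reshape(rcm_field.shape[:-2] + (-1,))
    return flat[..., idx]
```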
b. Prediction of surface wind components
The time period of the observations and the reanalysis product is from 1 January 1980 to 31 December 2012. The available time period for RCMs is generally shorter: approximately 20 years for most of the RCMs used in this study (Table 1). We divide data into seasons of June–August (JJA) and December–February (DJF). The minimum sample size for multiple linear regression based on a comprehensive study by Green (1991) is
Statistical prediction presented in this study follows the approach of Mao and Monahan (2017). There is no a priori way to determine the locations of grid points with high correlation in the predictor domain; structures of predictability vary from station to station. To determine predictability in a straightforward way that can be generalized to all stations, we fit a regression model using the four predictors (



































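As a rough illustration of the directional prediction procedure described above (projecting the daily surface wind vector onto directions from 0° to 170° at 10° intervals and fitting a linear TF to each projected component), the sketch below uses ordinary least squares on a generic predictor matrix with simple k-fold cross-validation; the angle convention, cross-validation scheme, and any dimension reduction of the 500-hPa predictor fields are assumptions of this sketch rather than the exact published configuration.

```python
import numpy as np

def directional_predictability(u, v, X, n_folds=5, dtheta_deg=10):
    """Cross-validated linear predictability of surface wind components
    projected onto directions 0, 10, ..., 170 deg.  Illustrative sketch.

    u, v : (nt,) daily zonal and meridional surface wind components
    X    : (nt, p) matrix of large-scale predictors (e.g., flattened
           500-hPa T, Z, U, V fields, possibly after dimension reduction)
    returns : projection angles (deg) and the correlation between
              cross-validated predictions and the projected component
    """
    nt = len(u)
    angles = np.arange(0, 180, dtheta_deg)
    rho = np.empty(angles.size)
    folds = np.array_split(np.arange(nt), n_folds)
    A = np.column_stack([np.ones(nt), X])            # design matrix with intercept
    for k, ang in enumerate(np.radians(angles)):
        y = u * np.cos(ang) + v * np.sin(ang)        # wind component along angle
        yhat = np.empty(nt)
        for test in folds:                           # simple k-fold cross-validation
            train = np.setdiff1d(np.arange(nt), test)
            beta, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
            yhat[test] = A[test] @ beta
        rho[k] = np.corrcoef(y, yhat)[0, 1]
    return angles, rho
```

The minimum and maximum of this curve over the projection angles, together with a measure of their contrast, correspond loosely to the directional predictability metrics compared with model output in section 4.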
3. Idealized model of predictability




















The constraint that
4. Results
In this section, we present the findings related to the comparison of metrics of predictability
a. Overview of comprehensive models (RCMs and reanalysis) versus observation
The relationship between metrics of predictability from comprehensive models and observations can be summarized in Taylor diagrams, a graphical tool for assessing how closely spatially distributed model results match observations by quantifying the spatial correlation between modeled and observed fields, the centered root-mean-square difference, and the spatial standard deviations of the observed and model-based fields. To facilitate comparison among different regions, seasons, and terrain types using Taylor diagrams, all fields are normalized by the spatial standard deviation of the corresponding observational fields.
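A minimal sketch of the three statistics summarized by such a diagram, computed across stations for one modeled predictability metric against the corresponding observed field, is given below; the normalization by the observed spatial standard deviation follows the text, and the function name is illustrative.

```python
import numpy as np

def taylor_stats(obs, mod):
    """Taylor-diagram statistics for a field defined at a set of stations:
    spatial correlation, normalized standard deviation, and normalized
    centered root-mean-square difference.  Illustrative sketch.

    obs, mod : 1D arrays of an observed and a modeled predictability
               metric at the same stations
    """
    obs_anom = obs - obs.mean()
    mod_anom = mod - mod.mean()
    sigma_obs = obs_anom.std()

    corr = np.corrcoef(obs, mod)[0, 1]                    # spatial correlation
    crmsd = np.sqrt(np.mean((mod_anom - obs_anom) ** 2))  # centered RMS difference

    # Normalizing by the observed spatial standard deviation puts a perfect
    # model at radius 1 and zero centered RMS difference on the diagram.
    return corr, mod_anom.std() / sigma_obs, crmsd / sigma_obs
```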
Several overall patterns can be seen from Figs. 2–4. The difference between mountainous and plain terrain is more evident than the difference between coast and land regions in NAM and EMB, especially in terms of comparison of modeled and observed
Taylor diagrams showing the comparison of predictability of surface wind components obtained from observations with those simulated by models (i.e., RCMs NA1, NA2, and NCEP-2 reanalysis; see Table 1) for groups of stations classified by their terrains in NAM.
As in Fig. 2, but in EMB.
As in Fig. 2, but in EAS.
b. Models versus observations in NAM and EMB
Maps of the spatial distribution of metrics of predictability for the season of JJA are shown in Figs. 5–7 to highlight the regional differences. As maps for the season of DJF are very similar to those of JJA, similar conclusions can also be drawn for DJF (see Figs. S1–S3 in the online supplemental material; hereinafter supplemental figures will have a leading S in their number). The comparison of predictability metrics resulting from comprehensive models and observations is quantified using the ratio
(top) Observed daily
As in Fig. 5, but for
As in Fig. 5, but for
The relationships between metrics of predictability and topography are similar in NAM and EMB. In these two domains, stations in the mountainous regions (i.e., western NAM and southern EMB) are more likely to have low predictability and strong predictive anisotropy, whereas predictability for stations in the plain region is generally good with weak predictive anisotropy. The values of all three metrics simulated by both RCMs and reanalysis tend to be higher than observations in regions characterized by mountainous terrain, while simulated metrics by comprehensive models are generally in reasonable agreement with observations in plain regions. The contrast between the plain and mountainous regions is more pronounced for the comparison of
c. Anomalous predictive structures in EAS
In general, the patterns of simulated
However, there is no systematic contrast in values of
5. Discussion
As in the previous section, we focus on JJA results, as the seasonal differences between predictability metrics simulated by comprehensive models and derived from observations are not substantial. Furthermore, only the difference between mountainous and plain terrain is discussed, as there is no clear contrast between coastal and inland stations (section 4). Finally, we only present the analysis for NAM. The analyses for EMB and EAS (east of 90°E) are presented in the supplemental material, and conclusions similar to those for NAM can be drawn from these two regions, although some patterns in EAS differ from NAM and EMB, possibly because of the anomalous predictive structures found in EAS as discussed in the following subsections. The anomalous region west of 90°E in EAS is not included in the following analysis, since the underestimation of simulated metrics of predictability by RCMs in this region is a regionally specific systematic bias of the RCMs.
a. Inferences from overestimation by RCMs and reanalysis
As shown in Figs. 5–7, overestimation of
Estimated probability density functions of
Figure 8 shows that the ratio of
The general pattern shown in Fig. 8 indicates that when surface winds are influenced mainly by small-scale physical processes (i.e., resulting in the low predictability of surface wind components), simulated surface wind predictability by comprehensive models tends to be inflated. In contrast, when surface winds are dominated by large-scale processes, the predictability of observed and simulated surface wind components by comprehensive models is approximately the same. Moreover, small values of observed
b. Inferences from the idealized mathematical model
Metrics of predictability can be related to the quantities η, ζ, γ, and ρ in the idealized mathematical model. Figures 9–11 show the estimated probability density functions of predictability metrics conditioned on various quantities from the idealized mathematical model. Overall, the relationships are similar for observations and comprehensive models, even though unresolved physical processes in the comprehensive models result in a small-scale/large-scale decomposition different from that of observations and occupy different regions of idealized model "parameter space."
Estimated probability density functions of
Estimated probability density functions of predictability metrics [
As in Fig. 10, but conditioned on
We focus on the orthogonal coordinates aligned along and across the direction of minimum predictability at each station, denoted (
Figure 9 shows that predictive anisotropy tends to be weakened with stronger predictive signal strength and that ζ, the fraction of predictive signal along the direction of
Figure 10 shows that values of
Finally, Fig. 11 shows that there is a negative relationship between
Among all controls considered, the relationships between
In general, the plots of probability density functions of predictability metrics conditioned on statistical measures from the idealized model are similar in plain and mountainous terrain, which suggests that the small-scale physical processes influencing the predictability of surface winds are not confined to locations characterized by topographic complexity. It should be noted, however, that there are locations where RCMs are able to simulate relatively strong observed predictive anisotropy with good skill, a clear indication that small-scale physical processes are not responsible for predictive anisotropy at these locations. The results of this study cannot identify the origin of the small-scale physical processes that primarily limit the predictability of surface wind components; such an investigation is an interesting direction for future research.
6. Conclusions
We have compared characteristics of the predictability of surface wind components by linear statistical prediction using both station-based observational data and output from various comprehensive models (RCMs and the NCEP-2 reanalysis) in three domains: North America (557 stations), Europe–Mediterranean Basin (595 stations), and East Asia (715 stations). We divided stations into four groups according to two terrain categories: 1) adjacent to large water bodies (coastal) or inland (land) and 2) located in mountainous or plain areas. In NAM and EMB, the characteristics of predictability from the comprehensive models in plain regions are generally close to those of observations, while mountainous regions are dominated by overestimation of predictability metrics in the simulations. In contrast, the difference between mountainous and plain terrain is not obvious in EAS, where overestimation of predictability is common east of 90°E regardless of terrain and the area west of 90°E is dominated by underestimates in the RCMs (a pattern not observed in the reanalysis). There is no systematic pattern of characteristics of predictability associated with inland and coastal stations.
Comparison of the minimum and maximum directional predictability, as well as the predictive anisotropy, from observations and simulations by comprehensive models indicates that RCMs cannot resolve small-scale physical processes primarily responsible for limiting the predictability of surface wind components. However, there are exceptions; that is, strong predictive anisotropy in observations can be captured well by RCMs at some stations, indicating that small-scale processes are not responsible for predictive anisotropy at these locations. Interpreting predictability metrics using an idealized mathematical model indicates that variability of
The important overall conclusions following from the results of this study are that the strength of predictive anisotropy is robustly controlled by the variation of
Comprehensive models at the scale of the RCMs and the reanalysis used in this study do not capture the small-scale processes that can limit predictability and cause predictive anisotropy. One area of future study is to more precisely identify the scale of the physical processes missing from RCMs, which may enhance the utility of RCMs as tools for dynamical downscaling. Small-scale processes related to local terrain are generally not well represented in most comprehensive models because of oversimplified terrain. By simulating metrics of predictability of surface wind components using mesoscale models with varying spatial resolutions, we can determine how fine the model resolution needs to be in order to capture the physical processes related to local features. Particular attention is evidently needed to study the physical processes related to local wind systems in EAS and their representation in RCMs.
Acknowledgments
This research was supported by the Discovery Grants program of the Natural Sciences and Engineering Research Council of Canada. We acknowledge the World Climate Research Programme’s Working Group on Regional Climate and the Working Group on Coupled Modelling, former coordinating body of CORDEX and responsible panel for CMIP5. We thank the climate modeling groups (listed in Table 2 of this paper) for producing and making available their model output. We also acknowledge the Earth System Grid Federation infrastructure, an international effort led by the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison, and other partners in the Global Organization for Earth System Science Portals (GO-ESSP). We also thank Alex Cannon, Bill Merryfield, Lucinda Leonard, Chris Fletcher, Katherine Klink, and two anonymous reviewers for their helpful comments.
REFERENCES
Amante, C., and B. W. Eakins, 2009: 1 arc-minute global relief model: Procedures, data sources and analysis. NOAA Tech. Memo. NESDIS NGDC-24, 19 pp.
Church, J. A., and Coauthors, 2013: Sea level change. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 1137–1216.
Culver, A. M., and A. H. Monahan, 2013: The statistical predictability of surface winds over western and central Canada. J. Climate, 26, 8305–8322, https://doi.org/10.1175/JCLI-D-12-00425.1.
Davies, T., M. J. Cullen, A. J. Malcolm, M. Mawson, A. Staniforth, A. White, and N. Wood, 2005: A new dynamical core for the Met Office’s global and regional modelling of the atmosphere. Quart. J. Roy. Meteor. Soc., 131, 1759–1782, https://doi.org/10.1256/qj.04.101.
Giorgi, F., C. Jones, and G. R. Asrar, 2009: Addressing climate information needs at the regional level: The CORDEX framework. WMO Bull., 58, 175–183.
Green, S. B., 1991: How many subjects does it take to do a regression analysis? Multivar. Behav. Res., 26, 499–510, https://doi.org/10.1207/s15327906mbr2603_7.
He, Y., A. H. Monahan, C. G. Jones, A. Dai, S. Biner, D. Caya, and K. Winger, 2010: Probability distributions of land surface wind speeds over North America. J. Geophys. Res., 115, D04103, https://doi.org/10.1029/2008JD010708.
Kanamitsu, M., W. Ebisuzaki, J. Woollen, S.-K. Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter, 2002: NCEP–DOE AMIP-II Reanalysis (R-2). Bull. Amer. Meteor. Soc., 83, 1631–1643, https://doi.org/10.1175/BAMS-83-11-1631.
Mao, Y., and A. Monahan, 2017: Predictive anisotropy of surface winds by linear statistical prediction. J. Climate, 30, 6183–6201, https://doi.org/10.1175/JCLI-D-16-0507.1.
Mao, Y., and A. Monahan, 2018: Linear and nonlinear regression prediction of surface wind components. Climate Dyn., https://doi.org/10.1007/s00382-018-4079-5, in press.
MathWorks, 2016: Mapping Toolbox. https://www.mathworks.com/help/map/.
Mearns, L. O., and Coauthors, 2007: The North American Regional Climate Change Assessment Program dataset (updated 2014). National Center for Atmospheric Research Earth System Grid data portal, Boulder, CO, https://doi.org/10.5065/D6RN35ST.
Mearns, L. O., W. Gutowski, R. Jones, R. Leung, S. McGinnis, A. Nunes, and Y. Qian, 2009: A regional climate change assessment program for North America. Eos, Trans. Amer. Geophys. Union, 90, 311, https://doi.org/10.1029/2009EO360002.
Monahan, A. H., 2012: Can we see the wind? Statistical downscaling of historical sea surface winds in the subarctic northeast Pacific. J. Climate, 25, 1511–1528, https://doi.org/10.1175/2011JCLI4089.1.
Oke, T. R., 2002: Boundary Layer Climates. Routledge, 464 pp.
Parker, W. S., 2016: Reanalyses and observations: What’s the difference? Bull. Amer. Meteor. Soc., 97, 1565–1572, https://doi.org/10.1175/BAMS-D-14-00226.1.
Rummukainen, M., 2010: State-of-the-art with regional climate models. Wiley Interdiscip. Rev.: Climate Change, 1, 82–96, https://doi.org/10.1002/wcc.8.
Salameh, T., P. Drobinski, M. Vrac, and P. Naveau, 2009: Statistical downscaling of near-surface wind over complex terrain in southern France. Meteor. Atmos. Phys., 103, 253–265, https://doi.org/10.1007/s00703-008-0330-7.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2005: A description of the Advanced Research WRF version 2. NCAR Tech. Note NCAR/TN-468+STR, 88 pp., http://dx.doi.org/10.5065/D6DZ069T.
Strandberg, G., and Coauthors, 2015: CORDEX scenarios for Europe from the Rossby Centre regional climate model RCA4. SMHI Rep. Meteorology and Climatology 116, 84 pp., https://www.smhi.se/polopoly_fs/1.90275!/Menu/general/extGroup/attachmentColHold/mainCol1/file/RMK_116.pdf.
Sun, C., and A. Monahan, 2013: Statistical downscaling prediction of sea surface winds over the global ocean. J. Climate, 26, 7938–7956, https://doi.org/10.1175/JCLI-D-12-00722.1.
van der Kamp, D., C. L. Curry, and A. H. Monahan, 2012: Statistical downscaling of historical monthly mean winds over a coastal region of complex terrain. II. Predicting wind components. Climate Dyn., 38, 1301–1311, https://doi.org/10.1007/s00382-011-1175-1.
von Salzen, K., and Coauthors, 2013: The Canadian Fourth Generation Atmospheric Global Climate Model (CANAM4). Part I: Representation of physical processes. Atmos.–Ocean, 51, 104–125, https://doi.org/10.1080/07055900.2012.755610.
Wolfram, 2016: WeatherData source information. Accessed 1 January 2016, http://reference.wolfram.com/language/note/WeatherDataSourceInformation.html.