This paper describes a new computationally efficient and statistically robust sampling method for generating dynamically downscaled climatologies. It is based on a Monte Carlo method coupled with stratified sampling. A small yet representative set of “case days” is selected with guidance from a large-scale reanalysis. When downscaled, the sample closely approximates the long-term meteorological record at a location, in terms of the probability density function. The method is demonstrated for the creation of wind maps to help determine the suitability of potential sites for wind energy farms. Turbine hub-height measurements at five U.S. and European tall tower sites are used as a proxy for regional climate model (RCM) downscaled winds to validate the technique. The tower-measured winds provide an independent test of the technique, since RCM-based downscaled winds exhibit an inherent dependence upon the large-scale reanalysis fields from which the case days are sampled; these same reanalysis fields would provide the boundary conditions to the RCM. The new sampling method is compared with the current approach widely used within the wind energy industry for creating wind resource maps, which is to randomly select 365 case days for downscaling, with each day in the calendar year being represented. The new method provides a more accurate and repeatable estimate of the long-term record of winds at each tower location. Additionally, the new method can closely approximate the accuracy of the current (365 day) industry approach using only a 180-day sample, which may render climate downscaling more tractable for those with limited computing resources.
A growing number of weather- and climate-sensitive sectors require reliable, fine spatial resolution, multidecadal climate datasets to help guide their decision making process. Example applications include renewable energy (Pryor et al. 2005a,b), water resource management (Rasmussen et al. 2011), national security (Rife et al. 2010), and human health (Kunkel et al. 1999). Global reanalyses and general circulation model (GCM) simulations represent the most readily available sources of multidecadal gridded climate data for both present-day and future applications. These datasets serve a broad spectrum of users (e.g., Solomon 2007; Rienecker et al. 2011). However, for many applications, global datasets do not have adequate spatial or temporal resolution to resolve the local or regional aspects of weather and climate. Therefore, regional climate models (RCMs) are frequently employed to downscale these comparatively coarse-resolution global datasets to smaller geographic regions, typically on a case-by-case basis, to provide the finer spatial and temporal resolution information needed for a given application.
Although RCMs have successfully been employed for many applications, their biggest limitation is their computational expense (Leung et al. 2003), which usually confines RCM use to public-sector atmospheric research and operational centers, and a handful of large private-sector corporations. Given the increasing demand for specialized regional climatologies (Wang et al. 2004), many users would benefit from a downscaling technique that 1) is computationally efficient and 2) simultaneously yields a highly representative and statistically robust (i.e., repeatable) sample of the full range of meteorological conditions over a multiyear period. In this paper we introduce a technique for objectively selecting a representative sample of “case days” from among a multiyear record of a large-scale climate dataset (i.e., a reanalysis or GCM), for use in generating an RCM-downscaled climatology that is representative of the entire multiyear record. We demonstrate the utility of the technique by applying it to wind maps via “proxy” climate downscaling, for help in determining the suitability of potential sites for wind energy farms. The term “proxy” is used here because, for the sake of demonstration, turbine hub-height tower observations are employed to validate the technique, rather than the RCM-downscaled winds that would ordinarily be generated. Using tower-observed winds likely provides the most stringent and independent test of our technique, because RCM-downscaled winds exhibit an inherent dependence upon the large-scale reanalysis–GCM fields that provide their boundary conditions; these same reanalysis–GCM fields inform the selection of the case days that are downscaled, and they provide the boundary conditions to the RCM. Therefore, the tower-observed winds are likely to be less similar to a given large-scale reanalysis–GCM wind field than an RCM-downscaled wind field would be. 
The generation of wind maps is a critical first step for the planning and site suitability phase of the wind farm development process (Landberg et al. 2003). The demand for RCM-based wind maps is likely to increase as RCMs become more accurate, and as wind farm development expands into observation-sparse regions where modeling is one of the few options for characterizing the long-term wind variability (Blanco 2009; Bilgili et al. 2011). Therefore, it is desirable to develop a more computationally efficient and reliable approach to generating RCM-based wind maps that accurately characterizes the full range of meteorological variability for yearlong to multidecadal periods.
RCM-based wind maps are typically created from simulations that cover only a small fraction of the entire multidecadal period. The most common “industry” approach is to downscale a 365-day sample drawn from the past 10–30 yr from a global reanalysis (3650–10 950 days), with equal representation for all days of the year. The fundamental assumption is that the wind characteristics in the downscaled sample will be representative of the full 10–30-yr population, while simultaneously allowing the seasonal cycle to be represented (e.g., Schwartz and Elliott 2004; AWS TruePower 2011). Although this approach is efficient, we will show that it has potentially large sources of error, including the high likelihood of undersampling extremes. Additionally, the random methodology yields highly inconsistent results. That is, any given random 365-day sample is likely to produce a wind map that is dissimilar from any other random sample of the same size chosen from the same 10–30-yr population (demonstrated in section 4). This is a serious limitation since typically only a single sample is drawn and downscaled.
As stated earlier, the new technique is validated using turbine hub-height tower measurements, and its improvement relative to the current industry-standard technique is assessed with statistics that compare the probability density functions derived from each technique. We demonstrate that our new sampling technique will enable more computationally efficient RCM-based downscaling (we can achieve the same result with fewer samples) and, simultaneously, will provide a statistically consistent and representative case-day sample each time it is applied. The new technique therefore makes RCM-based wind maps a more attractive option for the wind energy industry. Sections 2 and 3 describe the datasets and methods employed in this study. Results and discussion are then presented in section 4, followed by the conclusions in section 5.
a. Tall tower measurements
Very few high-quality wind measurements at standard wind turbine hub height (~80 m AGL) exist within the public domain, and fewer still have records that span multiple years. We therefore are very fortunate to have access to a set of five research-quality tall tower datasets. These towers were chosen based on the following criteria: 1) they are publicly available, 2) they measure winds at or near turbine hub height within a variety of geographic and climatic regimes, 3) their measurements are both of high quality and reliable (i.e., few missing or suspect data records), and 4) their hourly (or finer temporal resolution) measurement records span at least 6 yr, with one as long as 10 yr. Table 1 summarizes the tower measurements used in this study, with stations ranging from the Pacific Northwest and southern Great Plains of the United States to the coastal plains of the Netherlands. Also included are the correlations with both daily mean wind speed and wind direction obtained from a new global reanalysis dataset, over the towers’ respective periods of record. The measurements consist of either 5- or 10-min averages that are used to calculate hourly mean values.
b. NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA)
The downscaling method described herein makes extensive use of a new state-of-the-science global reanalysis: the National Aeronautics and Space Administration’s (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA). MERRA is a reanalysis of the satellite era (1979–present) using the Goddard Earth Observing System Data Assimilation System, version 5 (GEOS-5 DAS). Thorough descriptions of the GEOS-5 DAS and the MERRA project appear in Rienecker et al. (2007) and Rienecker et al. (2011). MERRA assimilates an extensive set of measurements from around the globe. Assimilated data most relevant for wind energy applications are winds from rawinsondes, profilers, Next Generation Weather Radar (NEXRAD), land-based stations, aircraft, ships, and the Quick Scatterometer (QuikSCAT). An array of satellite-based measurements is also assimilated by MERRA, including Special Sensor Microwave Imager (SSM/I) radiances, Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) radiances, and cloud-drift and water vapor winds. Additionally, MERRA is one of the few reanalyses to assimilate data from the entire constellation of NASA Earth Observing System (EOS) satellites, including the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the polar-orbiting satellites Aqua and Terra, and the Atmospheric Infrared Sounder (AIRS) radiances. It is important to note that none of the tall towers used for this study were assimilated by MERRA, and accordingly, the relationships found herein between MERRA and the tower data are extendable to any location on the globe.
For this study, we use the native-resolution three-dimensional 6-hourly analyzed state (the MAI6NVANA data product), which is available on a 0.5° latitude × 0.67° longitude horizontal grid, with 72 terrain-following vertical layers ranging from near the surface to 0.01 hPa. All output for the 1979–2009 period is obtained from the NASA Mirador server (http://mirador.gsfc.nasa.gov/). The key advantage of MERRA over other widely used reanalyses, such as the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis, is its comparatively high horizontal and vertical resolution, which better resolves many topographic and littoral features. Kennedy et al. (2011) compared MERRA with atmospheric soundings from the U.S. Department of Energy Atmospheric Radiation Measurement Program Southern Great Plains site (Xie et al. 2010) and found good agreement for the horizontal winds, a result that is relevant for the present study. The MERRA output for winds, temperature, pressure, and humidity is extracted at each of the five tall tower locations as a distance-weighted average of the four nearest grid points, using the bilinear remapping algorithm available in the Spherical Coordinate Remapping and Interpolation Package (SCRIP; Jones 1999). We only use MERRA output from its lowest vertical level (~130 m AGL), which is about 50 m above the standard turbine hub height. The 6-hourly outputs from MERRA are used to calculate daily mean wind speeds and wind directions to describe the dominant mode of wind variability on a given day at a given site.
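MERRA supplies 6-hourly u and v wind components, and forming the daily mean direction requires vector averaging because compass directions wrap at 360°/0°. A minimal sketch (the function name and the meteorological from-direction convention are our assumptions):

```python
import numpy as np

def daily_means(u6, v6):
    """Daily mean wind speed and direction from 6-hourly wind components.

    The mean speed is the average of the instantaneous speeds, and the
    mean direction comes from the vector-averaged components, since
    compass directions cannot be averaged arithmetically across the
    360/0 wrap.  Direction follows the meteorological convention: the
    direction the wind blows FROM, in degrees clockwise from north.
    """
    u6 = np.asarray(u6, dtype=float)
    v6 = np.asarray(v6, dtype=float)
    speed = np.mean(np.hypot(u6, v6))       # mean of instantaneous speeds
    ubar, vbar = u6.mean(), v6.mean()       # vector-averaged components
    direction = np.degrees(np.arctan2(-ubar, -vbar)) % 360.0
    return speed, direction
```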
To provide a sense of MERRA’s ability to represent temporal variability in the regional-to-local-scale winds, a number of statistics comparing daily mean wind speeds from MERRA and each tower are computed for winter (January–March: JFM) and summer (July–September: JAS), and are shown in Table 2. The range in the yearly seasonal mean values is also shown for each statistic. It is clear from the bias, the centered root-mean-square error (CRMSE: the RMSE of the anomalies from the respective seasonal mean), and correlation statistics that there is substantial spatial, intra-annual, and interannual variability in the skill of MERRA. Therefore, potential users are encouraged to first assess the quality of MERRA (or any reanalysis that may be used) for a given region and period of interest prior to employing the technique introduced in this paper. It is noteworthy that despite the time-variable skill of MERRA noted above, we find no gross discontinuities or spurious trends in the MERRA wind fields, although other authors have found temporal discontinuities in MERRA’s hydrologic fields due to changes in the satellite observing system through time (e.g., Trenberth et al. 2011).
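The three skill measures in Table 2 can be computed directly from paired daily-mean series; a sketch following the definitions given above (the function name is ours):

```python
import numpy as np

def skill_stats(merra, tower):
    """Bias, centered RMSE, and correlation between paired daily means.

    CRMSE is the RMSE of the anomalies from each series' own mean,
    i.e., the RMSE remaining after the overall bias is removed.
    """
    merra = np.asarray(merra, dtype=float)
    tower = np.asarray(tower, dtype=float)
    bias = np.mean(merra - tower)
    crmse = np.sqrt(np.mean(((merra - merra.mean())
                             - (tower - tower.mean())) ** 2))
    corr = np.corrcoef(merra, tower)[0, 1]
    return bias, crmse, corr
```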
3. Toward efficient climate downscaling
In this section we introduce a computationally efficient method for selecting a highly representative and statistically robust sample of the full range of wind conditions observed at a given location, for use in creating downscaled climatologies. This technique is demonstrated for the application of creating wind maps to help identify locations favorable for wind farm development. In lieu of having a multidecadal record of winds from tower measurements, RCM-based downscaling of a global reanalysis over a location of interest is an ideal alternative for a preliminary assessment of the wind energy potential, especially since RCM downscaling can reveal finescale variability across a proposed wind farm site (length scales of 5 km and larger) that could not easily be discerned via measurements. However, it is not practical to perform high-resolution dynamical downscaling over a multidecadal period for every candidate wind farm, given the large computational expense required (even when a modern supercomputer is available). Thus, techniques have been developed by the wind energy industry to select some part of the meteorological record for downscaling, to estimate statistics that represent the entire multidecadal record.
a. Industry standard method
The approach widely used by the wind energy industry is to select a 365-day sample for downscaling, with equal representation for each day in the calendar year. For each calendar day, a year is randomly chosen, typically from among the most recent 10 yr. For example, for 1 January the year 2005 might be chosen, and for 2 January the year 2009 may be chosen, and so on through 31 December. In this way, consecutive calendar days can be drawn from different years, or from the same year. A large-scale reanalysis (typically the NCEP–NCAR reanalysis) for this collection of 365 days is then dynamically downscaled with an RCM, and the result is assumed to represent the full range of hourly wind conditions observed over a multidecadal period at a prospective wind farm. In practice, the sampling and downscaling are performed only once at a given location (e.g., Schwartz and Elliott 2004; AWS TruePower 2011). As we will show, this procedure yields large sample-to-sample variations in its representativeness, rendering the resulting estimate of the long-term wind speeds highly uncertain.
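The selection loop described above is simple to state in code; a sketch (the names are ours, and 29 February is ignored for brevity):

```python
import random

def industry_sample(candidate_years, seed=None):
    """Industry-standard draw: for each day of a (non-leap) calendar
    year, randomly choose one of the candidate years, yielding 365
    (month, day, year) case days for downscaling.
    """
    rng = random.Random(seed)
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    sample = []
    for month, ndays in enumerate(days_in_month, start=1):
        for day in range(1, ndays + 1):
            # Consecutive calendar days may land in different years
            sample.append((month, day, rng.choice(candidate_years)))
    return sample
```

In practice this draw is performed only once, which is why its sample-to-sample variability matters.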
b. A new Monte Carlo sampling technique
Herein, we describe a new approach for approximating the 5–10-yr record of wind speed and direction at a given location. The technique can potentially be applied to any region and to any set of climatic variables, for any time span of interest. It is an easy-to-implement extension to the current industry wind mapping technique. The basic idea is to select a small yet representative sample of case days that, when downscaled, closely approximates the long-term local wind record at a location, and is both highly accurate and statistically reliable (i.e., exhibits small sample-to-sample variability). As outlined below, a readily available global reanalysis is used to guide the selection of case days. The current generation of global reanalyses has grid increments that range from about 0.5° to 2.0° in latitude and longitude. The fundamental hypothesis is that the synoptic-scale patterns represented in the reanalysis strongly drive the meteorological conditions on the regional to local scales. Thus, if we can choose a set of case days that represent the occurrence and frequency of all the dominant modes of synoptic-scale wind variability, we may be able to approximate the full range of local wind conditions by downscaling the reanalyses on those days. Our goal is to minimize the number of case days required to characterize the full range of wind conditions at a given location, at a desired level of accuracy and reliability.
1) Simplifying assumptions
Given that a quantitative description of the wind variability on any given day involves many degrees of freedom, it may be difficult to identify and describe the entire range of conditions that contribute to the probability density function (PDF) of wind speed and wind direction over a multiyear period for a location (e.g., Van den Dool 1994; Hamill and Whitaker 2006). We therefore invoke two simplifying assumptions that are specific for wind mapping, following the approaches of Hamill and Whitaker (2006) and Van den Dool (1989). First, because the distributions of the wind speeds and wind directions at turbine hub height (~80 m AGL) are typically all that are required for creating wind maps to identify sites having favorable wind energy potential within a region, only the large-scale reanalysis state for the grid point at that location and hub height is used. Second, it may be sufficient to account only for the daily mean characteristics of the hub-height winds when defining the dominant large-scale modes of wind variability, rather than using the instantaneous 6-hourly values available in most global reanalyses. It should be noted that we experimented with several approaches for selecting the case days, including clustering techniques such as self-organizing maps, and various parameters describing the wind’s state (such as the wind’s temporal and/or spatial variability on a given day), and the technique outlined below yielded (by far) the best overall results.
2) Monte Carlo sets of case days
We start by creating a large number of sets of case days, where each set is a candidate for downscaling. We empirically find that O(200 000) sets of case days are required to find the single set that well represents the actual climatology for the 5–10-yr datasets used for this study,1 and as described below, this calculation is fairly trivial, since it involves resampling from data we already have at hand. A given set of case days is generated using stratified sampling (e.g., Cochran 1977; Thompson 1992). In stratified sampling, the population is partitioned into groups or strata, and a sample is proportionally drawn from each stratum. Similar to the industry standard method, the strata are defined as calendar months populated with dates from the most recent N years, where the number of years can easily be adjusted to suit the needs of a particular application. Consider the situation where we are attempting to approximate the 10-yr record of hub-height winds at a location using only a 180-day sample of case days. In this case, 15 days are randomly drawn (without replacement) from each monthly stratum, where each calendar day and year from within that stratum has an equal probability of being drawn. This ensures that each calendar month is represented equally. We note that this selection procedure may result in the same day of the month being chosen more than once, but each instance will occur in a different year.
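The stratified draw described above can be sketched as follows (the function names are ours, and the date bookkeeping is simplified):

```python
import random

def stratified_set(dates_by_month, days_per_month, rng):
    """Draw one candidate set of case days: `days_per_month` dates are
    sampled without replacement from each calendar-month stratum.
    `dates_by_month` maps month (1-12) to the list of all (year, month,
    day) dates available in the multiyear record.
    """
    sample = []
    for month in range(1, 13):
        sample.extend(rng.sample(dates_by_month[month], days_per_month))
    return sample

def monte_carlo_sets(dates_by_month, days_per_month=15, n_sets=200_000,
                     seed=0):
    """Generate the full collection of candidate sets (O(200 000) in
    the paper); each set equally represents every calendar month."""
    rng = random.Random(seed)
    return [stratified_set(dates_by_month, days_per_month, rng)
            for _ in range(n_sets)]
```

With 15 days per month, each candidate set contains the 180 case days used in the example above.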
3) Using MERRA to choose the most representative set of case days
Next, we obtain the distributions of daily mean wind speeds and daily mean wind directions from MERRA at the prospective wind farm for each of the 200 000 sets of case days that were previously drawn. Again, we only use MERRA output from its lowest vertical level (~130 m AGL), and the 6-hourly outputs are used to form daily means to characterize the dominant mode of wind variability on a given day. Each of the 200 000 sample distributions is then compared to the distribution for the entire population of MERRA analyses corresponding to the record of tower measurements at a given location (2190–3650 daily values; hereafter called the full distribution), and we identify the single sample whose distributions of daily mean wind speed and wind direction most closely match the full distributions of daily mean wind speed and direction.2 The large number of sets ensures that at least one set will closely match the full distribution’s climatology with a relatively modest number of case days, few enough to make downscaling affordable. The quality of the match is judged according to an objective distance metric that is inspired by the χ2 statistic, and amounts to comparing the data histograms for a given N-day sample from MERRA to the corresponding histograms from MERRA’s full 6–10-yr distribution. The distance metric d is defined according to
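The displayed equation is missing from this copy of the text. From the definitions that follow (relative frequencies tij and aij, weights wij = 1/tij, and summation over nvars variables and nbins bins), it presumably has the form

```latex
d = \sum_{i=1}^{n_{\mathrm{vars}}} \sum_{j=1}^{n_{\mathrm{bins}}} w_{ij}\,\left(a_{ij} - t_{ij}\right)^{2},
\qquad w_{ij} = \frac{1}{t_{ij}},
```

with each variable's contribution standardized before the summation over variables, as the text describes.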
where tij denotes the relative frequencies of daily mean wind speed and wind direction values occurring in the full distribution (i.e., nvars = 2) and aij denotes the corresponding relative frequencies of speed and direction values within a given sample. The data from each variable’s sample distribution are divided into bins bounded by every fifth percentile (i.e., nbins = 20). This bin width yields the most stringent measure of “closeness” for our distance metric, while simultaneously minimizing quantization errors and uncertainties associated with having too few members within a given bin for a particular sample size. The weight wij = 1/tij scales the squared difference in each bin in inverse proportion to that bin’s frequency in the full distribution, so that the metric rewards a close match to the full distribution in all bins, including the less frequently occurring values at the tails of the distribution. The d values for wind speed and wind direction are then converted to standardized anomalies prior to summation over all nvars and nbins to ensure that each variable’s contribution to the combined distance is equally weighted. The most representative sample therefore has the minimum combined distance from the full distributions’ frequencies.
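A single-variable sketch of this metric, assuming the percentile bin edges are taken from the full distribution (the standardization and summation over both variables described above are omitted here):

```python
import numpy as np

def distance_metric(sample, full, nbins=20):
    """Chi-square-inspired distance between a sample's histogram and
    the full distribution's histogram for one variable (e.g., daily
    mean wind speed).  Bins are bounded by every fifth percentile of
    the full record, and each bin's squared frequency error is
    weighted by w_j = 1/t_j, giving rarer bins proportionally more
    weight."""
    edges = np.percentile(full, np.linspace(0.0, 100.0, nbins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # capture all values
    t = np.histogram(full, bins=edges)[0] / len(full)      # full freqs
    a = np.histogram(sample, bins=edges)[0] / len(sample)  # sample freqs
    return np.sum((a - t) ** 2 / t)
```

In the Monte Carlo search, this distance would be evaluated for every candidate set of case days, and the set with the minimum combined distance is retained.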
4) Downscaling the most representative set of case days
In practice, once the single most representative set of case days is identified from the synoptic-scale wind variability in MERRA, the set of reanalyses on those days would be dynamically downscaled with an RCM to approximate the multiyear record of hourly wind speeds and wind directions. However, because RCM downscaling is very expensive, even with a reduced sample size, for proof of concept we instead used the actual observations from the tall towers (as described in section 2) to serve as a proxy for the RCM-downscaled climatology. Therefore, a key assumption in this study is that our downscaling model is perfect, which has the advantage of removing the uncertainties introduced through an unavoidably imperfect regional model that would otherwise be used for the downscaling [e.g., the Weather Research and Forecasting (WRF) model]. Overall, using tower-observed winds likely provides a more stringent and independent test of our technique compared to using RCM-downscaled winds. This is because RCM-downscaled winds exhibit an inherent dependence upon the large-scale reanalysis fields from which the case days are sampled, since the RCM boundary conditions are derived from the reanalysis. Therefore, the tower-observed winds are likely to be less similar to a given large-scale reanalysis–GCM wind field than an RCM-downscaled wind field would be. Using tower observations as the downscaled proxy may also expose limitations of the MERRA reanalysis, such as MERRA’s inability to resolve highly localized effects caused by terrain and/or coastlines. Finally, measurement errors and instrumentation siting issues, inadequate quality control of the measurements, and the fundamental mismatch between point measurements and MERRA’s grid-box-averaged values will reduce the correlation between the tower data and the coarse-resolution MERRA reanalyses (e.g., Rife et al. 2004). Each of these issues should be considered when evaluating the performance of this technique.
a. Performance of the new sampling technique
We first demonstrate the advantages of the Monte Carlo technique using a sample size of 365 case days, which provides a benchmark relative to the method currently used by the wind energy industry. Figure 1 shows the single “best” set of 365 case days chosen for downscaling from among 200 000 Monte Carlo sets of case days generated for the Cabauw, Netherlands, tower site, whose observational record spans 2001–09. Similar to the industry’s current approach, there is equal representation for all days of the calendar year, and nearly equal representation from among each of the 9 yr in the tower record. An interesting feature of the selected case days is their tendency to cluster. For example, there is a cluster of 8 days chosen in mid-November 2001, 4 of which are consecutive. We find that the clustering of days is a common feature for both the Monte Carlo and the industry standard methods, regardless of the number of case days chosen. The occurrence of groups of consecutive days in the sample might be considered an advantage, since it may help reduce the number of individual simulations that must be initialized.3
The Cabauw time series also demonstrates the importance of sampling across multiple years, to adequately represent the full range of downscaled hourly wind conditions at a given location. For example, the January–March 2002 period exhibits considerably stronger winds relative to most other years in the record, with average hourly wind speeds of 9.1 m s−1. By contrast, the same months in 2001, 2003, 2006, and 2009 exhibit average hourly wind speeds of only about 7.0–7.3 m s−1. Given that wind power scales in proportion to the cube of the wind speed (e.g., Morrissey et al. 2010; Barthelmie and Pryor 2003), this ~20% difference in the mean wind speed represents a nearly 50% change in the average energy production. A qualitative inspection of Fig. 1 reveals that the Monte Carlo–based technique adequately represents days exhibiting both high and low wind speeds from among the entire 2001–09 population, including the months of January–March.
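The cube-law arithmetic behind that estimate can be checked in a couple of lines:

```python
# Wind power scales with the cube of the wind speed, so comparing the
# strong JFM 2002 mean (9.1 m/s) against a typical calm-year mean
# (~7.15 m/s) implies roughly a factor-of-two difference in power:
strong, typical = 9.1, 7.15
power_ratio = (typical / strong) ** 3   # ~0.49: calm years deliver about
                                        # half the energy of JFM 2002
```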
Next, we objectively quantify the new Monte Carlo technique’s performance relative to the current industry standard for the 365-day case at the Cabauw site. Again, the industry-standard method selects a 365-day sample for downscaling, where a year is randomly chosen for each day in the calendar year, typically from among the most recent 10 yr. Figure 2 shows an example result for running both techniques at the Cabauw site, where each algorithm has been run a single time, as would be done in a practical setting. The industry-standard method represents the full distribution from MERRA’s entire 9-yr record of synoptic-scale daily mean wind speeds and wind directions at Cabauw reasonably well for this single run, with errors in frequencies of about 10%–20%. The new Monte Carlo method provides a much-improved representation of the full distribution’s frequencies, with most errors ranging between 1% and 2%, although three of the bins have errors ranging from 5% to 10%. Because each technique employs a random selection process, the results in any given run may depart significantly from the full distribution’s frequencies. We therefore show the outcome of running both algorithms a large number of times (“trials,” hereinafter) to examine their mean error characteristics and their sample-to-sample consistencies. Both techniques are run for 100 trials. Again, there is one set per trial for each technique, but the Monte Carlo technique provides the single “best” set of case days from among 200 000 potential sets. The confidence intervals constructed from all the trials are shown in Fig. 3, along with the sample data histograms that represent the outcome of the 100 trials for the industry-standard and Monte Carlo techniques.
Although it appears that both techniques yield a highly accurate representation of the full distribution of wind speeds and directions using only a 365-day set of case days, it is clear that the industry standard exhibits very low sample-to-sample consistency, as evidenced by the wide 95% confidence intervals (Figs. 3a,d), which bracket uncertainties ranging from about 50% to as high as 61% for both wind speed and direction. By comparison, the Monte Carlo technique yields far narrower confidence intervals, with uncertainties ranging from about 20% to nearly 35%. This equates to a roughly 50% reduction in uncertainty for wind speed relative to the industry standard (using the standard percent-of-improvement formula), and a 30%–55% reduction in uncertainty for wind direction (Figs. 3c,f). While this represents a substantial improvement over the current approach, this envelope of uncertainties may still lie outside the tolerances required by wind farm developers. We return to this point later in this section.
Figure 4 shows the performance of both techniques in generating downscaled hourly wind speeds and directions at the Cabauw site. As noted above, tower measurements are used as a proxy for RCM-downscaled winds at a grid point. Therefore, Fig. 4 illustrates whether the Monte Carlo technique yields improvement over the industry standard technique when put into practice; it quantifies whether defining the dominant variability in terms of the daily mean synoptic-scale winds on a given day can be used to adequately characterize the hourly winds realized at the tower scale on that same day. The outcomes from the industry technique have uncertainties ranging from about 20% to as high as 45% for both wind speed and direction, while those for the Monte Carlo technique range from about 16% to roughly 30%. This equates to a 10%–25% reduction in uncertainty, with a few bins exhibiting a more than 30% reduction (Figs. 4c,f). The reader may wonder why the uncertainties for the hourly winds are much lower relative to those for the daily wind values from MERRA. This simply reflects the much larger sample sizes for the hourly downscaled winds (24 times as many values as for daily means), since the widths of the confidence intervals shrink in inverse proportion to the square root of the sample size (e.g., Thompson 1992).
A summary of the Monte Carlo technique’s performance for all five tower sites used in this study is presented in Fig. 5, for both synoptic-scale and downscaled hourly winds. The average relative differences between the sample data histogram frequencies and the full distribution’s histogram frequencies for wind speed and direction are shown for each technique, and for all trials. The goodness-of-fit error (GFE) is defined as
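The displayed equation is again missing from this copy. Given that the GFE is described below as the chi-square statistic with an unsquared numerator, expressed as a mean percentage error per bin, a plausible reconstruction (the 100/nbins normalization is inferred from that description) is

```latex
\mathrm{GFE} = \frac{100}{n_{\mathrm{bins}}} \sum_{i=1}^{n_{\mathrm{bins}}} \frac{\lvert a_i - t_i \rvert}{t_i},
```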
where again ti denotes the relative frequencies of daily mean wind speed (wind direction) values occurring in the full distribution and ai denotes the corresponding relative frequencies of speed (direction) values within a given sample. Note that the GFE metric is identical to the chi-square test (a common measure of goodness of fit), except that the numerator is not squared. This allows us to express the results as the mean error in the frequency of each bin (in terms of percentage), which is more easily interpretable than the unitless squared value yielded by the chi-square test. It is clear that the Monte Carlo technique has a significantly lower magnitude and spread of the errors (i.e., it provides a more consistent result) for synoptic-scale winds for all stations when compared with the industry standard (Fig. 5a). For the downscaled winds (Fig. 5b), the reduction in error magnitude and spread is about 20%–35% for all stations except Goodnoe Hills, Washington, and Savannah River, South Carolina.
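A sketch of the GFE for one variable, mirroring the percentile binning used for the distance metric (reusing that binning here is our assumption):

```python
import numpy as np

def goodness_of_fit_error(sample, full, nbins=20):
    """Mean absolute relative-frequency error per bin, in percent.

    Identical in form to the chi-square statistic except that the
    numerator |a_i - t_i| is not squared, so the result reads directly
    as a mean percentage error in each bin's frequency.
    """
    edges = np.percentile(full, np.linspace(0.0, 100.0, nbins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # capture all values
    t = np.histogram(full, bins=edges)[0] / len(full)
    a = np.histogram(sample, bins=edges)[0] / len(sample)
    return 100.0 * np.mean(np.abs(a - t) / t)
```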
These reductions in goodness-of-fit error are likely to be of great practical significance to the wind industry, since even a small error in the wind resource map can translate into millions of dollars in the estimate of the wind energy resource potential for a location (Tindal 2011). For example, for a reduction in the goodness-of-fit error from 5% to 3% for a hypothetical station whose mean hourly wind speed is 10 m s−1, the uncertainty in wind power production falls from 15% to 10%, which equates to an improvement of 33% (recall that wind power is proportional to the cube of the wind speed). According to the nonparametric Wilcoxon–Mann–Whitney test, the goodness-of-fit errors yielded by the two techniques are distinct with a confidence greater than 99.8% for all stations except Goodnoe Hills, whose errors in the observation-based proxy downscaled winds are likely distinguishable with a confidence of about 99% for wind speed, and nearly 60% for wind direction. The reduction in confidence of distinguishing statistical differences between the two techniques for this latter station was expected given that the fairly complex terrain at that location is largely unresolved by MERRA, which in turn leads to a poor correlation between MERRA and the tower measurements.
To confirm the negative influence of complex terrain on the MERRA-based Monte Carlo technique, in Fig. 6 we examine the relationship between the subgrid-scale terrain complexity (“topographic heterogeneity”) at each station and the difference in the goodness-of-fit error between the two techniques for proxy downscaled hourly winds that is shown in Fig. 5b. The global 30-m digital elevation model (DEM) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER; EOSDIS 2011) is used to quantify the topographic heterogeneity, by calculating the standard deviation of the ASTER terrain elevation for all grid points within a 55-km square box surrounding each station. The 55-km box approximates the footprint of an individual NASA MERRA grid box, and contains about 4–5 million ASTER grid points depending on the station location. Larger topographic heterogeneity indicates more hilly/mountainous terrain within a given grid box. Inspection of Fig. 6 shows convincingly that the relative error for the local-scale (tower) wind speed and wind direction yielded by the Monte Carlo technique becomes more similar to that from the industry-standard technique as the topography becomes more complex. Because MERRA does not resolve the terrain complexity within the Goodnoe Hills region, it has difficulty correctly identifying the set of days representing the full range of wind variability at the tower scale. As reanalyses continue to improve their representation of terrain-induced flows via reduced grid spacing, the Monte Carlo technique is expected to yield better results for these regions as well. 
Figure 6 therefore provides a rough guide for would-be users of the Monte Carlo technique, suggesting that it may yield more representative samples of local-scale wind speed and wind direction distributions in low-to-moderate complexity terrain—those locations having a topographic heterogeneity of less than about 100 m as quantified by the 30-m ASTER DEM—but that it may not currently provide an advantage over the industry-standard technique in more complex terrain. Any potential user can quickly calculate the topographic heterogeneity within their region of interest by downloading the freely available ASTER DEM.
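The heterogeneity statistic itself is straightforward to compute once a DEM tile is in hand. The sketch below is illustrative code with hypothetical names (not part of the study’s software): it takes the standard deviation of all elevations inside a 55-km square centered on a site, the MERRA-footprint approximation used above.

```python
import numpy as np

def topographic_heterogeneity(dem, cell_size_m, box_km=55.0):
    """Standard deviation (m) of terrain elevation within a square box
    box_km wide, centered on the midpoint of `dem`.

    dem         : 2-D array of elevations in meters (e.g., an ASTER tile)
    cell_size_m : DEM grid spacing in meters (~30 m for ASTER)
    """
    half = int(box_km * 1000.0 / cell_size_m) // 2
    ci, cj = dem.shape[0] // 2, dem.shape[1] // 2
    i0, i1 = max(ci - half, 0), min(ci + half, dem.shape[0])
    j0, j1 = max(cj - half, 0), min(cj + half, dem.shape[1])
    return float(np.std(dem[i0:i1, j0:j1]))
```

Under the rough guide above, a site whose returned value is below about 100 m falls in the low-to-moderate complexity regime where the Monte Carlo technique is expected to help.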
b. Sensitivity to the number of case days
As we have shown, even the new Monte Carlo technique yields fairly high values of uncertainty in its estimate of the multiyear record of downscaled hourly winds: specifically, a 20%–33% uncertainty for a 365-day sample. Herein, we attempt to quantify the number of case days required to achieve a given level of accuracy and uncertainty. There are well-established methods to determine the sample size needed to approximate the frequencies of the full population within distance d at a confidence level α. Thompson (1987, 1992) provides an easy-to-use lookup table for estimating the required sample size for a single random sample. The strength of this approach is that it requires no prior knowledge of the underlying population, and it provides a conservative estimate of the sample size, in that the estimate may actually be somewhat larger than necessary to attain the desired precision. Suppose we wish to approximate the frequencies of wind speeds and wind directions to be within 5% of the frequencies of the full distribution with 95% confidence. According to Thompson’s method, a single random sample of 510 case days is required. Similarly, to approximate the full distribution’s frequencies to within 5% at a confidence of 99% requires a sample size of 788 case days. To confirm that this approach is applicable to downscaled winds, a modified version of the industry-standard technique is run using a sample size of 510 days, where the strata are defined in a fashion similar to that in the original algorithm, with each month having equal representation. The same is done for the Monte Carlo technique. The industry-standard technique yields frequencies of wind speed and direction that are within 3%–4% of the actual PDF for the downscaled wind speeds and directions, while the Monte Carlo method yields frequencies within 1.25%–2.5% of the actual PDF (not shown).
Thus, even at larger sample sizes the new technique provides a 25%–45% improvement to the industry-standard sampling method.
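Thompson’s lookup values are easily encoded. The sketch below is a hypothetical helper (our own illustration) using the two worst-case constants from Thompson’s (1987) table that correspond to the confidence levels quoted above; it reproduces the 510- and 788-day sample sizes.

```python
import math

# Worst-case d^2 * n constants tabulated by Thompson (1987) for
# simultaneous confidence intervals on multinomial proportions,
# keyed by alpha. Only the two levels used in the text are listed.
THOMPSON_D2N = {0.05: 1.27359, 0.01: 1.96986}

def thompson_sample_size(d, alpha):
    """Smallest sample size n such that every bin frequency lies within
    distance d of the population frequency with confidence 1 - alpha,
    requiring no prior knowledge of the underlying population."""
    return math.ceil(THOMPSON_D2N[alpha] / d ** 2)
```

For example, thompson_sample_size(0.05, 0.05) returns 510 and thompson_sample_size(0.05, 0.01) returns 788, matching the case-day counts above.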
Thus far we have demonstrated that the new Monte Carlo downscaling technique provides a significantly more reliable estimate of the long-term record of winds compared to the current industry approach for both a 365-case-day and a 510-case-day sample. A natural question that arises is whether the new method can be used to choose a smaller set of case days that, when downscaled, approximates the full distributions of wind speed and direction with uncertainties equivalent to those of the 365-day sample currently used by the industry. This could be of great value to the wind energy industry, since it would provide a more economical means of generating RCM-based wind resource maps that characterize the full range of wind conditions over a multidecadal period, especially for those lacking access to large supercomputing resources. To explore this issue, the Monte Carlo method is applied to progressively smaller sets of case days, ranging from 240 days (20 days per calendar month) down to 24 days (2 days per calendar month). The same is done for the modified version of the industry’s sampling technique. Again, both techniques define strata such that there is equal representation for all months. Figure 7 summarizes the bin-averaged relative differences between the sample data histogram frequencies and the true histogram frequencies for downscaled winds (i.e., from the tower observations) for the various numbers of case days. Visual inspection of Fig. 7 reveals that it is possible to match the accuracy and reliability of the current (365 day) industry approach using 240 case days (20 days per calendar month) chosen with the new Monte Carlo technique, and it is also possible to closely approximate (within 1.25%) the industry standard error using only 180 case days (15 days per calendar month). This means that the computational expense could be cut by one-half—a potentially substantial savings of time, money, and effort. 
An interesting outcome of this experiment is that regardless of the number of case days chosen for downscaling, the Monte Carlo technique always yields a result superior to the industry standard. The relative errors for the two methods at each case-day size are distinct with a confidence greater than 99.99%, according to the nonparametric Wilcoxon–Mann–Whitney test. Similar results are seen for the other sites with relatively high correlation between MERRA and the tower measurements (not shown).
c. Representation of the mean diurnal cycle
For many regional climate applications, it is important to accurately represent the diurnal cycle of winds, temperature, humidity, and rainfall within the PBL (e.g., Carbone and Tuttle 2008; Rife et al. 2010; Monaghan et al. 2010). The diurnal and vertical structure of the wind has a large impact on wind power production (e.g., Schreck et al. 2008; Marquis et al. 2011). Figure 8 quantifies how reliably the industry-standard and Monte Carlo techniques represent the mean diurnal cycle of wind speed at hub height for the 365-case-day experiments at the Cabauw site. The Monte Carlo sampling technique clearly provides a more reliable estimate of the hourly mean winds as evidenced by the reduced (up to 40%) width of the 95% confidence interval. An interesting aspect of Fig. 8 is the enhanced improvement realized by the Monte Carlo technique between 1000 and 1500 UTC (daytime at this location), when the observed variance in the hourly winds is greatest. This result appears to support our assumption that only the daily mean characteristics of the hub-height winds are needed to select the case days that, when downscaled, represent the dominant wind variability at the local scale, including the diurnal cycle of the wind.
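The diurnal-cycle comparison in Fig. 8 amounts to hour-of-day averaging with a confidence band. A minimal sketch follows, assuming a normal-approximation 95% interval (the authors’ exact interval construction may differ):

```python
import numpy as np

def mean_diurnal_cycle(hours, speeds):
    """Hour-of-day mean wind speed and the half-width of a
    normal-approximation 95% confidence interval for each hour.

    hours  : integer array of hour-of-day (0-23) for each observation
    speeds : matching array of wind speeds
    """
    means = np.empty(24)
    half_widths = np.empty(24)
    for h in range(24):
        v = speeds[hours == h]
        means[h] = v.mean()
        # 1.96 * standard error of the hourly mean
        half_widths[h] = 1.96 * v.std(ddof=1) / np.sqrt(v.size)
    return means, half_widths
```

A narrower half-width at a given hour, as the Monte Carlo samples exhibit between 1000 and 1500 UTC at Cabauw, indicates a more reliable estimate of the mean diurnal cycle at that hour.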
5. Summary and commentary
This paper has introduced a new computationally efficient and statistically robust sampling method for generating downscaled climatologies; the method is an easy-to-implement extension of the current technique used within the wind energy industry. We have shown that the new Monte Carlo technique provides a more accurate and statistically reliable estimate of the long-term record of winds compared to the current industry-standard approach, for locations having modest to low terrain complexity. At those locations, it reduces the uncertainties in observation-based proxy downscaled wind speeds and directions by 20%–35%, and reduces the goodness-of-fit errors in downscaled wind speeds by 1%–3%. These reductions translate into an improvement of up to 33% over the industry-standard estimates of mean wind speeds and directions. For areas with highly complex terrain, the new technique offers only modest improvements to the current industry standard. Finally, by using an improved method for choosing the case days for downscaling, our method closely approximates the accuracy and reliability of the current (365 day) industry approach using only a 180-day sample, which has the potential to substantially decrease the computational expense of climate downscaling, rendering RCM-based wind resource maps more tractable for those lacking access to large supercomputing resources.
The new approach requires about 40 min of computer time to choose the most representative set of case days from among 200 000 sets, using prototype code that has not been optimized. The current industry approach requires about 20 s to complete using similarly unoptimized code. Given that the computational expense of actually downscaling 365 case days using an RCM is at least two orders of magnitude higher than the expense of running the Monte Carlo technique, the improvements in accuracy and reliability realized by the Monte Carlo technique far outweigh the small additional computational cost.
The new Monte Carlo technique may have significant potential benefits to the early phases of the wind farm development process, since more accurate wind maps could enable developers to make more informed decisions about where they allocate funding and resources for on-site measurement campaigns. And, because we have shown that the Monte Carlo technique can significantly reduce the computational expense of creating wind maps (hence, less time, money, and effort), this savings would be directly passed on to developers who purchase and use such maps for site-feasibility studies. It may also benefit those seeking financing for a prospective wind farm, if coupled with dynamical downscaling, and then refined with microscale modeling and in situ tall tower measurements. Financiers examine key quantiles of the wind speed distribution at a prospective wind farm, to estimate the expected yearly wind energy yield, and the uncertainty in that estimate. The industry refers to quantiles as “P values,” short for “probability of exceedance.” Thus, P90 denotes the 10th percentile of the wind speed distribution, and marks the level of annual wind-driven electricity generation that is estimated to be exceeded 90% of the year. The key wind speed P values typically used by the industry are P50 and P90 (and in some markets P95 and P99), as well as the P90–P50 ratio, which gives a measure of the wind resource’s variability. Thus, the higher the P50 (or P90) value and the closer the P90–P50 ratio is to 1, the greater is the wind energy production potential, which generally leads to more attractive financing terms being offered to a wind farm developer (Tindal 2011). Table 1 demonstrates how reliably each technique approximates the P50 and P90 values for the five sites used for this study. Similar to previous results, the Monte Carlo technique yields a reduction in the uncertainties for the P90 and P50 values, ranging from about 20% at Risø to as high as 48% at Cabauw. 
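These P values are simply quantiles of the downscaled wind speed distribution. A minimal sketch (illustrative only, with hypothetical names) for extracting them:

```python
import numpy as np

def exceedance_levels(speeds):
    """Industry 'P values' for a wind speed record.

    P50 is the median; P90 is the level exceeded 90% of the time,
    i.e., the 10th percentile. The P90/P50 ratio gauges resource
    variability (closer to 1 indicates a steadier resource).
    """
    p50 = np.percentile(speeds, 50)
    p90 = np.percentile(speeds, 10)
    return p50, p90, p90 / p50
```

In practice these quantiles would be computed from the downscaled hourly speeds at a prospective site; narrower sampling uncertainty in P50 and P90, as Table 1 shows for the Monte Carlo technique, translates directly into a tighter energy-yield estimate.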
If these reductions in uncertainties are actually realized in a downscaled wind resource estimate, then they are likely to be considered highly significant by the wind industry, since the uncertainty in average wind speed is reduced by 30% at Risø, by 36% at Goodnoe Hills, and by 50% at Cabauw. Decreases of this magnitude in the uncertainty of the expected wind speed can lead to an increase of several million dollars in the loan amount offered to a wind farm developer (Tindal 2011).
Follow-on work will explore coupling the new sampling technique with actual RCM-downscaling methods for creating continuous time series of hub-height winds to be used in conjunction with on-site measurements for wind resource estimation. Another important aspect is to test how the quality of a reanalysis’ representation of the local terrain heterogeneity affects the results presented herein. Higher-resolution reanalyses such as the 32-km resolution North American Regional Reanalysis (Mesinger et al. 2006) and the ~38-km resolution Climate Forecast System reanalysis (Saha et al. 2010) may better represent the local terrain complexity and, therefore, yield better results for the Monte Carlo technique. Finally, we hope to explore how this Monte Carlo technique could be generalized for a variety of downscaling applications. For example, the technique could be especially useful for downscaling future climate projections from GCMs, such as those that support the Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment Report (AR5). The IPCC has recognized the pressing need for a more refined treatment of regional climate change to augment the global change projections. Using current RCM-downscaling approaches, where typically the entire period for a given 10- or 20-yr time slice is downscaled, it is generally not tractable to downscale the entire suite of emissions scenarios for even a single region and for a single GCM. The new Monte Carlo technique may help mitigate this limitation, by allowing for selection of a small number of representative case days for a greater number of emissions scenarios and/or GCMs for subsequent RCM downscaling, thereby facilitating exploration of the full range of possible future climate change outcomes on the regional to local scales. 
However, there are clearly challenges for adapting the Monte Carlo technique for such an application, most notably scaling up the methodology from the relatively one-dimensional problem of a small wind farm presented here, to the two-dimensional problem of a large region, where “representative” case days may not be consistent across the domain.
This research was funded by the NASA Research Opportunities in Space and Earth Sciences (ROSES) program (NASA Grant NNX10AB30G). The authors are indebted to Melissa Elkinton, Clint Johnson, and Craig Collier (GL Garrad Hassan) for their insightful comments and discussion throughout this study. We also thank Luca Delle Monache and Sue Ellen Haupt (both at NCAR) for reviewing an early version of the manuscript. Four anonymous reviewers provided valuable comments that improved the manuscript. The following individuals and agencies are gratefully acknowledged for providing the tall tower measurements: Andrea Hahmann (Risø Danish Technical University) provided the Risø measurements; Stel Walker (Oregon State University) provided the Goodnoe Hills, Washington, measurements; and Robert Kurzeja (Savannah River National Laboratory) provided the Savannah River, South Carolina, measurements. The Lamont, Oklahoma, tower measurements were obtained from the Atmospheric Radiation Measurement (ARM) Climate Research Facility online data archive (http://www.arm.gov), and the Cabauw, Netherlands, tower measurements were obtained from the Cabauw Experimental Site for Atmospheric Research Database (http://www.cesar-database.nl/). Daniel Steinhoff (NCAR) helped with processing the ASTER data. The MERRA data were obtained through the NASA Mirador Earth Science Data Search Tool (http://mirador.gsfc.nasa.gov/).
Current affiliation: GL Garrad Hassan, San Diego, California.
The National Center for Atmospheric Research is sponsored by the National Science Foundation.
Note that additional combinations of case days may be required to adequately represent the full range of conditions over a multidecadal record (e.g., 30 yr), including rare events.
All analyses of wind direction data are performed using circular statistics methods (e.g., Fisher 1995).
A 12–24-h initialization (or spinup) period is commonly used for individual climate downscaling simulations (e.g., Qian et al. 2003; Lo et al. 2008; Hahmann et al. 2010; Rife et al. 2010). Thus, in the limit where no groupings of consecutive days exist within a given sample, the total computational burden increases by a factor of 1.5–2.0, which accounts for the 12–24-h spinup period required for each of the 365 individual daily downscaled realizations.