1. Introduction
According to the Department of Energy (DOE), the wind energy industry in the United States added more than 13 gigawatts (GW) of generation capacity in 2021, bringing the national total to over 135 GW, equal to 9% of the nation’s total generation capacity. Kansas, Oklahoma, and North Dakota now meet 30% of their electricity needs with wind, while Iowa and South Dakota get more than 50% of their electricity from wind generation (DOE 2022). With the shift toward wind-dominant generation in parts of the United States, accurate forecasting of the wind resource has become especially critical, and increasingly fine forecast resolution is needed because wind can vary greatly in both space and time.
Forecasting of wind is highly dependent on the diurnal cycle. Hurdles to accurate forecasting include summertime convection, stratification of the planetary boundary layer (PBL), and the frequent occurrence of the nocturnal low-level jet (LLJ). The frequency, strength, and duration of LLJ events vary seasonally (Weaver et al. 2009; Liang et al. 2015). The height of the LLJ maximum wind speed is commonly below 500 m (e.g., Song et al. 2005; Shapiro et al. 2016; Smith et al. 2019) and in some cases will affect the rotor sweep area (Aird et al. 2021). In a recent study using large-eddy simulations of LLJs at different heights within the turbine rotor sweep area, the power generation of turbines beyond the first row was highly dependent on the relative height of the LLJ (Gadde and Stevens 2021). If the core of the LLJ is situated above the rotor sweep area, the high turbulence aids in wake recovery and thus increases power generation. Conversely, when the LLJ is positioned below the rotor area, the negative shear region and stability limit wake recovery.
Numerical weather prediction (NWP) of turbine-height wind speeds usually relies on PBL schemes to resolve the otherwise complex small-scale interactions and fluxes near the ground. When PBL schemes were developed, they were commonly evaluated using abundantly available 10-m (surface) wind data, whereas publicly available hub-height wind data remain inconsistent and sparse. Newman and Klein (2014) explored methods of extrapolating 80-m hub-height winds from the readily available 10-m data. They tested several different power-law relationships, as well as two similarity theory relationships, with varying success. Their results emphasized the importance of including an atmospheric stability parameterization in the extrapolation formulation. Moreover, biases arose when relating 80-m tower data to 10-m data that were not collocated with the tower (Newman and Klein 2014). This exacerbates the original issue: areas that lack 80-m observations will still suffer output biases despite improvements in extrapolation methods, even with dense 10-m observations.
In NWP, data at hub height can be interpolated using the model’s internal vertical grid if sufficient vertical resolution is used; however, NWP models continue to lack fine enough horizontal grid spacing to resolve turbine plant (TP)-scale processes, since the computational cost of such fine horizontal grid spacing would be prohibitive over typical domain sizes. In a study of the effects of a wind farm parameterization scheme in the Weather Research and Forecasting (WRF) Model on the PBL throughout the diurnal cycle, Fitch et al. (2013) found that the wake effects induced by the momentum sink in the model varied drastically with time of day. Daytime reductions of wind speed from a simulated 10-km² wind farm amounted to about 10%. Under stable conditions, such as those expected with a nocturnal PBL, which inhibit vertical mixing of momentum, those reductions can grow to upward of 30%.
As part of their research into wind forecasting at turbine hub height, Deppe et al. (2013) used bias corrections to increase the accuracy of hub-height wind speed forecasts. Their correction was computed from the mean wind speed bias as a function of the range of wind speeds over the hour. Using 80-m tower data from within a TP in northwest Iowa, they found that their bias correction of WRF forecasts using several different PBL schemes yielded improvements in mean absolute error (MAE) of 13%–20% (Deppe et al. 2013).
Machine learning is an increasingly common technique for forecasting in meteorology, with complexity ranging from linear regression, like the methods used by Deppe et al. (2013), to artificial neural networks (Chase et al. 2022). Machine learning frameworks have shown skill in short-term, point-specific forecasting (Hennon et al. 2022), as well as in bias correction for the WRF Model (Sayeed et al. 2023).
Severe weather is a constant threat to infrastructure, and the electrical grid and wind turbines are no exception. The impacts of severe weather on wind turbine function are well documented (e.g., Su et al. 2017; Chou et al. 2018; Katsaprakakis et al. 2021). Wind power is heavily dependent on local weather changes and fluctuations in available momentum. Beyond that, the turbines themselves have limitations related to minimum and maximum wind speeds and operating temperatures, as well as icing conditions.
In the event of a blackout due to severe weather, turbine plants lack the ability to restore power on the grid (FERC-NERC 2018; Garcia et al. 2019; O’Brien et al. 2022). With wind becoming a substantial source of energy for the power grid, it is especially necessary to understand the risks wind poses to blackouts and to black start operations, where a black start is the act of energizing the power grid following a blackout. As a hypothetical example, if a tornado were to disable the only gas-turbine power plant in a tricounty area, nearby unaffected TPs would not be able to black start the grid with current technology and operations. This hypothetical is dwarfed by the complexity of operations needed to black start the grid after a real event like the 10 August 2020 derecho, which left hundreds of thousands of people without power and produced a damage swath encompassing several Midwestern states.
The present study examines three different machine learning algorithms that were trained and tested using input from TP observations and WRF simulations in central and western Iowa. This research represents a portion of a collaborative effort to enable wind turbines to be incorporated into black start operations on the power grid (Villegas Pico and Gevorgian 2022; Dang and Villegas Pico 2023) and provide skillful wind resource forecasts.
This research is intended to (i) develop a method to improve WRF forecast accuracy to aid in black start operations, where the forecasts would be used to flag potential issues for wind generation in the day-2 period, such as icing conditions, times when wind speed may fall below the cut-in speed for turbine function, and times when wind may exceed the cut-out speed of available turbines (hereafter referred to as Task 1); (ii) create forecasts using recent (60-min) errors in the WRF forecast to fine-tune the restoration plan as needed (hereafter referred to as Task 2); and (iii) build a system to produce nowcasts to be used during active black start operations (hereafter referred to as Task 3).
2. Data and methods
a. WRF modeling
To create a relatively large dataset of model output representative of interseasonal variability, the WRFv4.2 Model (Skamarock et al. 2019) was run for 8 months over a 2-yr period. During 2018 and 2019, the months of January, April, July, and October were chosen for simulations, with runs initialized at 1800 UTC and integrated for 36 h to cover the local (CST) day-2 period (0600–0600 UTC). Initial and lateral boundary conditions (ICs, LBCs) for our modeling domain were taken from the North American Mesoscale (NAM) model grid 218 (Rogers et al. 2017). NAM data were acquired using the National Centers for Environmental Information’s Archive Information Request System. The modeling domain covered the western two-thirds of Iowa and small portions of surrounding states (Fig. 1). The 25 600-km² domain used a horizontal grid spacing of 3 km. A total of 63 vertical levels were used in the WRF setup to provide sufficient resolution in and adjacent to the PBL, with 27 levels below 1000 m AGL and typically 12 of those below 150 m AGL. WRF output at hub height (80 m AGL) was interpolated from the available vertical grid, and WRF outputs were extracted at the grid point closest to each of the TPs.
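The paper does not specify the extraction tooling; a minimal sketch of the two steps just described (vertical interpolation to 80 m AGL and nearest-grid-point extraction), assuming the wrf-python package and using hypothetical file and TP coordinates, might look like the following.

```python
# Sketch of hub-height extraction from WRF output. The use of wrf-python is
# an assumption; file name and TP coordinates are hypothetical placeholders.
from netCDF4 import Dataset
import wrf

nc = Dataset("wrfout_d01_2018-01-01_18:00:00")           # hypothetical file
ua = wrf.getvar(nc, "ua", timeidx=wrf.ALL_TIMES)         # earth-rotated u (m/s)
va = wrf.getvar(nc, "va", timeidx=wrf.ALL_TIMES)         # earth-rotated v (m/s)
z = wrf.getvar(nc, "height_agl", timeidx=wrf.ALL_TIMES)  # level heights (m AGL)

# Interpolate each wind component from the model's vertical grid to 80 m AGL
u80 = wrf.interplevel(ua, z, 80.0)
v80 = wrf.interplevel(va, z, 80.0)

# Extract the grid point closest to a turbine plant (TP) location
xy = wrf.ll_to_xy(nc, 42.0, -94.5)                       # hypothetical TP lat/lon
x, y = int(xy[0]), int(xy[1])
speed80 = (u80[..., y, x] ** 2 + v80[..., y, x] ** 2) ** 0.5
```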
Several preliminary tests were conducted using different WRF configurations to determine which configuration maximized the accuracy of wind speed forecasts at several key heights, while minimizing model run time to allow possible operational implementation. Data used for the verification of these tests were supplied by the Iowa Atmospheric Observatory tall towers. The setup found to run the fastest, with similar error compared to the other configurations tested, included the Thompson microphysics option (Thompson et al. 2008), Yonsei University PBL scheme (Hong et al. 2006; Hong 2010), unified Noah land surface model (Ek et al. 2003), Dudhia shortwave scheme (Dudhia 1989), and RRTM longwave scheme (Iacono et al. 2008).
b. Observational dataset
Observational data at hub height for individual turbines, used in the training of the machine learning models, were obtained from the MidAmerican Energy Company (MEC) for several TPs of varying sizes in the study domain. Observations were available at 1-min intervals for wind speed and temperature. The data were subject to a rigorous quality control process to ensure data integrity. Values determined to be erroneous in any way were removed, and those time steps were treated as missing data.
One step of the quality control process involved establishing a range of physically plausible temperatures for each season (Table 1); values outside these bounds were removed. The dataset also contained physically unrealistic segments that evidenced interpolation during periods when sensors may have lost power or data flow was interrupted. These periods of interpolated data affected both the temperature and wind speed time series. To remove these interpolated segments, trends were analyzed between each time step; if the trend did not fluctuate over a period of 1 h or more, the sequence was removed.
Table 1. Lower and upper bounds of temperatures regarded as reasonable at turbine hub height, depending on the season.
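As an illustration, one way to flag such linearly infilled spans follows; the 1-h window is taken from the text, but the tolerance and the exact screening logic are assumptions, not the authors' procedure.

```python
import pandas as pd

def flag_interpolated(series: pd.Series, window: int = 60,
                      tol: float = 1e-6) -> pd.Series:
    """Flag spans of a 1-min series whose step-to-step trend does not
    fluctuate for 1 h or more, the signature of linear infilling: a
    linearly interpolated stretch has a constant first difference, so the
    rolling standard deviation of that difference is ~0 there."""
    d1 = series.diff()
    return d1.rolling(window, center=True, min_periods=window).std() < tol

# Usage: apply seasonal bounds (Table 1) first, then drop flagged steps
# temps = temps.where(temps.between(lo, hi))      # lo/hi from Table 1
# temps = temps.mask(flag_interpolated(temps))    # treated as missing data
```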
Additional quality control was performed by examining the spread of values among the different turbines. Some turbines deviated consistently from the TP median temperature over the course of the dataset, pointing to possible calibration issues. The distribution of these deviations from the TP median showed that any persistent deviation above 1°C was atypical. Data from turbines exceeding this threshold were removed from the evaluation of mean TP temperature.
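A minimal sketch of this screening is given below; using the mean absolute deviation from the plant median as the measure of persistence is an assumption.

```python
import pandas as pd

def drop_miscalibrated(temps: pd.DataFrame, thresh_c: float = 1.0) -> pd.DataFrame:
    """Exclude turbines whose temperature persistently deviates from the
    TP median by more than the 1 degC threshold noted in the text.
    Rows are 1-min time steps; columns are individual turbines."""
    median = temps.median(axis=1)                  # TP median at each time step
    mad = temps.sub(median, axis=0).abs().mean()   # per-turbine mean |deviation|
    return temps.loc[:, mad <= thresh_c]
```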
c. Machine learning models
To best utilize the computing resources available for this research, artificial neural networks (McCulloch and Pitts 1943) were used for all machine learning purposes. We worked entirely within the TensorFlow Python framework while incorporating many parts of the Keras Python library (Abadi et al. 2015; Chollet et al. 2015). Data splitting for Task 1 used the first 70% of data from each month at each site for training, the next 10% for validation, and the final 20% for testing; this ordering was chosen to reduce the risk of autocorrelation between the training and evaluation data. For the models used in Task 2 and Task 3, random sampling was used to select 80% of the data for training, with the remaining 20% used for testing. Within the training module, for each tuning sequence, 20% of the training dataset was chosen randomly for validation. Several combinations of parameter sets were tested to minimize model loss; the two splitting strategies are sketched below.
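A minimal sketch of the two splitting strategies, assuming per-site, per-month feature/label arrays:

```python
import numpy as np

def chronological_split(X, y, train=0.70, val=0.10):
    """Task 1 split: first 70% train, next 10% validation, last 20% test,
    applied per month and per site to limit train/test autocorrelation."""
    n = len(X)
    i1, i2 = int(n * train), int(n * (train + val))
    return (X[:i1], y[:i1]), (X[i1:i2], y[i1:i2]), (X[i2:], y[i2:])

def random_split(X, y, test=0.20, seed=0):
    """Task 2/3 split: random 80/20 train/test; a further random 20% of
    the training set is held out for validation during each tuning pass."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * (1 - test))
    tr, te = idx[:cut], idx[cut:]
    return (X[tr], y[tr]), (X[te], y[te])
```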
To address the needs of Task 1, a multioutput artificial neural network was developed to ingest WRF fields and train against TP measurements of wind speed and temperature. The input features for the multioutput neural network are listed in Table 2, along with the time period from which each field was extracted. To accommodate the expected output, the neural network was built from six fully connected branches, each assigned to an individual label and each with its own output layer (hereafter referred to as Task 1-NN). A visualization of the Task 1-NN model structure is shown in Fig. S1 in the online supplemental material. The labeled outputs are the minimum, mean, and maximum of wind speed and temperature within the TPs, where the minimum and maximum values are the lowest or highest value among all the turbines in each TP at that time, and the mean series are derived from the mean over all turbines in each TP. The model outputs predictions at 1-min time steps.
Table 2. List of input features for the Task 1-NN and the originating time period for each feature.
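A minimal Keras sketch consistent with this structure and with the tuning reported in section 3a (256-neuron ReLU dense layers, six output heads); the depth of the shared trunk and of each branch is an assumption.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_task1_nn(n_features: int) -> keras.Model:
    """Multioutput dense network: a shared trunk feeding six fully
    connected branches, each ending in its own output layer."""
    inputs = keras.Input(shape=(n_features,))
    x = layers.Dense(256, activation="relu")(inputs)
    x = layers.Dense(256, activation="relu")(x)
    labels = ["ws_min", "ws_mean", "ws_max", "t_min", "t_mean", "t_max"]
    outputs = [
        layers.Dense(1, name=name)(layers.Dense(256, activation="relu")(x))
        for name in labels
    ]
    model = keras.Model(inputs=inputs, outputs=outputs)
    # With one MSE loss per head, the total loss reported is their sum
    model.compile(optimizer="adam", loss="mse")
    return model

# Training would use batch_size=128, per the tuning results in section 3a
```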
To utilize as much data in the training process as possible, we considered how this forecast system would be run operationally (workflow visualized in Fig. 2). First, the 1800 UTC (day 1) NAM inputs would be needed for WRF preprocessing, with the 36-h WRF prediction to follow. Those data must be extracted at key points (i.e., TP locations) before the machine learning methods can be applied. With the processing and runtimes factored together, the machine learning implementation for Task 1 would occur at approximately midnight local time (0600 UTC), the beginning of the desired day-2 forecast period. Thus, by the time the inputs for the machine learning prediction were prepared, observations could be gathered that overlap the first 12 h of the WRF forecast, since it was initialized at 1800 UTC. Those observations could be compared to WRF fields and incorporated into the training to act as a baseline error for the duration of the WRF simulation.
For Task 2, a long short-term memory recurrent neural network (LSTM; Hochreiter and Schmidhuber 1997) was employed to produce the short-range forecasts. This model is designed to predict short-term future errors in WRF simulations based on recent error to improve forecasts for imminent black start operations (hereafter referred to as Task 2-LSTM). The memory in the LSTM makes it an ideal model candidate for producing sequential time steps in the output. A 1-h time series for each input feature (Table 3) was fed into the LSTM and trained against the time series of incremental model error for the next hour.
Table 3. Input features for Task 2-LSTM.
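One way to construct these training pairs is sketched below; the array shapes and the handling of the Table 3 features are assumptions.

```python
import numpy as np

def make_error_sequences(features, wrf, obs, n_in=60, n_out=60):
    """Build (input, label) pairs for Task 2: a 1-h history of input
    features predicts the next hour of incremental WRF error, defined
    here as WRF minus observations (section 2c).
    Shapes: features (T, n_feats); wrf and obs (T,) at 1-min steps."""
    err = wrf - obs
    X, y = [], []
    for t in range(n_in, len(err) - n_out + 1):
        X.append(features[t - n_in:t])   # preceding hour of features
        y.append(err[t:t + n_out])       # following hour of WRF error
    return np.asarray(X), np.asarray(y)[..., np.newaxis]
```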
Task 3 also utilized an LSTM with past observations to make short-term “nowcasts” with the intention that they would support active wind-dominant black start operations (hereafter referred to as Task 3-LSTM). Task 3-LSTM uses 1-h time series to make a 10-min time series prediction of minimum, mean, and maximum of wind speeds and temperatures within the TP. Table 4 displays the features utilized in Task 3-LSTM.
The base structure of the LSTM model used in both Task 2 and Task 3 was the same. The three-dimensional input features (samples, time series, and input fields) needed to be trained against three-dimensional output labels, where the length of the time series and the number of features/labels could vary. The structure that met these requirements used an input LSTM layer, followed by a repeat vector layer, an LSTM layer, and a time-distributed layer with a nested dense layer for output. The repeat vector layer allowed the inputs to be repeated as many times as needed, which, in this work, was governed by the size of the output time series. The time-distributed dense layer applied the training process to each label. Figures S2 and S3 show visualizations of the model structures for Task 2- and Task 3-LSTM.
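A minimal Keras sketch of this shared encoder-decoder structure follows; the unit counts come from the tuning in section 3a, while the feature count in the usage line is a hypothetical placeholder (Table 4 is not reproduced here).

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_seq2seq_lstm(n_in_steps, n_in_feats, n_out_steps, n_labels, units):
    """Input LSTM -> RepeatVector -> LSTM -> TimeDistributed(Dense), as
    described in the text. The default tanh activation satisfies the
    cuDNN requirement noted in section 3a; the tuned Task 2 version also
    added extra LSTM layers around the repeat vector layer."""
    model = keras.Sequential([
        keras.Input(shape=(n_in_steps, n_in_feats)),
        layers.LSTM(units),                         # encoder: final state only
        layers.RepeatVector(n_out_steps),           # repeat per output step
        layers.LSTM(units, return_sequences=True),  # decoder
        layers.TimeDistributed(layers.Dense(n_labels)),  # one value per label
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Task 3: 60-min observed history -> 10-min nowcast of six TP labels
task3_lstm = build_seq2seq_lstm(60, 7, 10, 6, units=256)  # 7 features assumed
# Task 2 used 512-unit LSTM layers and a 60-min output series
```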
Figure 2 combines the workflow for all three phases of the project onto one timeline. Also included are single-word identifiers associated with the different tasks. As an example, “prepare” closely fits the goal of Task 1. When the output of the Task 1-NN and the recommendation of the attendant forecasters suggest that black start operations may be needed during the period, forecasts can be “refined” using the Task 2-LSTM. Last, if active black start operations are underway, the Task 3-LSTM would be used repeatedly to directly “support” that work.
For verification purposes, the predictions of Task 1-NN were compared to the original WRF forecasts. For the Task 2 dataset, the incremental errors used as model input are the differences between WRF and observations; therefore, comparing the output from Task 2 to WRF predictions, as was done for Task 1, would not be useful. Task 3 is meant to produce extremely short-range “nowcasts,” with the training set derived from observations alone, whereas the WRF setup was designed to provide a 36-h forecast over the 3-km grid and is not tuned to the fine time resolution needed for a nowcast. Both the Task 2 and Task 3 models are designed for accuracy and precision on short time scales that our WRF configuration was not designed to simulate. Because the outputs of the Task 2 and Task 3 LSTMs are not easily comparable to the WRF output, they are instead evaluated against a persistence forecast, in which the last observed wind speed or temperature value is used as the forecast for the remaining 60-min or 10-min time series.
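The persistence baseline is trivial to construct (a sketch):

```python
import numpy as np

def persistence_forecast(last_obs: float, n_steps: int) -> np.ndarray:
    """Hold the last observed wind speed or temperature constant over the
    next n_steps minutes (60 for Task 2, 10 for Task 3)."""
    return np.full(n_steps, last_obs)
```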
3. Results and discussion
a. Model training
For Task 1-NN, the best skill was obtained using 256 neurons in each dense layer with a batch size of 128. The rectified linear unit activation function produced the best performance and model stability. The loss values for Task 1-NN in Table 5 represent the sum of the losses of the six output layers, where loss is the mean-square error (MSE) between the (normalized) model output and the expected values provided as labels. The average loss per output layer is 0.389 for the test set, although the majority of the total loss was contributed by the wind speed output layers.
Table 5. Model loss of the tuned and trained multioutput neural network for Task 1 and the LSTMs for Task 2 and Task 3. Values showing the variation in training and validation loss are included as a measure of model stability from the repeated tuning tests. The loss values are normalized and thus unitless.
To train the LSTMs for Task 2 and Task 3 with GPU support on our computing system, the NVIDIA CUDA Deep Neural Network (cuDNN) implementation was used, which requires the hyperbolic tangent activation function. Tuning of Task 2-LSTM yielded the best loss statistics with LSTM layers of 512 neurons each and a batch size of 256 samples (Table 5). The model loss benefited from an additional LSTM layer before and after the repeat vector layer. Task 3-LSTM performed best with 256 neurons per LSTM layer and a batch size of 64. The loss values were quite consistent across the different datasets for Task 3-LSTM. This differs from Task 2-LSTM, where the validation loss was an order of magnitude worse than the training and test loss. This result may be due to the increased complexity of the relationship between the WRF Model and observations used as Task 2-LSTM input. This complexity likely explains why extra LSTM layers yielded better loss than the structure used in the final version of Task 3-LSTM, which was tested using the same neuron and batch size specifications as Task 2-LSTM.
b. Model output
Mean error for Task 1-NN wind speed forecasts decreased by roughly 4 m s−1 compared to the error of the WRF forecasts (Table 6). Mean error for WRF temperature forecasts was less than that of the Task 1-NN forecasts, but the MSE shows that the errors were smaller for Task 1-NN. Because the temperature forecasts matter most with regard to the minimum and maximum operating temperatures of the turbine hub, and to forecasting icing conditions when precipitation or fog is present, such minor errors in temperature forecasting should not be problematic.
Table 6. Comparison of errors for WRF and Task 1-NN (mean values) forecasts for both wind speed and temperature. Mean error and MAE share units: m s−1 for wind speed and °C for temperature. MSE has units of m² s−2 and °C², respectively. IOA is unitless.
If the predicted values from Task 1-NN are taken as replacements for the original WRF forecast values, the overall improvement from the use of Task 1-NN can be calculated. For the MSE of wind speed forecasts, Task 1-NN yields an error reduction of over 80%; for the MSE of temperature, errors were reduced by 28%. Because wind power is proportional to the cube of the wind speed, power errors would be reduced by over 94%.
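For context, the standard turbine power relation (with air density ρ, rotor swept area A, and power coefficient C_p) shows why wind speed errors are amplified in power estimates; to first order, a small fractional speed error is tripled in the power:

```latex
P = \tfrac{1}{2}\,\rho A C_p\, v^{3},
\qquad
\frac{\delta P}{P} \approx 3\,\frac{\delta v}{v} \quad (\delta v \ll v).
```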
When comparing the output of Task 1-NN forecasts to the expected output of the test set (not normalized), mean errors for the TP minimum, mean, and maximum are quite small (Table 7). Temperatures had a consistent high bias, and the wind speed maximum and temperature minimum had the worst MSE. The bias correction methods of Deppe et al. (2013) produced an MAE of 1.70 m s−1 for the wind speed forecasts of the 1800 UTC NAM-initialized WRF. In comparison, our Task 1-NN produced an MAE of 1.92 m s−1, a clear improvement over the uncorrected WRF (Table 6). Although the raw Task 1-NN MAE was worse than that of Deppe et al. (2013), the uncorrected WRF MAE in the present study was more than twice as high as in Deppe et al. (2013), so the Task 1-NN achieved a greater overall magnitude of improvement.
Table 7. As in Table 6, but excluding the WRF output and focusing on the minimum and maximum values of the Task 1-NN forecasts.
As another point of comparison, we examined the IOA calculated from the mean wind speed predictions of Task 1-NN (Table 6). IOA for Task 1-NN improved by 23% over the uncorrected WRF. In a similar study, Sayeed et al. (2023) created a convolutional neural network to correct WRFv3.8 10-m wind speeds. Their domain covered all of South Korea with 27-km grid spacing, their WRF configuration included a 6-hourly data assimilation step, and their network was trained using 4 years of input and many more observation sites than used in this study. Their model produced an average increase in IOA of 27% over WRF. The improvement of the two models is very close despite the drastic differences in WRF configuration and machine learning model. The convolutional neural network’s strength in spatial pattern recognition may have helped improve its IOA relative to our deep neural network, which lacked any sense of the spatial distribution of the TPs.
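The IOA here is presumably the Willmott et al. (1985) index of agreement; in its standard form, for predictions P_i, observations O_i, and observed mean Ō,

```latex
\mathrm{IOA} = 1 -
\frac{\sum_{i=1}^{n} \left( P_i - O_i \right)^{2}}
     {\sum_{i=1}^{n} \left( |P_i - \bar{O}| + |O_i - \bar{O}| \right)^{2}},
```

which ranges from 0 (no agreement) to 1 (perfect agreement).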
Figure 3 shows the average MSE for the WRF forecasts over the 60-min forecast period for Task 2-LSTM compared to that of a persistence forecast. Task 2-LSTM vastly outperformed persistence for both wind speed (Fig. 3a) and temperature (Fig. 3b). Persistence provided a better forecast only for the first time step for wind speed and the first two time steps for temperature. As one might expect, the MSE of the persistence forecast worsened over the course of the period; although the MSE of Task 2-LSTM also tended to worsen, its overall magnitude was much lower.
Figure 4 shows the average MSE of the Task 3-LSTM and persistence forecasts over the 10-min prediction period. Despite the error magnitudes being higher than those for the mean, the forecasts of the minimum (Fig. 4a) and maximum (Fig. 4c) wind speed within the TP had much lower error than the persistence forecasts. For the TP mean wind speed (Fig. 4b), where the time series included all available turbines, persistence provided a nearly equal forecast until the sixth minute, after which the LSTM errors relative to observations became lower.
Task 3-LSTM was highly skillful in forecasting temperature, with an average MSE around or below 0.1°C². For the TP mean temperature, the persistence forecast also performed well (Fig. 4e). The errors in the LSTM forecast of TP mean temperature could be due to noise in the training or validation datasets, which is at least partially a function of the quality control methods used to remove erroneous data points.
4. Summary and conclusions
This work investigated the use of machine learning methods, applied both alone and as adjustments to NWP forecasts, to aid in the forecasting and execution of black start operations on wind-dominant power systems. NWP simulations were carried out using the WRF Model initialized with 1800 UTC NAM output for 8 monthlong periods chosen over 2 years. TP observations were provided by MEC at 1-min resolution to match the WRF output. Rigorous quality control was applied to the observed datasets in preparation for their use as inputs to several machine learning models.
The training of the machine learning models for all three major research tasks produced good loss scores after model tuning was conducted. Model predictions were evaluated after denormalization. Overall findings are as follows:
- The multioutput neural network created for Task 1 produced corrected WRF forecasts with vastly improved skill over the uncorrected WRF.
- The LSTM trained to predict WRF incremental errors given past error for Task 2 produced much more skillful forecasts than would otherwise be provided through a persistence forecast.
- The LSTM designed to ingest past observations to predict future temperature and wind speed over a short-term nowcast period to fulfill Task 3 provided forecasts with extremely low error.
- Over the short forecast period used in Task 3, persistence was a viable forecast method, although the prediction of extreme values within the TPs (minimum/maximum temperature and wind speed) was better when the LSTM was used.
Task 1-NN demonstrated the continued improvement and applicability of deep learning techniques as a corrective measure for NWP models like the WRF. Moreover, Task 2-LSTM and its ability to predict future WRF errors is further evidence of the usefulness of coupling machine learning to NWP model forecasts.
This work could be expanded in several ways. First, a new set of MEC observations could be acquired to test the progression and improvement as newer observations are added to the dataflow, comparing Task 2 forecasts with Task 1 and Task 3 with Task 2. A new MEC dataset would also allow the system to be tested by running all the forecast methods as they would be run operationally to assess their efficiency. This work could also be repeated in other regions, especially those with more complex terrain. Finally, it may be worth investigating a convolutional neural network, given its better ability to incorporate spatially distributed (gridded) data, similar to the one used by Sayeed et al. (2023), for the corrective efforts needed in Task 1.
Acknowledgments.
The funding for this project has been provided under DOE Grant DE-SC0021410. A special thanks goes to our collaborators at the MidAmerican Energy Company for supplying site-specific meteorological observations at turbine hub height. The Iowa Atmospheric Observatory towers and associated instrumentation were funded by an NSF EPSCoR grant to the state of Iowa (1101284) and follow-on NSF AGS Grant 1701278. We also acknowledge the hard work of the interdisciplinary team of researchers at Iowa State University not directly involved in authoring this manuscript: Dr. Julio C. Lopez, Hoang Dang, and Zhenghan Zhang.
Data availability statement.
Extracted WRF data are stored on a local computer system and can be made available upon request. The MEC observations are subject to a nondisclosure agreement and are not available at this time.
REFERENCES
Abadi, M., and Coauthors, 2015: TensorFlow: Large-scale machine learning on heterogeneous systems. TensorFlow, accessed 16 August 2021, https://tensorflow.org/.
Aird, J. A., R. J. Barthelmie, T. J. Shephard, and S. C. Pryor, 2021: WRF-simulated low-level jets over Iowa: Characterization and sensitivity studies. Wind Energy Sci., 6, 1015–1030, https://doi.org/10.5194/wes-6-1015-2021.
Chase, R. J., D. R. Harrison, A. Burke, G. M. Lackmann, and A. McGovern, 2022: A machine learning tutorial for operational meteorology. Part I: Traditional machine learning. Wea. Forecasting, 37, 1509–1529, https://doi.org/10.1175/WAF-D-22-0070.1.
Chollet, F., and Coauthors, 2015: Keras. GitHub, accessed 14 April 2022, https://github.com/fchollet/keras.
Chou, J.-S., Y.-C. Ou, K.-Y. Lin, and Z.-J. Wang, 2018: Structural failure simulation of onshore wind turbines impacted by strong winds. Eng. Struct., 162, 257–269, https://doi.org/10.1016/j.engstruct.2018.02.006.
Dang, H. P., and H. N. Villegas Pico, 2023: Blackstart and fault ride-through capability of DFIG-based wind turbines. IEEE Trans. Smart Grid, 14, 2060–2074, https://doi.org/10.1109/TSG.2022.3214384.
Deppe, A. J., W. A. Gallus, and E. S. Takle, 2013: A WRF ensemble for improved wind speed forecasts at turbine height. Wea. Forecasting, 28, 212–228, https://doi.org/10.1175/WAF-D-11-00112.1.
DOE, 2022: Wind market reports: 2022 edition. DOE, accessed 14 September 2022, https://www.energy.gov/eere/wind/wind-market-reports-2022-edition.
Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107, https://doi.org/10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.
Ek, M. B., K. E. Mitchell, Y. Lin, E. Rogers, P. Grunmann, V. Koren, G. Gayno, and J. D. Tarpley, 2003: Implementation of Noah land surface model advances in the National Centers for Environmental Prediction operational mesoscale Eta model. J. Geophys. Res., 108, 8851, https://doi.org/10.1029/2002JD003296.
FERC-NERC, 2018: FERC-NERC-regional entity joint review of restoration and recovery plans: Blackstart resources availability (BRAv). Tech. Rep., 62 pp., https://www.ferc.gov/sites/default/files/2020-05/bsr-report.pdf.
Fitch, A. C., J. K. Lundquist, and J. B. Olson, 2013: Mesoscale influences of wind farms throughout a diurnal cycle. Mon. Wea. Rev., 141, 2173–2198, https://doi.org/10.1175/MWR-D-12-00185.1.
Gadde, S. N., and R. J. A. M. Stevens, 2021: Effect of low-level jet height on wind farm performance. J. Renewable Sustainable Energy, 13, 013305, https://doi.org/10.1063/5.0026232.
Garcia, J. R., P. W. Connor, L. C. Markel, R. Shan, D. T. Rizy, and A. Tarditi, 2019: Hydropower plants as black start resources. DOE HydroWIRES Tech. Rep. ORNL/SPR-2018/1077, 75 pp., https://www.energy.gov/sites/prod/files/2019/05/f62/Hydro-Black-Start_May2019.pdf.
Hennon, C. C., A. Coleman, and A. Hill, 2022: Short-term weather forecast skill of artificial neural networks. Wea. Forecasting, 37, 1941–1951, https://doi.org/10.1175/WAF-D-22-0009.1.
Hochreiter, S., and J. Schmidhuber, 1997: Long short-term memory. Neural Comput., 9, 1735–1780, https://doi.org/10.1162/neco.1997.9.8.1735.
Hong, S.-Y., 2010: A new stable boundary-layer mixing scheme and its impact on the simulated East Asian summer monsoon. Quart. J. Roy. Meteor. Soc., 136, 1481–1496, https://doi.org/10.1002/qj.665.
Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, https://doi.org/10.1175/MWR3199.1.
Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.
Katsaprakakis, D. A., N. Papadakis, and I. Ntintakis, 2021: A comprehensive analysis of wind turbine blade damage. Energies, 14, 5974, https://doi.org/10.3390/en14185974.
Liang, Y.-C., J.-Y. Yu, M.-H. Lo, and C. Wang, 2015: The changing influence of El Niño on the Great Plains low-level jet. Atmos. Sci. Lett., 16, 512–517, https://doi.org/10.1002/asl.590.
McCulloch, W. S., and W. Pitts, 1943: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5, 115–133, https://doi.org/10.1007/BF02478259.
Newman, J. F., and P. M. Klein, 2014: The impacts of atmospheric stability on the accuracy of wind speed extrapolation methods. Resources, 3, 81–105, https://doi.org/10.3390/resources3010081.
O’Brien, J. G., and Coauthors, 2022: Electric grid blackstart: Trends, challenges, and opportunities. PNNL Tech. Rep. PNNL-32773, 57 pp., https://www.pnnl.gov/main/publications/external/technical_reports/PNNL-32773.pdf.
Rogers, E., and Coauthors, 2017: Upgrades to the NCEP North American Mesoscale (NAM) system. Bluebook Rep., 2 pp., https://wmc.meteoinfo.ru/bluebook/uploads/2017/docs/05_Rogers_Eric_mesoscale_modeling.pdf.
Sayeed, A., Y. Choi, J. Jung, Y. Lops, E. Eslami, and A. K. Salman, 2023: A deep convolutional neural network model for improving WRF simulations. IEEE Trans. Neural Netw. Learn. Syst., 34, 750–760, https://doi.org/10.1109/TNNLS.2021.3100902.
Shapiro, A., E. Fedorovich, and S. Rahimi, 2016: A unified theory for the Great Plains nocturnal low-level jet. J. Atmos. Sci., 73, 3037–3057, https://doi.org/10.1175/JAS-D-15-0307.1.
Skamarock, W. C., and Coauthors, 2019: A description of the Advanced Research WRF Model version 4. NCAR Tech. Note NCAR/TN-556+STR, 145 pp., https://doi.org/10.5065/1dfh-6p97.
Smith, E. N., J. G. Gebauer, P. M. Klein, E. Fedorovich, and J. A. Gibbs, 2019: The Great Plains low-level jet during PECAN: Observed and simulated characteristics. Mon. Wea. Rev., 147, 1845–1869, https://doi.org/10.1175/MWR-D-18-0293.1.
Song, J., K. Liao, R. L. Coulter, and B. M. Lesht, 2005: Climatology of the low-level jet at the Southern Great Plains atmospheric boundary layer experiments site. J. Appl. Meteor., 44, 1593–1606, https://doi.org/10.1175/JAM2294.1.
Su, C., Y. Yang, X. Wang, and Z. Hu, 2017: Failures analysis of wind turbines: Case study of a Chinese wind farm. 2016 Prognostics and System Health Management Conf. (PHM-Chengdu), Chengdu, China, Institute of Electrical and Electronics Engineers, 1–6, https://doi.org/10.1109/PHM.2016.7819826.
Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, https://doi.org/10.1175/2008MWR2387.1.
Villegas Pico, H. N., and V. Gevorgian, 2022: Blackstart capability and survivability of wind turbines with fully rated converters. IEEE Trans. Energy Convers., 37, 2482–2497, https://doi.org/10.1109/TEC.2022.3173903.
Weaver, S. J., S. Schubert, and H. Wang, 2009: Warm season variations in the low-level circulation and precipitation over the central United States in observations, AMIP simulations, and idealized SST experiments. J. Climate, 22, 5401–5420, https://doi.org/10.1175/2009JCLI2984.1.
Wilks, D., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. International Geophysics Series, Vol. 59, Elsevier, 467 pp.
Willmott, C. J., S. G. Ackleson, R. E. Davis, J. J. Feddema, K. M. Klink, D. R. Legates, J. O’Donnell, and C. M. Rowe, 1985: Statistics for the evaluation and comparison of models. J. Geophys. Res., 90, 8995–9005, https://doi.org/10.1029/JC090iC05p08995.