1. Introduction
Atmospheric optical turbulence is highly relevant for ground-based optical astronomy and future free-space optical communication (FSOC). Both applications suffer from light being distorted as it propagates through the turbulent atmosphere. In astronomy, turbulent fluctuations of the atmospheric refractive index, known as optical turbulence (OT), cause blurry images and limit the detection of small objects (Hardy 1998). FSOC links, which use optical beams to transmit data instead of traditional radio waves, experience reduced data rates or even link interruptions due to OT (Kaushal and Kaddoum 2017; Jahid et al. 2022). Therefore, the optical turbulence strength, quantified by the refractive index structure parameter C_n^2, is a key quantity for both applications.
Obtaining
However, conducting long-term site surveys at various locations of interest is time consuming and resource intensive. While
This study assesses OTCliM’s performance using an extensive
2. Relevant regression studies
In recent years, different ML techniques have been utilized to derive
The ML-based
3. Methodology
Our proposed OTCliM approach aims to extrapolate 1 year of observed
Proposed OTCliM approach to extrapolate a measured 1-yr time series of OT strength (golden yellow) to multiple years (orange) based on ERA5 reference data (blue). Robust yearly
Citation: Artificial Intelligence for the Earth Systems 4, 2; 10.1175/AIES-D-24-0076.1
We utilize GBMs for the regression step (cf. section 3a) because they are known for their high performance in nonlinear tabular regression tasks. For comparison, we also train GBM models that use in situ observations instead of ERA5 as input data, which serve as performance baselines for OTCliM. Similarly, the traditional W71 model is evaluated with ERA5 inputs to provide a second baseline (cf. section 3b). To ensure our GBM models capture physically reasonable dependencies, we quantify the feature importance assigned to each ERA5 input variable (cf. section 3c). Finally, the performance of the trained models is assessed in terms of their temporal extrapolation and geographical generalization capabilities (cf. section 3d).
a. Gradient boosting regression
Following the MCP framework presented in Fig. 1, we aim to train one ML model for one site using training data from 1 year. The corresponding training data are the input data
The OT strength
The GBM regression models
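As a minimal, self-contained sketch of the regression step, the following trains one GBM per site-year on synthetic stand-ins for ERA5 features and scaled log10(C_n^2) targets. It uses scikit-learn's GradientBoostingRegressor rather than the specific GBM library tuned in the paper, and all variable names, shapes, and the synthetic relation are illustrative only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins for one year of hourly ERA5-derived features X
# (e.g., friction velocity, heat flux, cloud cover) and scaled log10(Cn2)
# targets y at a single site; names and shapes are illustrative only.
n = 2000
X = rng.normal(size=(n, 5))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=n)

# One GBM per site and training year, mirroring the MCP setup.
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1)
model.fit(X, y)

# Temporal extrapolation: apply the fitted model to ERA5 features from
# years outside the training year.
X_new = rng.normal(size=(24, 5))
y_hat = model.predict(X_new)
```

In practice, one such model would be fitted per station and per training year, yielding the 85 models evaluated later in the paper.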
b. Baseline models
We employ two baseline models to put the performance of OTCliM into perspective. The first model is the traditional empirical W71 model as given by Eqs. (1) and (3). Since the similarity function g(ζ) does not contain site-specific model coefficients, the accuracy of the W71
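Since Eqs. (1)-(3) are not reproduced in this excerpt, the following sketch of the W71 baseline uses the commonly quoted similarity constants from Wyngaard et al. (1971) and the Gladstone relation to convert C_T^2 to C_n^2; the paper's exact coefficients and stability formulation may differ:

```python
import numpy as np

def w71_cn2(T_star, z, zeta, P_hpa, T_k):
    """Hedged sketch of the W71 parameterization: CT2 from surface-layer
    similarity scaling, converted to Cn2 via the Gladstone relation.
    Constants follow commonly quoted W71 values and may differ from the
    paper's Eqs. (1)-(3)."""
    # Similarity function g(zeta) for unstable (zeta < 0) and stable cases
    g = np.where(zeta < 0,
                 4.9 * (1.0 - 7.0 * zeta) ** (-2.0 / 3.0),
                 4.9 * (1.0 + 2.75 * zeta))
    ct2 = T_star ** 2 * z ** (-2.0 / 3.0) * g        # temperature structure param.
    return (79e-6 * P_hpa / T_k ** 2) ** 2 * ct2     # Gladstone conversion

cn2 = w71_cn2(T_star=0.1, z=3.0, zeta=-0.5, P_hpa=1000.0, T_k=288.0)
```

Because g(ζ) carries no site-specific coefficients, the same code applies unchanged at every station, which is exactly why W71 cannot adapt to local topography or climatology.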
c. Physical consistency checking using feature importance
Note that SHAP values explain a feature’s contribution to a model’s prediction
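The paper computes SHAP values (Lundberg and Lee 2017) for the trained GBMs, typically via a tree explainer. As a dependency-light sketch of the same idea, global feature importance for a fitted GBM, permutation importance can serve as a proxy; this is an illustrative substitute, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
# Feature 0 is strongly informative, feature 1 weakly, feature 2 irrelevant.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.normal(size=1000)

model = GradientBoostingRegressor(n_estimators=100).fit(X, y)

# Permutation importance as a simple global proxy for mean |SHAP| values:
# shuffling an informative feature degrades the score; shuffling an
# irrelevant one barely changes it.
res = permutation_importance(model, X, y, n_repeats=5, random_state=0)
importance = res.importances_mean
```

A physically consistent model should assign high importance to shear- and buoyancy-related inputs and near-zero importance to irrelevant ones, which is the consistency check applied to the OTCliM models.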
d. Performance evaluation
In Eq. (6), the overbar denotes the mean, and σ denotes the standard deviation of the true (y) and estimated (ŷ) series, respectively.
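As a generic illustration of such metrics (without reproducing Eq. (6) exactly), the Pearson correlation r and an RMSE-type error can be computed as follows; the paper's exact error definition may be normalized differently:

```python
import numpy as np

def scores(y, y_hat):
    """Pearson correlation r and RMSE between true and estimated series.
    Standard forms consistent with the means and standard deviations
    referenced in the text; the paper's Eq. (6) may differ in detail."""
    r = np.corrcoef(y, y_hat)[0, 1]
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return r, rmse

y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_est = np.array([0.1, 0.9, 2.2, 2.9])
r, rmse = scores(y_true, y_est)
```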
1) Temporal extrapolation
For the MCP application, i.e., temporal extrapolation, the model
2) Cross-site evaluation
The second evaluation strategy probes the geographical generalization capability of a model trained on station s when applied to another station
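The cross-site matrix can be sketched as follows, with a simple linear model standing in for the GBMs and entirely synthetic per-site data; rows correspond to training sites and columns to evaluation sites:

```python
import numpy as np
from sklearn.linear_model import LinearRegression  # stand-in for the paper's GBMs

rng = np.random.default_rng(2)
sites = ["A", "B", "C"]                  # hypothetical station set

# Synthetic per-site data with deliberately different input-output relations,
# mimicking stations whose turbulence depends on different processes.
data = {}
for i, s in enumerate(sites):
    X = rng.normal(size=(500, 2))
    coef = np.array([1.0 + i, -0.5 * i])
    data[s] = (X, X @ coef + 0.1 * rng.normal(size=500))

# Cross-site matrix: train on site s (rows), evaluate on site s' (columns).
r2 = np.zeros((len(sites), len(sites)))
for i, s in enumerate(sites):
    model = LinearRegression().fit(*data[s])
    for j, sp in enumerate(sites):
        Xp, yp = data[sp]
        r2[i, j] = model.score(Xp, yp)
```

The diagonal (train and evaluate on the same site) sets the reference performance; the degree to which off-diagonal entries fall below it quantifies the generalization gap, analogous to the c/s matrices in Fig. 5.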
4. Dataset
To evaluate OTCliM’s performance, we train GBM models for 17 diverse locations across New York State (cf. section 4a). Five years of
a. NYSM
The New York State Mesonet (NYSM) comprises 127 standard weather stations (as of 2024) spread across New York State, United States, and has been fully operational since 2018 (Brotzge et al. 2020). These weather stations measure routine meteorological parameters such as 2-m temperature, 10-m wind speed and direction, surface pressure, and several other variables. The sampling rate of these measurements is on the order of seconds, and final values are reported as 10-min aggregates (mean and variance). However, measuring
We utilize the data from these NYSM flux stations to obtain training targets for the OTCliM models. The flux stations are placed in diverse topographical and climatological environments as listed in Table 1. The stations Brooklyn (BKLN), Queens (QUEE), and Staten Island (STAT) are located on rooftops in urban environments where measurements are strongly influenced by their immediate surroundings. Neighboring buildings can, for example, cast shadows or cause wakes, which influence radiation, wind, and, therefore, also the local turbulence (WMO 2023). Since we expect these urban stations to behave differently than the rural stations, they are marked with (*) in the table. For each of the 17 stations, 5 years of measurements are available. Following the notation introduced previously, the set of flux stations is denoted as
Flux stations of the New York State Mesonet used to benchmark the OTCliM approach. The three urban stations are marked with an asterisk (*).
1) Structure function approach
The structure function is computed from 5-min nonoverlapping windows of the sonic temperature signal and
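Under Taylor's frozen-turbulence hypothesis, the temporal structure function of sonic temperature maps to a spatial one, D_T(r) = C_T^2 r^(2/3) in the inertial range, with r = U·τ. A minimal sketch of this estimate follows; the paper's window handling, lag selection, and subsequent conversion to C_n^2 may differ:

```python
import numpy as np

def ct2_from_sonic(T, U, fs, lag_s=0.1):
    """Estimate the temperature structure parameter CT2 (K^2 m^-2/3) from
    a sonic temperature series T (K) sampled at fs (Hz), using the
    structure function D_T(r) = CT2 * r^(2/3) with Taylor's hypothesis
    r = U * tau (U = mean wind speed in m/s)."""
    lag = int(round(lag_s * fs))            # temporal lag in samples
    dT = T[lag:] - T[:-lag]
    D_T = np.mean(dT ** 2)                  # structure function at lag tau
    r = U * lag_s                           # separation distance (m)
    return D_T / r ** (2.0 / 3.0)

# Synthetic 5-min window at 10 Hz; values are illustrative only.
rng = np.random.default_rng(3)
T = 300.0 + 0.1 * rng.normal(size=3000)
ct2 = ct2_from_sonic(T, U=3.0, fs=10.0)
```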
2) QA/QC
Since our GBM models are purely data-driven, the quality of the fitted model strongly depends on the quality of the training data, so we apply several QA/QC steps to obtain a clean dataset. First,
Being an observed dataset, the sonic temperature signal contains gaps due to, e.g., power or communication outages or instrument malfunctioning (NYS Mesonet 2023), leading to ∼16% of missing values on average. That leaves ∼84% of the 5-min nonoverlapping windows gap-free and suitable for the computation of
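The concrete QA/QC steps are only partly legible in this excerpt. As a generic, hypothetical illustration of one common screening step (not necessarily the one applied in the paper), a median-absolute-deviation despiking filter for the sonic temperature signal could look like:

```python
import numpy as np

def qa_despike(x, n_mad=6.0):
    """Generic despiking sketch: flag samples deviating more than n_mad
    scaled median absolute deviations from the window median. Illustrative
    only; the paper's actual QA/QC procedure may differ."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    keep = np.abs(x - med) <= n_mad * 1.4826 * mad  # 1.4826 ~ Gaussian factor
    return x[keep], keep

x = np.array([300.0, 300.1, 299.9, 300.05, 350.0])  # last value is a spike
clean, keep = qa_despike(x)
```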
3) Distribution scaling
Here, the subscripts p25 and p75 indicate the 25th and 75th percentiles of the original distribution used for scaling. More details are given in appendix B.
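A sketch of percentile-based robust scaling follows, under the assumption that a (value minus median) over interquartile-range form is used; the exact expression is given in appendix B and may differ:

```python
import numpy as np

def robust_scale(y):
    """Scale log10(Cn2) data using the 25th/50th/75th percentiles of the
    original distribution. The paper's appendix B gives the exact formula;
    this sketch uses the common (y - median) / IQR variant."""
    p25, p50, p75 = np.percentile(y, [25, 50, 75])
    return (y - p50) / (p75 - p25)

y = np.log10(np.array([1e-16, 1e-15, 1e-14, 1e-13]))
y_scaled = robust_scale(y)
```

Percentile-based scaling is robust to the heavy tails and occasional outliers typical of C_n^2 distributions, unlike mean/standard-deviation standardization.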
b. ERA5 reanalysis data
The ERA5 reanalysis (Hersbach et al. 2020) serves as the input dataset for OTCliM from which
ERA5 representation of NYSM domain with true locations of NYSM flux stations (gray) and corresponding ERA5 grid box (orange) containing the stations. Urban sites are marked with (*).
We aim to incorporate all variables linked to the two processes driving atmospheric turbulence: wind shear and buoyancy. Commonly used variables include sensible heat flux and friction velocity, as utilized in W71. However, an advantage of using ML over deriving analytical equations from theory is the ability to include variables that may indirectly influence turbulence. For example, the ERA5 gravity wave dissipation (GWD) rate could be significant in complex mountainous regions where orographic gravity wave drag is known to modulate momentum fluxes (Lilly 1972; Palmer et al. 1986). If there is a relationship between GWD and
A complete list of the ERA5 variables selected as features is presented in Table 2. Many listed features are (partially) redundant and/or (partially) correlated. Since it is initially unknown which features are most suitable for estimating near-surface
ERA5 variables, and variables derived from them, serve as features for the OTCliM approach. Derived features do not have ERA5 variable names and are marked with (—).
The features in Table 2 without ERA5 variable names are so-called engineered features, meaning that they are derived from one or more ERA5 variables. In the following, we detail the variable selection and the feature engineering.
1) Shear-related features
The wind direction also serves as a proxy for upstream effects, or fetch, that might influence the observed turbulence. For example, atmospheric turbulence measured at a station close to the coast can be very different depending on whether the wind blows from land to sea or from sea to land. The periodicity of the wind direction is accounted for by including its sine and cosine as fetch features instead of the direction angle itself.
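This periodic encoding can be sketched directly (function and variable names are illustrative):

```python
import numpy as np

def fetch_features(wind_dir_deg):
    """Encode wind direction (degrees) as sin/cos pairs so that 0 and 360
    degrees map to identical feature values, preserving periodicity."""
    phi = np.deg2rad(np.asarray(wind_dir_deg, dtype=float))
    return np.column_stack([np.sin(phi), np.cos(phi)])

F = fetch_features([0.0, 90.0, 180.0, 360.0])
```

A raw direction angle would place 1° and 359° far apart numerically even though the flow is nearly identical; the sin/cos pair removes this artificial discontinuity.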
2) Buoyancy and stability
The prime candidate to reflect the influence of buoyancy on
3) Surface energy budget
4) Auxiliary features
Finally, we include several auxiliary features more loosely related to wind shear and buoyancy, such as boundary layer dissipation (BLD) rate, convective available potential energy (CAPE), or the aforementioned GWD. Also, certain daily and seasonal patterns exist in meteorology, which we aim to capture through synthetic time-dependent features. Based on the timestamp of the data point, we compute the sines and cosines (for periodicity) of the normalized hour of the day (h′ = 2π h/24 h), the normalized day of the year (day′ = 2π day/365 days), and the normalized month of the year (month′ = 2π month/12 months).
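The time-feature construction described above can be sketched as follows, using the stated normalizations:

```python
import numpy as np
import pandas as pd

def time_features(timestamps):
    """Synthetic periodic features from the timestamp, following the
    normalizations in the text: h' = 2*pi*h/24, day' = 2*pi*day/365,
    month' = 2*pi*month/12, each encoded as a (sin, cos) pair."""
    t = pd.DatetimeIndex(timestamps)
    h = 2 * np.pi * t.hour / 24
    day = 2 * np.pi * t.dayofyear / 365
    month = 2 * np.pi * t.month / 12
    return np.column_stack([np.sin(h), np.cos(h),
                            np.sin(day), np.cos(day),
                            np.sin(month), np.cos(month)])

F = time_features(pd.date_range("2020-01-01", periods=48, freq="h"))
```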
5. Results: Feature importance
The physics governing optical turbulence is the same everywhere, whether in urban or rural environments or at inland or coastal sites. The modulating processes are always buoyancy and wind shear. However, the ERA5 features that best reflect and predict these processes locally can differ between sites. To assess and quantify potential differences, we present the SHAP-based feature importance values for all trained models in Fig. 3. To make the FI analysis less verbose, we make use of the linearity of the SHAP values [cf. Eq. (4)] and group the SHAP values of features related to similar physical processes as presented in Table 2. These groups are prefixed with Σ.
SHAP value–based FI of all OTCliM models (a) aggregated and (b) per training site. In (b), urban stations are marked with (*), the top 10 features of each station are highlighted in orange, and the global FI averages are indicated as black dashes. (c) Repeating the geopotential height from Fig. 2 to aid geographical interpretation of the results.
Figure 3a depicts the importance distributions of all features for all models. This global view reveals that buoyancy-related features such as radiation ΣR, sensible heat flux QH, and cloud cover Σcc are key variables for
The FI values are displayed per station in Fig. 3b, allowing us to assess the outliers in more detail. The ten features with the highest average FI for each location are highlighted in orange. Since different stations exhibit different top 10 features, 20 features are shown in total, but their markers are colored in light gray if the features are not in the top 10 of the site in question. For reference, the black lines for each feature represent the global average FI values according to Fig. 3a. Comparing all per-station plots on a high level reveals two key points. First, although the FI values associated with the five models trained for each site exhibit some scatter, the overall importance of features at one site is consistent between models. Therefore, we conclude that the models captured features representative of that site throughout the training years. Second, the color coding highlights that different features are important for different sites, suggesting that the processes dominating turbulence vary between locations.
The three urban stations, BKLN, QUEE, and STAT, and the coastal Southold (SOUT) site deviate most obviously from the mean FI distribution. In particular, BKLN and QUEE show an above-average dependency of estimated
Less drastic but distinct differences are also visible between the nonurban models. Wind shear α, for example, has lower-than-average importance assigned for models trained on lake shores [Fredonia (FRED), Burt (BURT), Ontario (ONTA)]. The models of another set of stations [Whitehall (WHIT), Voorheesville (VOOR), Schuylerville (SCHU), Penn Yan (PENN), Owego (OWEG)] picked up the GWD rate. These sites are located in valleys or mountainous areas where gravity waves could modulate near-surface turbulence (Lilly 1972; Palmer et al. 1986). Still, the dependency is small, and stations Chazy (CHAZ) and Red Hook (REDH), located at the end of valleys but still surrounded by mountains, do not depend on GWD. A more detailed study of the sites' climatologies would be needed to confirm this link, which is beyond the scope of this work.
Overall, the FI values of most models represent the known physical dependencies of atmospheric turbulence. That supports our confidence that our OTCliM approach is well suited for MCP. The unconventional features picked up by some models are viewed as an advantage of ML-based methods to arrive at accurate predictions even in complex environments. However, we assume that geographical generalization will be more difficult for such models, which is addressed later in this paper.
6. Results: OTCliM performance
After establishing that all OTCliM models picked up physically meaningful dependencies, we turn our analysis to quantifying their performance. The foundation for this analysis is 85 models trained individually for the 17 NYSM stations and the five training years. The temporal extrapolation capabilities of each model are quantified in section 6a to assess the suitability of OTCliM for MCP. To put the MCP scores achieved by our approach into perspective, we compare them to the two baseline models, W71 and the in situ–based GBM models. Since we have a network of stations available, we also assess the geographical generalizability of the OTCliM models by applying models trained on one site to all other sites. The results of this cross-site evaluation study are discussed in section 6b.
a. Temporal extrapolation
The temporal extrapolation performance of each model
Performance of OTCliM models compared to the baseline models. The heatmaps show the performance of individual OTCliM models trained with 1 year of
The insight that the training year selection has a low influence on the model performance has high practical relevance. It suggests that models can be trained on archive data and do not necessarily require recent observations, provided that the site's climatology remains comparable between the training period and the envisioned period of model application. Drastic changes in the station's surroundings, e.g., through construction projects or climate change, would likely break this assumption and, thus, the model's applicability.
The variability across models trained for different stations is likely due to ERA5 not fully capturing all local effects modulating the turbulence. For example, some landscape features, such as small lakes around PENN and WARS, are smaller than the ERA5 resolution, so they are missed. The relevance of these local features is highlighted when the OTCliM performance (black circles) is compared to that of the in situ baseline models in
A final comparison between OTCliM and the traditional W71 parameterization in the scatterplots of Figs. 4a and 4b shows that OTCliM clearly outperforms W71. That is because the underlying similarity function [cf. Eq. (3)] is empirically determined based on the flow over a flat, unobstructed plain (Wyngaard et al. 1971) and does not adapt to local topography or climatology. This limitation becomes especially evident for the urban sites QUEE and BKLN, where
b. Cross-site evaluation
Next, we assess the geographical generalization capabilities of the OTCliM models by evaluating their performance across different sites. Each model
The c/s evaluation performance of OTCliM models. The rows of the heatmaps present the performance of the models trained on site s when evaluated on all other sites
Overall, the geographical generalization works very well for the nonurban sites. For half of all cases, correlation and RMSE performance degrades by not more than 16% for r and 25% for ϵ compared to the original MCP scores. Considering 75% of the cases, corresponding to the c/s scores of almost all nonurban sites, the absolute performance falls as low as r = 0.56 and ϵ = 0.63, which is still significantly better than the W71 baseline scores of
As expected from the FI analysis, the urban models do not perform well in nonurban locations. Both relative and absolute c/s scores in Fig. 5 are low for STAT, QUEE, and BKLN, resulting in long tails in the c/s histograms. However, the urban models perform better at other urban sites, as indicated by the small square of higher performance in the bottom-right corner of the c/s matrices. It seems that the performance of urban generalization could be linked to the degree of urbanization. BKLN is the most urban site, QUEE has more vegetation than BKLN, and STAT has more vegetation than QUEE, which is reflected in the progression of c/s performance: BKLN generalizes most poorly and depends on the most uncommon features, QUEE performs slightly better, and STAT performs even better with FI values similar to the average case. Nevertheless, this observation might be unique to these New York City stations, which are all relatively close to each other and experience similar mesoscale conditions.
7. Conclusions
This study presents OTCliM, a gradient-boosting-based measure–correlate–predict approach to obtain climatologies of optical turbulence strength from 1 year of
The key practical conclusion from our work is that 1 year of
Finally, we believe that OTCliM can also be highly relevant for
Despite all these advantages, the critical assumption of OTCliM is that the observed year of
Acknowledgments.
This publication is part of the project FREE—Optical Wireless Superhighways: Free photons (at home and in space) (with project P19-13) of the research programme TTW-Perspectief, which is (partly) financed by the Dutch Research Council (NWO). This research is made possible by the New York State (NYS) Mesonet. Original funding for the NYS Mesonet (NYSM) buildup was provided by Federal Emergency Management Agency Grant FEMA-4085-DR-NY. The continued operation and maintenance of the NYSM is supported by the National Mesonet Program, University at Albany, federal and private grants, and others. Sukanta Basu is grateful for financial support from the State University of New York's Empire Innovation Program.
Data availability statement.
The Python code for training and the trained models are available on
APPENDIX A
Ratio of Stable to Unstable Conditions in Dataset Before and After QA/QC
A three-step quality assurance (QA) and quality control (QC) procedure is described in section 4a(2), which aims at filtering unphysical values from our
Distribution of bulk potential temperature gradient Γ = (θ9 − θ2)/7 m before (blue) and after (orange) applying the QA/QC steps described in section 4a(2). The split between unstable (Γ < 0, solid bar) and stable (Γ > 0, hatched bar) atmospheric conditions is visualized by the bar charts in each panel and quantified in the respective legend. The QUEE site is omitted entirely because of a malfunctioning instrument.
APPENDIX B
Scaling of log10 Target Data
As described in section 4a(3) of the main text, the
Comparison between distributions of (left) unscaled and (right) scaled
APPENDIX C
Details about Baseline Models
This section details how the traditional W71
a. W71 parameterization
All variables on which the W71 equations [cf. Eqs. (1)–(3)] depend are available from ERA5 (cf. Table 2). Only the dynamical sensible heat flux QH from ERA5 needs to be converted to its kinematic form
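The conversion itself follows from QH = ρ c_p w'T'; a short sketch (ERA5 sign conventions for the surface flux are set aside here, and the density is taken from the ideal gas law for dry air):

```python
def kinematic_heat_flux(QH, T_k, P_pa):
    """Convert the dynamical sensible heat flux QH (W m^-2) to its
    kinematic form w'T' (K m s^-1) via QH = rho * cp * w'T'.
    Air density rho from the ideal gas law; cp for dry air."""
    R_d = 287.05          # J kg^-1 K^-1, dry-air gas constant
    c_p = 1005.0          # J kg^-1 K^-1, specific heat at constant pressure
    rho = P_pa / (R_d * T_k)
    return QH / (rho * c_p)

wT = kinematic_heat_flux(QH=100.0, T_k=288.0, P_pa=101325.0)
```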
b. In situ GBM models
These upper-bound models are similar to the OTCliM models but use different input data. The OTCliM model of a specific site s is a GBM model (cf. section 3a) that employs ERA5 input data extracted from the grid box containing site s and
REFERENCES
Andreas, E. L., 1988: Estimating C_n^2 over snow and sea ice from meteorological data. J. Opt. Soc. Amer. A, 5, 481–495, https://doi.org/10.1364/JOSAA.5.000481.
Arockia Bazil Raj, A., J. Arputha Vijaya Selvi, and S. Durairaj, 2015: Comparison of different models for ground-level atmospheric turbulence strength (C_n^2) prediction with a new model according to local weather data for FSO applications. Appl. Opt., 54, 802–815, https://doi.org/10.1364/AO.54.000802.
Beason, M., G. Potvin, D. Sprung, J. McCrae, and S. Gladysz, 2024: Comparative analysis of C_n^2 estimation methods for sonic anemometer data. Appl. Opt., 63, E94–E106, https://doi.org/10.1364/AO.520976.
Beyrich, F., O. K. Hartogensis, H. A. R. de Bruin, and H. C. Ward, 2021: Scintillometers. Springer Handbook of Atmospheric Measurements, T. Foken, Ed., Springer Handbooks, Springer, 969–997, https://doi.org/10.1007/978-3-030-52171-4_34.
Bolbasova, L. A., A. A. Andrakhanov, and A. Y. Shikhovtsev, 2021: The application of machine learning to predictions of optical turbulence in the surface layer at Baikal Astrophysical Observatory. Mon. Not. Roy. Astron. Soc., 504, 6008–6017, https://doi.org/10.1093/mnras/stab953.
Brotzge, J. A., and Coauthors, 2020: A technical overview of the New York State Mesonet standard network. J. Atmos. Oceanic Technol., 37, 1827–1845, https://doi.org/10.1175/JTECH-D-19-0220.1.
Carta, J. A., S. Velázquez, and P. Cabrera, 2013: A review of Measure-Correlate-Predict (MCP) methods used to estimate long-term wind characteristics at a target site. Renewable Sustainable Energy Rev., 27, 362–400, https://doi.org/10.1016/j.rser.2013.07.004.
Chen, T., and C. Guestrin, 2016: XGBoost: A scalable tree boosting system. Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, San Francisco, CA, Association for Computing Machinery, 785–794, https://doi.org/10.1145/2939672.2939785.
Cherubini, T., and S. Businger, 2013: Another look at the refractive index structure function. J. Appl. Meteor. Climatol., 52, 498–506, https://doi.org/10.1175/JAMC-D-11-0263.1.
Cherubini, T., R. Lyman, and S. Businger, 2021: Forecasting seeing for the Maunakea observatories with machine learning. Mon. Not. Roy. Astron. Soc., 509, 232–245, https://doi.org/10.1093/mnras/stab2916.
Copernicus Climate Change Service, 2021: CERRA sub-daily regional reanalysis data for Europe on single levels from 1984 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS), accessed 1 August 2024, https://doi.org/10.24381/CDS.622A565A.
Fuchs, C., and F. Moll, 2015: Ground station network optimization for space-to-ground optical communication links. J. Opt. Commun. Networking, 7, 1148–1159, https://doi.org/10.1364/JOCN.7.001148.
Grachev, A. A., E. L. Andreas, C. W. Fairall, P. S. Guest, and P. O. G. Persson, 2013: The critical Richardson number and limits of applicability of local similarity theory in the stable boundary layer. Bound.-Layer Meteor., 147, 51–82, https://doi.org/10.1007/s10546-012-9771-0.
Hardy, J. W., 1998: Adaptive Optics for Astronomical Telescopes. Oxford Series in Optical and Imaging Sciences, Vol. 16, Oxford University Press, 438 pp.
He, P., and S. Basu, 2016: Development of similarity relationships for energy dissipation rate and temperature structure parameter in stably stratified flows: A direct numerical simulation approach. Environ. Fluid Mech., 16, 373–399, https://doi.org/10.1007/s10652-015-9427-y.
Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803.
Hill, F., and Coauthors, 2006: Site testing for the advanced technology solar telescope. Proc. SPIE, 6276, 62671T, https://doi.org/10.1117/12.673677.
ITU, 2007: Prediction methods required for the design of terrestrial free-space optical links. Recommendation Rec. ITU-R P.1814, International Telecommunication Union, 12 pp., https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.1814-0-200708-I!!PDF-E.pdf.
Jahid, A., M. H. Alsharif, and T. J. Hall, 2022: A contemporary survey on free space optical communication: Potentials, technical challenges, recent advances and research direction. J. Network Comput. Appl., 200, 103311, https://doi.org/10.1016/j.jnca.2021.103311.
Jellen, C., J. Burkhardt, C. Brownell, and C. Nelson, 2020: Machine learning informed predictor importance measures of environmental parameters in maritime optical turbulence. Appl. Opt., 59, 6379–6389, https://doi.org/10.1364/AO.397325.
Jellen, C., M. Oakley, C. Nelson, J. Burkhardt, and C. Brownell, 2021: Machine-learning informed macro-meteorological models for the near-maritime environment. Appl. Opt., 60, 2938–2951, https://doi.org/10.1364/AO.416680.
Kaimal, J. C., and J. E. Gaynor, 1991: Another look at sonic thermometry. Bound.-Layer Meteor., 56, 401–410, https://doi.org/10.1007/BF00119215.
Kartal, S., S. Basu, and S. J. Watson, 2023: A decision-tree-based measure–correlate–predict approach for peak wind gust estimation from a global reanalysis dataset. Wind Energy Sci., 8, 1533–1551, https://doi.org/10.5194/wes-8-1533-2023.
Kaushal, H., and G. Kaddoum, 2017: Optical communication in space: Challenges and mitigation techniques. IEEE Commun. Surv. Tutorials, 19, 57–96, https://doi.org/10.1109/COMST.2016.2603518.
Ke, G., Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, 2017: LightGBM: A highly efficient gradient boosting decision tree. NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., 3149–3157, https://dl.acm.org/doi/10.5555/3294996.3295074.
Lam, R., and Coauthors, 2023: Learning skillful medium-range global weather forecasting. Science, 382, 1416–1421, https://doi.org/10.1126/science.adi2336.
Lang, S., and Coauthors, 2024: AIFS—ECMWF’s data-driven forecasting system. arXiv, 2406.01465v2, https://doi.org/10.48550/arXiv.2406.01465.
Lilly, D. K., 1972: Wave momentum flux—A GARP problem. Bull. Amer. Meteor. Soc., 53, 17–24, https://doi.org/10.1175/1520-0477-53.1.17.
Lundberg, S. M., and S.-I. Lee, 2017: A unified approach to interpreting model predictions. NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., 4768–4777, https://dl.acm.org/doi/10.5555/3295222.3295230.
Masciadri, E., J. Vernin, and P. Bougeault, 1999: 3D mapping of optical turbulence using an atmospheric numerical model. Astron. Astrophys. Suppl. Ser., 137, 185–202, https://doi.org/10.1051/aas:1999474.
Mesinger, F., and Coauthors, 2006: North American regional reanalysis. Bull. Amer. Meteor. Soc., 87, 343–360, https://doi.org/10.1175/BAMS-87-3-343.
Milli, J., T. Rojas, B. Courtney-Barrer, F. Bian, J. Navarrete, F. Kerber, and A. Otarola, 2020: Turbulence nowcast for the Cerro Paranal and Cerro Armazones observatory sites. arXiv, 2012.05674v2, https://doi.org/10.48550/arXiv.2012.05674.
Moene, A. F., 2003: Effects of water vapour on the structure parameter of the refractive index for near-infrared radiation. Bound.-Layer Meteor., 107, 635–653, https://doi.org/10.1023/A:1022807617073.
Molnar, C., 2022: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2nd ed. Christoph Molnar, 317 pp.
Monin, A. S., and A. M. Obukhov, 1954: Basic laws of turbulent mixing in the surface layer of the atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR, 24 (151), 163–187.
NCEP, 2015: NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive. Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory, accessed 1 August 2024, https://doi.org/10.5065/D65D8PWK.
NYS Mesonet, 2023: New York State Mesonet flux network data. 9 pp., https://nysmesonet.org/documents/NYSM_Readme_Flux.pdf.
Palmer, T. N., G. J. Shutts, and R. Swinbank, 1986: Alleviation of a systematic westerly bias in general circulation and numerical weather prediction models through an orographic gravity wave drag parametrization. Quart. J. Roy. Meteor. Soc., 112, 1001–1039, https://doi.org/10.1002/qj.49711247406.
Pham, T. V., H. Yamano, and I. Susumu, 2023: A placement method of ground stations for optical satellite communications considering cloud attenuation. IEICE Commun. Express, 12, 568–571, https://doi.org/10.23919/comex.2023XBL0092.
Pierzyna, M., R. Saathof, and S. Basu, 2023a: A multi-physics ensemble modeling framework for reliable C_n^2 estimation. Proc. SPIE, 12731, 127310N, https://doi.org/10.1117/12.2680997.
Pierzyna, M., R. Saathof, and S. Basu, 2023b: Π-ML: A dimensional analysis-based machine learning parameterization of optical turbulence in the atmospheric surface layer. Opt. Lett., 48, 4484–4487, https://doi.org/10.1364/OL.492652.
Pierzyna, M., O. Hartogensis, S. Basu, and R. Saathof, 2024: Intercomparison of flux-, gradient-, and variance-based optical turbulence (C_n^2) parameterizations. Appl. Opt., 63, E107–E119, https://doi.org/10.1364/AO.519942.
Poulenard, S., M. Crosnier, and A. Rissons, 2015: Ground segment design for broadband geostationary satellite with optical feeder link. J. Opt. Commun. Networking, 7, 325–336, https://doi.org/10.1364/JOCN.7.000325.
Rotach, M. W., and Coauthors, 2005: BUBBLE—An urban boundary layer meteorology project. Theor. Appl. Climatol., 81, 231–261, https://doi.org/10.1007/s00704-004-0117-9.
Sadot, D., and N. S. Kopeika, 1992: Forecasting optical turbulence strength on the basis of macroscale meteorology and aerosols: Models and validation. Opt. Eng., 31, 200–212, https://doi.org/10.1117/12.56059.
Savage, M. J., 2009: Estimation of evaporation using a dual-beam surface layer scintillometer and component energy balance measurements. Agric. For. Meteor., 149, 501–517, https://doi.org/10.1016/j.agrformet.2008.09.012.
Schöck, M., and Coauthors, 2009: Thirty meter telescope site testing I: Overview. Publ. Astron. Soc. Pac., 121, 384–395, https://doi.org/10.1086/599287.
Spiliotis, E., 2022: Decision trees for time-series forecasting. Foresight Int. J. Appl. Forecasting, 1, 30–44.
Stull, R. B., 1988: An Introduction to Boundary Layer Meteorology. Kluwer Academic Publishers, 666 pp.
Su, C., X. Wu, S. Wu, Q. Yang, Y. Han, C. Qing, T. Luo, and Y. Liu, 2021: In situ measurements and neural network analysis of the profiles of optical turbulence over the Tibetan Plateau. Mon. Not. Roy. Astron. Soc., 506, 3430–3438, https://doi.org/10.1093/mnras/stab1792.
Su, C.-H., and Coauthors, 2019: BARRA v1.0: The Bureau of Meteorology atmospheric high-resolution Regional Reanalysis for Australia. Geosci. Model Dev., 12, 2049–2068, https://doi.org/10.5194/gmd-12-2049-2019.
Vorontsov, A. M., M. A. Vorontsov, G. A. Filimonov, and E. Polnau, 2020: Atmospheric turbulence study with deep machine learning of intensity scintillation patterns. Appl. Sci., 10, 8136, https://doi.org/10.3390/app10228136.
Wang, C., Q. Wu, M. Weimer, and E. Zhu, 2021: FLAML: A fast and lightweight AutoML library. Proc. Mach. Learn. Syst., 3, 434–447.
Wang, H., B. Li, X. Wu, C. Liu, Z. Hu, and P. Xu, 2015: Prediction model of atmospheric refractive index structure parameter in coastal area. J. Mod. Opt., 62, 1336–1346, https://doi.org/10.1080/09500340.2015.1037801.
Wang, Y., and S. Basu, 2016: Using an artificial neural network approach to estimate surface-layer optical turbulence at Mauna Loa, Hawaii. Opt. Lett., 41, 2334–2337, https://doi.org/10.1364/OL.41.002334.
Wesely, M. L., 1976: The combined effect of temperature and humidity fluctuations on refractive index. J. Appl. Meteor., 15, 43–49, https://doi.org/10.1175/1520-0450(1976)015<0043:TCEOTA>2.0.CO;2.
WMO, 2023: Volume III—Observing systems. WMO 8, 428 pp., https://community.wmo.int/en/activity-areas/imop/wmo-no_8.
Wyngaard, J. C., Y. Izumi, and S. A. Collins, 1971: Behavior of the refractive-index-structure parameter near the ground. J. Opt. Soc. Amer., 61, 1646–1650, https://doi.org/10.1364/JOSA.61.001646.
Zhang, R., J. Huang, X. Wang, J. A. Zhang, and F. Huang, 2016: Effects of precipitation on sonic anemometer measurements of turbulent fluxes in the atmospheric surface layer. J. Ocean Univ. China, 15, 389–398, https://doi.org/10.1007/s11802-016-2804-4.