Abstract
Numerical weather prediction centers rely on the Gridded Binary Second Edition (GRIB2) file format to efficiently compress and disseminate model output as two-dimensional grids. User processing time and storage requirements are high if many GRIB2 files with size O(100 MB, where B = bytes) need to be accessed routinely. We illustrate one approach to overcome such bottlenecks by reformatting GRIB2 model output from the High-Resolution Rapid Refresh (HRRR) model of the National Centers for Environmental Prediction to a cloud-optimized storage type, Zarr. Archives of the original HRRR GRIB2 files and the resulting Zarr stores on Amazon Web Services (AWS) Simple Storage Service (S3) are available publicly through the Amazon Sustainability Data Initiative. Every hour, the HRRR model produces 18- or 48-hourly GRIB2 surface forecast files of size O(100 MB). To simplify access to the grids in the surface files, we reorganize the HRRR model output for each variable and vertical level into Zarr stores of size O(1 MB), with chunks O(10 kB) containing all forecast lead times for 150 × 150 gridpoint subdomains. Open-source libraries provide efficient access to the compressed Zarr stores using cloud or local computing resources. The HRRR-Zarr approach is illustrated for common applications of sensible weather parameters, including real-time alerts for high-impact situations and retrospective access to output from hundreds to thousands of model runs. For example, time series of surface pressure forecast grids can be accessed using AWS cloud computing resources approximately 40 times as fast from the HRRR-Zarr store as from the HRRR-GRIB2 archive.
Significance Statement
The rapid evolution of computing power and data storage has enabled numerical weather prediction forecasts to be generated faster and with more detail than ever before. The increased temporal and spatial resolution of forecast model output can force end users with finite memory and storage capabilities to make pragmatic decisions about which data to retrieve, archive, and process for their applications. We illustrate an approach to alleviate this access bottleneck for common weather analysis and forecasting applications by using the Amazon Web Services (AWS) Simple Storage Service (S3) to store output from the High-Resolution Rapid Refresh (HRRR) model in Zarr format. Zarr is a relatively new data storage format that is flexible, compressible, and designed to be accessed with open-source software using either cloud or local computing resources. The HRRR-Zarr dataset is publicly available as part of the Amazon Sustainability Data Initiative.
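The chunk geometry described above lends itself to simple index arithmetic: a point's full forecast time series lives in exactly one 150 × 150 gridpoint chunk. The sketch below illustrates the mapping (the grid dimensions and function name are illustrative assumptions, not the dataset's actual layout or API):

```python
import numpy as np

# Approximate HRRR CONUS grid dimensions and the 150 x 150 chunking
# described above; these constants are illustrative assumptions.
NY, NX = 1059, 1799
CHUNK = 150

def chunk_for_point(j, i, chunk=CHUNK):
    """Return the (chunk_row, chunk_col) holding gridpoint (j, i) and
    the point's local indices within that chunk."""
    cj, lj = divmod(j, chunk)
    ci, li = divmod(i, chunk)
    return (cj, ci), (lj, li)

# A time series for one point touches a single O(10 kB) chunk, rather
# than requiring reads from dozens of O(100 MB) GRIB2 files.
(cj, ci), (lj, li) = chunk_for_point(500, 900)
```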
Abstract
The detection of multilayer clouds in the atmosphere can be particularly challenging from passive visible and infrared imaging radiometers since cloud boundary information is limited primarily to the topmost cloud layer. Yet detection of low clouds in the atmosphere is important for a number of applications, including aviation nowcasting and general weather forecasting. In this work, we develop pixel-based machine learning methods of detecting low clouds, with a focus on improving detection in multilayer cloud situations and specific attention given to improving the Cloud Cover Layers (CCL) product, which assigns cloudiness in a scene into vertical bins. The random forest (RF) and neural network (NN) implementations use inputs from a variety of sources, including GOES Advanced Baseline Imager (ABI) visible radiances, infrared brightness temperatures, auxiliary information about the underlying surface, and relative humidity (which holds some utility as a cloud proxy). Training and independent validation enlist near-global, actively sensed cloud boundaries from the radar and lidar systems on board the CloudSat and CALIPSO satellites. We find that the RF and NN models have similar performance. The probability of detection (PoD) of low cloud increases from 0.685 to 0.815 when using the RF technique instead of the CCL methodology, while the false alarm ratio decreases. The improved PoD of low cloud is particularly notable for scenes that appear to be cirrus from an ABI perspective, increasing from 0.183 to 0.686. Various extensions of the model are discussed, including a nighttime-only algorithm and expansion to other satellite sensors.
Significance Statement
Using satellites to detect the heights of clouds in the atmosphere is important for a variety of weather applications, including aviation weather forecasting. However, detecting low clouds can be challenging if there are other clouds above them. To address this, we have developed machine learning–based models that can be used with passive satellite instruments. These models use satellite observations at visible and infrared wavelengths, an estimate of relative humidity in the atmosphere, and geographic and surface-type information to predict whether low clouds are present. Our results show that these models have significant skill at predicting low clouds, even in the presence of higher cloud layers.
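The skill scores quoted above follow from a 2 × 2 contingency table of detections against the actively sensed CloudSat/CALIPSO truth. As a reference for how the two scores are defined (the counts below are hypothetical, not the paper's):

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection (PoD) and false alarm ratio (FAR) from
    a 2x2 contingency table: hits and misses partition the truth-cloudy
    cases, false alarms are truth-clear cases flagged as cloudy."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```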
Abstract
An instrumentation package for wind and turbulence observations in the atmospheric boundary layer has been developed for an unmanned aerial vehicle (UAV) called BOREAL. BOREAL is a fixed-wing UAV built by the BOREAL company, which weighs up to 25 kg (5 kg of payload) and has a wingspan of 4.2 m. With a light payload and optimal weather conditions, it has a flight endurance of 9 h. The instrumental payload was designed to measure every parameter required for the computation of the three wind components at a rate of 100 s−1, which is fast enough to capture turbulence fluctuations: a GPS–inertial measurement unit (IMU) platform measures the three components of the groundspeed as well as the attitude angles, and the airplane nose has been replaced by a five-hole probe that measures the angles of attack and sideslip, according to the so-called radome technique. This probe was calibrated using computational fluid dynamics (CFD) simulations and wind tunnel tests. The remaining instruments are a Pitot tube for static and dynamic pressure measurement and temperature/humidity sensors in dedicated housings. The optimal airspeed at which vibrations are reduced to an acceptable level was determined from qualification flights. With appropriate flight patterns, the reliability of the mean wind estimates could be assessed through self-consistency and through comparison with observations performed at 60 m on an instrumented tower. Promising first observations of turbulence up to frequencies around 10 Hz, corresponding to a spatial resolution on the order of 3 m, are presented.
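The wind computation these measurements support can be sketched generically: the earth-frame wind is the groundspeed minus the airspeed vector rotated from body axes using the attitude angles. The function below is a simplified illustration under an assumed ZYX Euler convention, not the BOREAL processing code:

```python
import numpy as np

def wind_vector(v_ground, tas, alpha, beta, roll, pitch, yaw):
    """Earth-frame wind = groundspeed - airspeed rotated to earth axes.
    tas is true airspeed; alpha/beta are attack/sideslip angles from the
    five-hole probe; attitude angles come from the GPS-IMU (radians).
    Simplified sketch; conventions here are assumptions."""
    # Airspeed vector in body axes from true airspeed and flow angles
    v_air_body = tas * np.array([
        np.cos(alpha) * np.cos(beta),
        np.sin(beta),
        np.sin(alpha) * np.cos(beta),
    ])
    # Body-to-earth rotation matrix (yaw-pitch-roll, ZYX convention)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    return v_ground - R @ v_air_body
```

For example, flying due north at 20 m s−1 groundspeed with a 25 m s−1 true airspeed and zero attitude/flow angles implies a 5 m s−1 headwind.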
Abstract
The Clouds and the Earth’s Radiant Energy System (CERES) project has provided the climate community with 20 years of globally observed top-of-atmosphere (TOA) fluxes critical for climate and cloud feedback studies. The CERES Flux By Cloud Type (FBCT) product contains radiative fluxes by cloud type, which can provide more stringent constraints when validating models and also reveal more insight into the interactions between clouds and climate. The FBCT product provides 1° regional daily and monthly shortwave (SW) and longwave (LW) cloud-type fluxes and cloud properties sorted by seven pressure layers and six optical depth bins. Historically, cloud-type fluxes have been computed using radiative transfer models based on observed cloud properties. Instead of relying on radiative transfer models, the FBCT product utilizes Moderate Resolution Imaging Spectroradiometer (MODIS) radiances partitioned by cloud type within a CERES footprint to estimate the cloud-type broadband fluxes. The MODIS multichannel-derived broadband fluxes were compared with the CERES observed footprint fluxes and were found to be within 1% and 2.5% for LW and SW, respectively, as well as being mostly free of cloud property dependencies. These biases are mitigated by constraining the cloud-type fluxes within each footprint with the CERES Single Scanner Footprint (SSF) observed flux. The FBCT all-sky and clear-sky monthly averaged fluxes were found to be consistent with the CERES SSF1deg product. Several examples of FBCT data are presented to highlight their utility for scientific applications.
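The footprint-level constraint can be illustrated with a simple rescaling: per-cloud-type fluxes are adjusted so that their cloud-fraction-weighted sum matches the CERES-observed footprint flux. This sketch assumes a multiplicative constraint, which may differ from the FBCT implementation:

```python
def constrain_fluxes(type_fluxes, fractions, observed_flux):
    """Scale per-cloud-type fluxes so their fraction-weighted sum equals
    the observed footprint flux (illustrative multiplicative form).
    fractions are the area fractions of each cloud type, summing to 1."""
    unconstrained = sum(f * w for f, w in zip(type_fluxes, fractions))
    scale = observed_flux / unconstrained
    return [f * scale for f in type_fluxes]
```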
Abstract
The accumulated remote sensing data of altimeters and scatterometers have provided new opportunities for ocean state forecasting and have improved our knowledge of ocean–atmosphere exchanges. Multivariate, multistep, spatiotemporal sequence forecasts of sea level anomalies (SLA) from different data modalities, however, remain challenging. In this paper, we present a novel hybrid multivariate deep neural network, named HMnet3, for SLA forecasting in the South China Sea (SCS). First, a spatiotemporal sequence forecasting network is trained by an improved convolutional long short-term memory (ConvLSTM) network using a channelwise attention mechanism and multivariate data from 1993 to 2015. Then a time series forecasting network is trained by an improved long short-term memory (LSTM) network that incorporates ensemble empirical mode decomposition (EEMD). Finally, the two networks are combined by a successive correction method to produce SLA forecasts for lead times of up to 15 days, with a special focus on the open sea and coastal regions of the SCS. During the testing period of 2016–18, the performance of HMnet3 with sea surface temperature anomaly (SSTA), wind speed anomaly (SPDA), and SLA data is much better than that of state-of-the-art dynamic and statistical (ConvLSTM, persistence, and climatology) forecast models. Stricter testbeds for trial simulation experiments with real-time datasets are investigated, where the eddy classification metrics of HMnet3 are favorable for all properties, especially for those of small-scale eddies.
Abstract
Observations of thermodynamic and kinematic parameters associated with derivatives of the thermodynamic and wind fields, namely, advection, vorticity, divergence, and deformation, can be obtained by applying Green’s theorem to a network of observing sites. The five nodes that comprise the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) profiling network, spaced 50–80 km apart, are used to obtain measurements of these parameters over a finite region. To demonstrate the applicability of this technique at this location, it is first applied to gridded model output from the High-Resolution Rapid Refresh (HRRR) numerical weather prediction model, using profiles from the locations of ARM network sites, so that values calculated from this method can be directly compared to finite difference calculations. Good agreement is found between both approaches as well as between the model and values calculated from the observations. Uncertainties for the observations are obtained via a Monte Carlo process in which the profiles are randomly perturbed in accordance with their known error characteristics. The existing size of the ARM network is well suited to capturing these parameters, with strong correlations to model values and smaller uncertainties than a more closely spaced network, yet it is small enough that it avoids the tendency for advection to go to zero over a large area.
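The Green's theorem approach can be sketched for a polygon of observing sites: the area-mean divergence and vorticity follow from line integrals of the winds around the network boundary. A minimal illustration, assuming trapezoidal integration along each edge (not the ARM processing code):

```python
import numpy as np

def kinematics(x, y, u, v):
    """Area-mean divergence and vorticity from winds (u, v) observed at
    the vertices of a counterclockwise polygon of stations (x, y in m),
    via Green's theorem: div = (1/A) closed-integral of (u dy - v dx),
    vort = (1/A) closed-integral of (u dx + v dy)."""
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    un, vn = np.roll(u, -1), np.roll(v, -1)
    dx, dy = xn - x, yn - y
    area = 0.5 * np.sum(x * yn - xn * y)       # shoelace formula
    um, vm = 0.5 * (u + un), 0.5 * (v + vn)    # edge-mean winds
    div = np.sum(um * dy - vm * dx) / area
    vort = np.sum(um * dx + vm * dy) / area
    return div, vort
```

For a solid-body rotation u = −ωy, v = ωx, the recovered vorticity is 2ω and the divergence is zero, which provides a convenient sanity check.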
Abstract
Using NOAA’s S-band High-Power Snow-Level Radar (HPSLR), a technique for estimating the raindrop size distribution (DSD) above the radar is presented. This technique assumes the DSD can be described by a four-parameter generalized gamma distribution (GGD). Using the radar’s measured average Doppler velocity spectrum and a value (assumed, measured, or estimated) of the vertical air motion w, an estimate of the GGD is obtained. Four different methods can be used to obtain w. One method, which estimates the mean mass-weighted raindrop diameter Dm from the measured reflectivity Z, produces realistic DSDs compared to prior literature examples. These estimated DSDs provide evidence that the radar can retrieve the smaller drop sizes constituting the “drizzle” mode part of the DSD. This estimation technique was applied to 19 h of observations from Hankins, North Carolina. Results support the concept that DSDs can be modeled using GGDs with a limited range of parameters. Further work is needed to validate the described technique for estimating DSDs in more varied precipitation types and to verify the vertical air motion estimates.
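One common four-parameter GGD form is N(D) = N0 D^μ exp[−(ΛD)^γ]; the paper's exact parameterization may differ, so the sketch below is illustrative only. For γ = 1 it reduces to a gamma DSD with an analytic integral, which makes a convenient check on moment computations:

```python
import numpy as np

def ggd(D, N0, mu, lam, gamma_p):
    """Generalized gamma drop size distribution
    N(D) = N0 * D**mu * exp(-(lam*D)**gamma_p),
    with four parameters (N0, mu, lam, gamma_p). One common form;
    the paper's exact parameterization is an assumption here."""
    D = np.asarray(D, dtype=float)
    return N0 * D**mu * np.exp(-(lam * D)**gamma_p)

# Total number concentration by trapezoidal integration over diameter.
# For gamma_p = 1, the analytic value is N0 * Gamma(mu+1) / lam**(mu+1).
D = np.linspace(0.0, 20.0, 20001)
n = ggd(D, 8000.0, 2.0, 2.0, 1.0)
Nt = float(np.sum(0.5 * (n[1:] + n[:-1])) * (D[1] - D[0]))
```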
Abstract
We present a data assimilation package for use with ocean circulation models in analysis, forecasting, and system evaluation applications. The basic functionality of the package is centered on multivariate linear statistical estimation for a given predicted/background ocean state, observations, and error statistics. Novel features of the package include support for multiple covariance models and the solution of the least squares normal equations using either the covariance matrix or its inverse, the information matrix. The main focus of this paper, however, is on the solution of the analysis equations using the information matrix, which offers several advantages for solving large problems efficiently. Details of the parameterization of the inverse covariance using Markov random fields are provided, and its relationship to finite-difference discretizations of diffusion equations is pointed out. The package can assimilate a variety of observation types from both remote sensing and in situ platforms. The performance of the data assimilation methodology implemented in the package is demonstrated with a yearlong global ocean hindcast with a 1/4° ocean model. The code is implemented in modern Fortran; supports distributed-memory, shared-memory, and multicore architectures; and uses Climate and Forecast (CF)–compliant Network Common Data Form (netCDF) for input/output. The package is freely available with an open source license from www.tendral.com/tsis/.
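The information-form solution can be sketched generically: the analysis increment solves (B⁻¹ + HᵀR⁻¹H) δx = HᵀR⁻¹(y − Hx_b), so only the inverse background covariance is needed, and sparsity of B⁻¹ (as in a Markov random field) can be exploited. A generic least-squares sketch, not the package's actual interface:

```python
import numpy as np

def analysis_information_form(xb, y, H, B_inv, R_inv):
    """Linear statistical analysis solved in information (inverse
    covariance) form: (B^-1 + H^T R^-1 H) dx = H^T R^-1 (y - H xb),
    xa = xb + dx. Names and signature are assumptions for illustration."""
    A = B_inv + H.T @ R_inv @ H           # information (precision) matrix
    b = H.T @ R_inv @ (y - H @ xb)        # weighted innovation
    return xb + np.linalg.solve(A, b)
```

With equal background and observation error variances and direct observations (H = I), the analysis lands halfway between background and observations, matching the covariance-form (Kalman gain) solution.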
Abstract
A detailed description is given of how the liquid water content (LWC) and the ice water content (IWC) can be determined accurately and absolutely from the measured water Raman spectra of clouds. All instrumental and spectroscopic parameters that affect the accuracy of the water-content measurement are discussed and quantified; specifically, these are the effective absolute differential Raman backscattering cross section of water vapor
Abstract
Producing reliable results from numerical wave models implemented over vast ocean areas is a time-consuming process. In this regard, identifying spatial areas with maximum similarity in wave climate and determining representative point locations for these areas can play an important role in climate research and in engineering applications. To deal with this issue, we apply a state-of-the-art clustering method, Geo-SOM, to determine geographical areas with similar wave regimes in terms of mean wave direction (MWD), mean wave period (T0), and significant wave height (Hs). Although this method has many strengths, a weakness is its detection and accounting of the most extreme and rare events. To resolve this deficiency, an initial preprocessing method (called PG-Geo-SOM) is applied. To evaluate the performance of this method, extreme wave parameters, including Hs and T0, are calculated. We simulate the present climate, represented as 1979–2017, compared to the future climate, 2060–98, following the Intergovernmental Panel on Climate Change (IPCC) future scenario RCP8.5 in the northwestern Atlantic Ocean. In this approach, the wave parameter data are divided into distinct groups, or clusters, motivated by their geographical positions. For each cluster, the centroid spatial point and the time series of data are extracted for Hs, MWD, and T0. Extreme values are estimated for 5-, 10-, 25-, 50-, and 100-yr return periods, using Gumbel, exponential, and Weibull stochastic models, for both present and future climates. Results show that for parameter T0, the impact of climate change for the study area is a decreasing trend, while for Hs, increasing trends are estimated in coastal and shelf areas up to about 1000 km from the coastline, and decreasing trends are obtained in open-ocean areas far from the coast.
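The return-level estimates can be illustrated with the Gumbel model: fit location and scale to a series of annual maxima, then evaluate the quantile for the desired return period. The sketch below uses method-of-moments fitting, which may differ from the study's fitting approach:

```python
import numpy as np

def gumbel_return_level(annual_maxima, T):
    """T-year return level from a Gumbel fit to annual maxima (e.g. Hs)
    using the method of moments: scale = s*sqrt(6)/pi, location =
    mean - 0.5772*scale, then invert the Gumbel CDF at 1 - 1/T.
    A simplified sketch; other fitting methods (e.g. MLE) are common."""
    m = np.mean(annual_maxima)
    s = np.std(annual_maxima, ddof=1)
    beta = s * np.sqrt(6.0) / np.pi          # Gumbel scale
    mu = m - 0.5772156649 * beta             # location (Euler-Mascheroni)
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))
```

By construction the return level grows with the return period, so the 100-yr value always exceeds the 5-yr value for the same series.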