Search Results
You are looking at items 31–40 of 60 for
- Author or Editor: Yi Jin
Abstract
This paper proposes a new method to properly define and accurately determine the vortex center of a model-predicted tropical cyclone (TC) from a dynamic perspective. Ideally, a dynamically determined TC vortex center should maximize the gradient wind balance or, equivalently, minimize the gradient wind imbalance measured by an energy norm over the TC vortex. In practice, however, such an energy norm cannot be used to easily and unambiguously determine the TC vortex center. An alternative yet practical approach is developed to dynamically and unambiguously define the TC vortex center. In this approach, the TC vortex core of near-solid-body rotation is modeled by a simple parametric vortex constrained by the gradient wind balance. The modeled vortex can then simultaneously fit the perturbation pressure and streamfunction of the TC vortex part (extracted from the model-predicted fields) over the TC vortex core area (within the radius of maximum tangential wind), with the misfit measured by a properly defined cost function. Minimizing this cost function yields the desired dynamic optimality condition that can uniquely define the TC vortex center. Using this dynamic optimality condition, a new method is developed in the form of an iterative least squares fit to accurately determine the TC vortex center. The new method is shown to be efficient and effective for finding the TC vortex center that accurately satisfies the dynamic optimality condition.
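As a toy illustration of the center-finding idea, the sketch below fits a simple parametric (Gaussian) pressure profile to a gridded perturbation-pressure field and refines the center by an iterative local search that minimizes a sum-of-squares misfit. The profile shape, the parameters A and R, and the shrinking grid search are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cost(field, x, y, xc, yc, A=10.0, R=50.0):
    """Sum-of-squares misfit between the field and a parametric vortex
    p(r) = -A * exp(-(r/R)^2) centered at (xc, yc). A and R are assumed."""
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    model = -A * np.exp(-r2 / R ** 2)
    return np.sum((field - model) ** 2)

def find_center(field, x, y, guess, iters=20, step=10.0):
    """Iteratively refine the center by a local grid search whose step shrinks,
    mimicking an iterative least-squares fit for the optimal center."""
    xc, yc = guess
    for _ in range(iters):
        candidates = [(xc + dx, yc + dy)
                      for dx in (-step, 0.0, step)
                      for dy in (-step, 0.0, step)]
        xc, yc = min(candidates, key=lambda c: cost(field, x, y, c[0], c[1]))
        step *= 0.7  # shrink the search radius each iteration
    return xc, yc

# Synthetic field with a known vortex center at (120, 80)
x, y = np.meshgrid(np.arange(0, 200.0), np.arange(0, 200.0))
truth = -10.0 * np.exp(-(((x - 120.0) ** 2 + (y - 80.0) ** 2) / 50.0 ** 2))
xc, yc = find_center(truth, x, y, guess=(100.0, 100.0))
```

Starting from a first guess 28 grid units away, the search recovers the true center to well under one grid unit.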
Abstract
In this study, the authors numerically investigate the response of an axisymmetric tropical cyclone (TC) vortex to the vertical fluxes of momentum, heat, and moisture induced by roll vortices (rolls) in the boundary layer. To represent the vertical fluxes induced by rolls, a two-dimensional high-resolution Single-Grid Roll-Resolving Model (SRM) is embedded at multiple horizontal grid points in the mesoscale COAMPS for Tropical Cyclones (COAMPS-TC) model domain. Idealized experiments are conducted with the SRM embedded within 3 times the radius of maximum wind of an axisymmetric TC. The results indicate that the rolls induce changes in the boundary layer wind distribution and cause a moderate (approximately 15%) increase in the TC intensification rate by increasing the boundary layer convergence in the eyewall region and inducing more active eyewall convection. The numerical experiments also suggest that the roll-induced tangential momentum flux is most important in contributing to the TC intensification process, and the rolls generated at different radii (within the range considered in this study) all have positive contributions. The results are not qualitatively impacted by the initial TC vortex or the setup of the vertical diffusivity in COAMPS-TC.
Abstract
Cloud-top verification is inherently difficult because of large uncertainties in the estimates of observed cloud-top height. Misplacement of cloud top associated with transmittance through optically thin cirrus is one of the most common problems. Forward radiative models permit a direct comparison of predicted and observed radiance, but uncertainties in the vertical position of clouds remain. In this work, synthetic brightness temperatures are compared with forecast cloud-top heights so as to investigate potential errors and develop filters to remove optically thin ice clouds. Results from a statistical analysis reveal that up to 50% of the clouds with brightness temperatures as high as 280 K are actually optically thin cirrus. The filters successfully removed most of the thin ice clouds, allowing for the diagnosis of very specific errors. The results indicate a strong negative bias in midtropospheric cloud cover in the model, as well as a lack of land-based convective cumuliform clouds. The model also predicted an area of persistent stratus over the North Atlantic Ocean that was not apparent in the observations. In contrast, high cloud tops associated with deep convection were well simulated, as were mesoscale areas of enhanced trade cumulus coverage in the Sargasso Sea.
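The thin-cirrus filtering idea can be sketched as a simple consistency check: if the synthetic brightness temperature is much warmer than the temperature at the forecast cloud top, the sensor is effectively "seeing through" an optically thin cloud. The 20-K threshold and the sample values below are illustrative assumptions, not the filters used in the study.

```python
import numpy as np

def thin_cloud_mask(tb, t_cloud_top, delta=20.0):
    """Flag clouds as optically thin where the brightness temperature (K)
    exceeds the forecast cloud-top temperature by more than delta K."""
    return (tb - t_cloud_top) > delta

tb = np.array([280.0, 225.0, 270.0])      # synthetic brightness temperatures (K)
t_top = np.array([230.0, 222.0, 268.0])   # forecast cloud-top temperatures (K)
mask = thin_cloud_mask(tb, t_top)         # only the first case is flagged
```

The first column mirrors the statistic quoted in the abstract: a 280-K brightness temperature paired with a cold forecast cloud top is consistent with transmittance through thin cirrus.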
Abstract
Projections of future sea level changes are usually based on global climate models (GCMs). However, the changes in shallow coastal regions, like the marginal seas near China, cannot be fully resolved in GCMs. To improve regional sea level simulations, a high-resolution (~8 km) regional ocean model is set up for the marginal seas near China for both the historical (1994–2015) and future (2079–2100) periods under representative concentration pathways (RCPs) 4.5 and 8.5. The historical ocean simulations are evaluated at different spatiotemporal scales, and the model is then integrated for the future period, driven by projected monthly climatological climate change signals from eight GCMs individually via both surface and open boundary conditions. The downscaled ocean changes derived by comparing historical and future experiments reveal greater spatial details than those from GCMs, such as a low dynamic sea level (DSL) center of −0.15 m in the middle of the South China Sea (SCS). As a novel test, the downscaled results driven by the ensemble mean forcings are almost identical to the ensemble average results from individually downscaled cases. Forcing of the DSL change and increased cyclonic circulation in the SCS are dominated by the climate change signals from the Pacific, while the DSL change in the East China marginal seas is caused by both local atmospheric forcing and signals from the Pacific. The method of downscaling developed in this study is a useful modeling protocol for adaptation and mitigation planning for future oceanic climate changes.
Abstract
The impact of dissipative heating on tropical cyclone (TC) intensity forecasts is investigated using the U.S. Navy’s operational mesoscale model (the Coupled Ocean/Atmosphere Mesoscale Prediction System). A physically consistent method of including dissipative heating is developed based on turbulent kinetic energy dissipation to ensure energy conservation. Mean absolute forecast errors of track and surface maximum winds are calculated for eighteen 48-h simulations of 10 selected TC cases over both the Atlantic basin and the northwest Pacific. Simulation results suggest that the inclusion of dissipative heating improves surface maximum wind forecasts by 10%–20% at 15-km resolution, while it has little impact on the track forecasts. The resultant improvement from the inclusion of the dissipative heating increases to 29% for the surface maximum winds at 5-km resolution for Hurricane Isabel (2003), where dissipative heating produces an unstable layer at low levels and warms a deep layer of the troposphere. While previous studies suggested a 65 m s−1 threshold for dissipative heating to impact TC intensity, it is found here that dissipative heating affects TC intensity even when the TC is of moderate strength, with a surface maximum wind speed of 45 m s−1. Sensitivity tests reveal that there is significant nonlinear interaction between the dissipative heating from the surface friction and that from the turbulent kinetic energy dissipation in the interior atmosphere. A conceptualized description is given for the positive feedback mechanism between the two processes. The results presented here suggest that it is necessary to include both processes in a mesoscale model to better forecast the TC structure and intensity.
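The energy-conserving bookkeeping can be sketched in a few lines: kinetic energy removed by turbulent dissipation (rate ε, in m² s⁻³) reappears as heat at a rate ε/cp in the temperature tendency. The ε value below is an illustrative assumption for a strong-TC boundary layer, not a number from the study.

```python
# Dissipative heating as an energy-conserving temperature tendency:
# dT/dt = epsilon / cp, where epsilon is the TKE dissipation rate.

CP = 1004.0  # specific heat of dry air at constant pressure (J kg^-1 K^-1)

def dissipative_heating_rate(epsilon):
    """Temperature tendency (K s^-1) implied by TKE dissipation epsilon (m^2 s^-3)."""
    return epsilon / CP

# Illustrative dissipation rate of 1.0 m^2 s^-3 in a hurricane boundary layer
rate = dissipative_heating_rate(1.0)
warming_per_hour = rate * 3600.0  # roughly 3.6 K per hour
```

Even this crude estimate shows why the effect matters at high wind speeds: a few kelvin per hour of boundary layer warming is comparable to other leading terms in the heat budget.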
Abstract
This study examines a multimodel comparison of regional-scale convection-permitting ensembles including both physics and initial condition uncertainties for the probabilistic prediction of Hurricanes Sandy (2012) and Edouard (2014). The model cores examined include COAMPS-TC, HWRF, and WRF-ARW. Two stochastic physics schemes were also applied using the WRF-ARW model. Each ensemble was initialized with the same initial condition uncertainties represented by the analysis perturbations from a WRF-ARW-based real-time cycling ensemble Kalman filter. It is found that single-core ensembles were capable of producing similar ensemble statistics for track and intensity for the first 36–48 h of model integration, with biases in the ensemble mean evident at longer forecast lead times along with increased variability in spread. The ensemble spread of a multicore ensemble with members sampled from single-core ensembles was generally as large or larger than any constituent model, especially at longer lead times. Systematically varying the physics parameterizations in the WRF-ARW ensemble can alter both the forecast ensemble mean and spread to resemble the ensemble performance using a different forecast model. Compared to the control WRF-ARW experiment, the application of the stochastic kinetic energy backscattering scheme had minimal impact on the ensemble spread of track and intensity for both cases, while the use of stochastic perturbed physics tendencies increased the ensemble spread in track for Sandy and in intensity for both cases. This case study suggests that it is important to include model physics uncertainties for probabilistic TC prediction. A single-core multiphysics ensemble can capture the ensemble mean and spread forecasted by a multicore ensemble for the presented case studies.
Abstract
This study examines the dependence of tropical cyclone (TC) intensity forecast errors on track forecast errors in the Coupled Ocean–Atmosphere Mesoscale Prediction System for Tropical Cyclones (COAMPS-TC) model. Using real-time forecasts and retrospective experiments during 2015–18, verification of TC intensity errors conditioned on different 5-day track error thresholds shows that reducing the 5-day track errors by 50%–70% can help reduce the absolute intensity errors by 18%–20% in the 2018 version of the COAMPS-TC model. Such impacts of track errors on the TC intensity errors are most persistent at 4–5-day lead times in all three major ocean basins, indicating a significant control of global models on the forecast skill of the COAMPS-TC model. It is of interest to find, however, that lowering the 5-day track errors below 80 n mi (1 n mi = 1.852 km) does not reduce TC absolute intensity errors further. Instead, the 4–5-day intensity errors appear to be saturated at around 10–12 kt (1 kt ≈ 0.51 m s−1) for cases with small track errors, thus suggesting the existence of some inherent intensity errors in regional models. Additional idealized simulations under a perfect model scenario reveal that the COAMPS-TC model possesses an intrinsic intensity variation at the TC mature stage in the range of 4–5 kt, regardless of the large-scale environment. Such intrinsic intensity variability in the COAMPS-TC model highlights the importance of potential chaotic TC dynamics, rather than model deficiencies, in determining TC intensity errors at 4–5-day lead times. These results suggest a fundamental limit in the improvement of TC intensity forecasts by numerical models that one should consider in future model development and evaluation.
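The conditional verification can be sketched with synthetic errors: bin forecast cases by their 5-day track error and compare the mean absolute intensity error (MAE) of each bin. The error model below, which saturates near 10 kt once track errors fall below ~80 n mi, is an illustrative assumption that mimics the behavior described in the abstract; it is not real forecast data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
track_err = rng.uniform(20.0, 300.0, n)  # 5-day track errors (n mi), synthetic

# Synthetic intensity errors: grow with track error above ~80 n mi,
# but saturate near 10 kt below it (plus noise). Illustrative only.
intensity_err = (10.0
                 + 0.05 * np.maximum(track_err - 80.0, 0.0)
                 + rng.normal(0.0, 2.0, n))

def mae_below(threshold):
    """MAE of intensity over cases whose track error is under the threshold."""
    return float(np.abs(intensity_err[track_err < threshold]).mean())

small_track_mae = mae_below(80.0)              # saturated regime
all_cases_mae = float(np.abs(intensity_err).mean())
```

Conditioning on small track errors reduces the MAE relative to the full sample, but it cannot drive the MAE below the assumed ~10-kt floor, which is the saturation behavior the study reports.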
Abstract
This study reveals a possible cause of model bias in simulating the western Pacific subtropical high (WPSH) variability via an examination of an Atmospheric Model Intercomparison Project (AMIP) simulation produced by the atmospheric general circulation model (AGCM) developed at Taiwan’s Central Weather Bureau (CWB). During boreal summer, the model overestimates the quasi-biennial (2–3 yr) band of WPSH variability but underestimates the low-frequency (3–5 yr) band of variability. The overestimation of the quasi-biennial WPSH sensitivity is found to be due to the model’s stronger sensitivity to the central Pacific El Niño–Southern Oscillation (CP ENSO) that has a leading periodicity in the quasi-biennial band. The model underestimates the low-frequency WPSH variability because of its weaker sensitivity to the eastern Pacific (EP) ENSO that has a leading periodicity in the 3–5-yr band. These different model sensitivities are shown to be related to the relative strengths of the mean Hadley and Walker circulations simulated in the model. An overly strong Hadley circulation causes the CWB AGCM to be overly sensitive to the CP ENSO, while an overly weak Walker circulation results in a weak sensitivity to the EP ENSO. The relative strengths of the simulated mean Hadley and Walker circulations are critical to a realistic simulation of the summer WPSH variability in AGCMs. This conclusion is further supported using AMIP simulations produced by three other AGCMs, including the CanAM4, GISS-E2-R, and IPSL-CM5A-MR models.
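The comparison of quasi-biennial (2–3 yr) and low-frequency (3–5 yr) variability can be sketched with a simple FFT band filter applied to an annual index; the synthetic index below (two sinusoids at 2.5-yr and 4-yr periods) is an illustrative assumption, not a WPSH record.

```python
import numpy as np

def band_variance(x, low_yr, high_yr, dt=1.0):
    """Variance of x retained in periods between low_yr and high_yr (years),
    via a hard FFT band filter on the demeaned series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)   # cycles per year
    spec = np.fft.rfft(x)
    keep = np.zeros_like(spec)
    band = (freqs > 1.0 / high_yr) & (freqs <= 1.0 / low_yr)
    keep[band] = spec[band]
    return float(np.var(np.fft.irfft(keep, n=len(x))))

t = np.arange(120)  # 120 years of an annual index
index = np.sin(2 * np.pi * t / 2.5) + 0.5 * np.sin(2 * np.pi * t / 4.0)
qb = band_variance(index, 2.0, 3.0)   # quasi-biennial band variance (~0.5)
lf = band_variance(index, 3.0, 5.0)   # low-frequency band variance (~0.125)
```

A model that overestimates the first number and underestimates the second, relative to observations, exhibits exactly the bias pattern described for the CWB AGCM.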
Abstract
Outputs from coupled general circulation models (CGCMs) are used in examining tropical Pacific decadal variability (TPDV) and its relationship with El Niño–Southern Oscillation (ENSO). Herein TPDV is classified as either ENSO-induced TPDV (EIT) or ENSO-like TPDV (ELT), based on their correlations with a decadal modulation index of ENSO amplitude and spatial pattern. EIT is identified by the leading EOF mode of the low-pass filtered equatorial subsurface temperature anomalies and is highly correlated with the decadal ENSO modulation index. This mode is characterized by an east–west dipole structure along the equator. ELT is usually defined by the first EOF mode of subsurface temperature, of which the spatial structure is similar to ENSO. Generally, this mode is insignificantly correlated with the decadal modulation of ENSO. EIT closely interacts with the residuals induced by ENSO asymmetries, both of which show similar spatial structures. On the other hand, ELT is controlled by slowly varying ocean adjustments analogous to a recharge oscillator of ENSO. Both types of TPDV have similar spectral peaks on a decadal-to-interdecadal time scale. Interestingly, the variances of both types of TPDV depend on the strength of connection between El Niño–La Niña residuals and EIT, such that the strong two-way feedback between them enhances EIT and reduces ELT. The strength of the two-way feedback is also related to ENSO variability. The flavors of El Niño–La Niña with respect to changes in the tropical Pacific mean state tend to be well simulated when ENSO variability is larger in CGCMs. As a result, stronger ENSO variability leads to intensified interactive feedback between ENSO residuals and enhanced EIT in CGCMs.
Abstract
The ability to construct nitrate maps in the Southern Ocean (SO) from sparse observations is important for marine biogeochemistry research, as it offers a geographical estimate of biological productivity. The goal of this study is to infer the skill of constructed SO nitrate maps using varying data sampling strategies. The mapping method uses multivariate empirical orthogonal functions (MEOFs) constructed from nitrate, salinity, and potential temperature (N-S-T) fields from a biogeochemical general circulation model simulation. Synthetic N-S-T datasets are created by sampling modeled N-S-T fields in specific regions, determined either by random selection or by selecting regions over a certain threshold of nitrate temporal variances. The first 500 MEOF modes, determined by their capability to reconstruct the original N-S-T fields, are projected onto these synthetic N-S-T data to construct time-varying nitrate maps. Normalized root-mean-square errors (NRMSEs) are calculated between the constructed nitrate maps and the original modeled fields for different sampling strategies. The sampling strategy according to nitrate variances is shown to yield maps with lower NRMSEs than maps based on random sampling. A k-means cluster method that considers the N-S-T combined variances to identify key regions to insert data is most effective in reducing the mapping errors. These findings are further quantified by a series of mapping error analyses that also address the significance of data sampling density. The results provide a sampling framework to prioritize the deployment of biogeochemical Argo floats for constructing nitrate maps.
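The mapping idea can be sketched with a univariate stand-in for the study's multivariate EOFs: compute EOF modes of a full model field, fit them to sparse "observations" by least squares, reconstruct the full field, and score the map with a normalized RMSE. The field sizes, sample counts, and number of retained modes below are illustrative assumptions, far smaller than the study's 500 MEOF modes.

```python
import numpy as np

rng = np.random.default_rng(1)
ntime, nspace = 200, 50

# Synthetic low-rank "model" field: three spatial patterns with random
# time-varying amplitudes (stand-in for a biogeochemical model output).
patterns = rng.normal(size=(3, nspace))
field = rng.normal(size=(ntime, 3)) @ patterns

# EOF modes via SVD of the (time x space) anomaly matrix; retain 5 modes.
_, _, Vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
modes = Vt[:5]

# Sample a sparse subset of grid points ("float observations") and fit the
# retained modes to each time slice by least squares, then reconstruct.
obs_idx = rng.choice(nspace, size=15, replace=False)
coef, *_ = np.linalg.lstsq(modes[:, obs_idx].T, field[:, obs_idx].T, rcond=None)
recon = coef.T @ modes

# Normalized RMSE between the mapped and original fields.
nrmse = float(np.sqrt(np.mean((recon - field) ** 2)) / field.std())
```

Because the synthetic field is exactly low rank, 15 well-placed samples recover it almost perfectly; with noisy, higher-rank fields the NRMSE depends on where the samples fall, which is the sampling-strategy question the study addresses.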