Search Results

You are looking at 1–10 of 15 items for:

  • Model performance/evaluation
  • Modern Era Retrospective-Analysis for Research and Applications (MERRA)
  • All content
Mark Decker, Michael A. Brunke, Zhuo Wang, Koichi Sakaguchi, Xubin Zeng, and Michael G. Bosilovich

directly with near-surface observations over various climate regimes and regions. Specifically, the atmospheric quantities used to force land surface models together with the fluxes of energy and moisture from the land surface to the atmosphere in the various reanalyses must be objectively evaluated. This study utilizes flux tower observations from 33 different locations in the Northern Hemisphere to evaluate the various reanalysis products from the different centers. The flux network (FLUXNET) is

Full access
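
The Decker et al. entry above compares reanalysis near-surface forcing and surface fluxes against FLUXNET tower observations. A minimal sketch of that kind of site-level comparison, assuming the reanalysis and tower series have already been co-located on a common time axis (the variable names and synthetic data are illustrative, not from the paper):

```python
import numpy as np

def site_metrics(reanalysis, tower):
    """Bias, RMSE, and correlation of a reanalysis series against tower observations.

    Both inputs are 1-D arrays on a common time axis; NaNs mark missing tower data.
    """
    valid = ~np.isnan(reanalysis) & ~np.isnan(tower)
    r, t = reanalysis[valid], tower[valid]
    bias = np.mean(r - t)
    rmse = np.sqrt(np.mean((r - t) ** 2))
    corr = np.corrcoef(r, t)[0, 1]
    return bias, rmse, corr

# Example: evaluate latent heat flux at one hypothetical tower site.
rng = np.random.default_rng(0)
tower_lh = 120 + 30 * rng.standard_normal(1000)            # synthetic tower observations (W m-2)
merra_lh = tower_lh + 15 + 20 * rng.standard_normal(1000)  # synthetic reanalysis with a positive bias
print(site_metrics(merra_lh, tower_lh))
```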
Behnjamin J. Zib, Xiquan Dong, Baike Xi, and Aaron Kennedy

be randomly overlapped (Xu and Randall 1996). A more accurate system description and evaluation of R2 is documented in Kanamitsu et al. (2002). 3) 20CR reanalysis NOAA's 20CR dataset uses a new version of the NCEP atmosphere–land model along with an Ensemble Kalman Filter data assimilation technique (Whitaker and Hamill 2002) that assimilates only surface pressure reports and observations while using the observed Hadley Centre Sea Ice and SST dataset (HadISST) sea surface temperatures and

Full access
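
The Zib et al. entry notes that 20CR assimilates only surface pressure observations with an Ensemble Kalman Filter. As a rough illustration of the analysis step such a filter performs, here is a minimal perturbed-observation EnKF update for a single scalar observation; Whitaker and Hamill (2002) actually describe a square-root variant that avoids perturbing observations, and all dimensions and error values below are made up:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """Perturbed-observation EnKF analysis update for one scalar observation.

    ensemble    : (n_members, n_state) array of prior states
    obs         : observed value (e.g., a surface pressure report)
    obs_err_var : observation error variance
    H           : (n_state,) linear observation operator
    """
    n_members, _ = ensemble.shape
    hx = ensemble @ H                      # model-equivalent observations, (n_members,)
    x_mean = ensemble.mean(axis=0)
    hx_mean = hx.mean()
    # Ensemble covariances between the state and the observed quantity
    pxh = (ensemble - x_mean).T @ (hx - hx_mean) / (n_members - 1)   # (n_state,)
    phh = np.var(hx, ddof=1)
    gain = pxh / (phh + obs_err_var)                                 # Kalman gain, (n_state,)
    # Perturb the observation for each member (stochastic EnKF)
    perturbed = obs + np.sqrt(obs_err_var) * np.random.default_rng(1).standard_normal(n_members)
    return ensemble + np.outer(perturbed - hx, gain)

# Tiny example: 20 members, 3-variable state, observe the first variable.
rng = np.random.default_rng(0)
prior = rng.standard_normal((20, 3))
H = np.array([1.0, 0.0, 0.0])
posterior = enkf_update(prior, obs=0.5, obs_err_var=0.1, H=H)
print(posterior.mean(axis=0))
```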
Derek J. Posselt, Andrew R. Jongeward, Chuan-Yuan Hsu, and Gerald L. Potter

, and it lends itself naturally to a process-based statistical examination of both the properties of the system and the evaluation of large-scale models. In particular, collection of the sample of objects into a database of deep convective statistics makes it more straightforward to use in the analysis of general circulation model (GCM) output, for which it is not as important that each object be reproduced in its observed location but instead that the distribution of cloud and radiative properties

Full access
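
The Posselt et al. entry argues that GCM evaluation against a database of convective objects should target the distribution of object properties rather than object-by-object matches. A minimal sketch of one such distribution-level comparison, using synthetic object properties and a simple histogram-overlap score (purely illustrative, not the statistic used in the paper):

```python
import numpy as np

def histogram_overlap(obs_values, model_values, bins):
    """Fraction of probability mass shared by the observed and modeled distributions."""
    obs_counts, _ = np.histogram(obs_values, bins=bins)
    mod_counts, _ = np.histogram(model_values, bins=bins)
    obs_pdf = obs_counts / obs_counts.sum()
    mod_pdf = mod_counts / mod_counts.sum()
    return np.minimum(obs_pdf, mod_pdf).sum()

# Synthetic example: cloud-top temperature of deep convective objects (K).
rng = np.random.default_rng(0)
observed = rng.normal(205, 8, size=5000)
modeled = rng.normal(210, 10, size=5000)   # model slightly warm and too spread out
bins = np.arange(180, 241, 2)
print(f"distribution overlap: {histogram_overlap(observed, modeled, bins):.2f}")
```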
Benjamin A. Schenkel and Robert E. Hart

, 4DVAR allows for the influence of an observation to be more strongly controlled by model dynamics (Thépaut et al. 1996). As a result, 4DVAR should yield improved performance in observation-deficient regions in which TCs are typically found (Thépaut 2006; Whitaker et al. 2009; Dee et al. 2011). A second unique property is the use of TC wind profile retrievals within the JRA-25 for all best-track TCs with a maximum 10-m wind speed (VMAX10m) greater than or equal to 34 kt. TC wind profile

Full access
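
The Schenkel and Hart entry mentions that JRA-25 assimilates TC wind profile retrievals only for best-track fixes with a 10-m maximum wind of at least 34 kt. A small sketch of that selection rule applied to a toy table of best-track records (the record fields are hypothetical, not the JRA-25 format):

```python
# Select best-track fixes that would qualify for synthetic wind-profile assimilation
# under a VMAX10m >= 34 kt rule. Record fields are illustrative only.
best_track = [
    {"storm": "ALPHA", "time": "2005-08-01 00Z", "vmax10m_kt": 30.0},
    {"storm": "ALPHA", "time": "2005-08-01 06Z", "vmax10m_kt": 38.0},
    {"storm": "BRAVO", "time": "2005-09-10 12Z", "vmax10m_kt": 65.0},
]

THRESHOLD_KT = 34.0
qualifying = [fix for fix in best_track if fix["vmax10m_kt"] >= THRESHOLD_KT]
for fix in qualifying:
    print(fix["storm"], fix["time"], fix["vmax10m_kt"])
```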
Kyle F. Itterly and Patrick C. Taylor

timing and intensity unrealistically force the surface water and energy budget, leading to errors in surface runoff and evaporation (Del Genio and Wu 2010). Decker et al. (2012) found significant errors in the diurnal cycle of surface turbulent fluxes in reanalysis models as well. Slingo et al. (2003) evaluated the diurnal cycle of the Hadley Centre Coupled Model, version 3 (HadCM3) GCM over the tropics and found that the largest differences between the GCM and observations occur over the Maritime

Full access
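
The Itterly and Taylor entry concerns errors in the diurnal cycle of precipitation and surface fluxes in reanalyses. A minimal sketch of the composite diurnal cycle such an evaluation starts from, built here from synthetic hourly series in which the model peaks a few hours too early (all data are illustrative):

```python
import numpy as np

def diurnal_composite(hourly_series, hours_per_day=24):
    """Mean diurnal cycle from an hourly series whose length is a whole number of days."""
    days = hourly_series.reshape(-1, hours_per_day)
    return days.mean(axis=0)

# Synthetic example: observed precipitation peaks in late afternoon,
# the model peaks too early (the kind of bias the entry above describes).
hours = np.arange(24)
obs_day = np.maximum(0, np.sin((hours - 9) * np.pi / 12))   # peak near 15 LT
mod_day = np.maximum(0, np.sin((hours - 6) * np.pi / 12))   # peak near 12 LT
rng = np.random.default_rng(0)
obs = np.tile(obs_day, 30) + 0.05 * rng.random(24 * 30)
mod = np.tile(mod_day, 30) + 0.05 * rng.random(24 * 30)

obs_cycle = diurnal_composite(obs)
mod_cycle = diurnal_composite(mod)
print("observed peak hour:", int(obs_cycle.argmax()))
print("model peak hour:   ", int(mod_cycle.argmax()))
```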
Michele M. Rienecker, Max J. Suarez, Ronald Gelaro, Ricardo Todling, Julio Bacmeister, Emily Liu, Michael G. Bosilovich, Siegfried D. Schubert, Lawrence Takacs, Gi-Kong Kim, Stephen Bloom, Junye Chen, Douglas Collins, Austin Conaty, Arlindo da Silva, Wei Gu, Joanna Joiner, Randal D. Koster, Robert Lucchesi, Andrea Molod, Tommy Owens, Steven Pawson, Philip Pegion, Christopher R. Redder, Rolf Reichle, Franklin R. Robertson, Albert G. Ruddick, Meta Sienkiewicz, and Jack Woollen

Observing System (GEOS) atmospheric model and data assimilation system (DAS). The system, the input data streams and their sources, and the observation and background error statistics are documented fully in Rienecker et al. (2008, henceforth R2008). Unlike the atmospheric reanalyses from centers focused on operational weather prediction, the GEOS atmospheric DAS was developed with NASA instrument teams and the science community as the primary customers. Hence, the performance drivers of the GEOS

Full access
Yonghong Yi, John S. Kimball, Lucas A. Jones, Rolf H. Reichle, and Kyle C. McDonald

sensing retrievals, and in situ measurements distributed around the globe. The objectives of this study were to 1) evaluate the uncertainty and relative accuracy of MERRA against in situ observations and the previous GEOS-4 analysis for selected land surface meteorological variables, and 2) examine relationships and accuracy differences between MERRA estimates and independent satellite microwave remote sensing products to clarify the potential value of the satellite observations for model assimilation

Full access
Rolf H. Reichle, Randal D. Koster, Gabriëlle J. M. De Lannoy, Barton A. Forman, Qing Liu, Sarith P. P. Mahanama, and Ally Touré

MERRA-Land estimates (discussed below) are based on the offline replay configuration by construction, and thus comparing them to the MERRA estimates generated offline under replay mode allows a more careful isolation of the impacts of the precipitation corrections and model parameter revisions on the accuracy of the product. b. Evaluation data 1) Precipitation observations We use the Global Precipitation Climatology Project (GPCP) precipitation pentad (5-day) product version 2.1 (Huffman et al. 2009; Xie et al

Full access
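
The Reichle et al. entry evaluates precipitation against the GPCP pentad (5-day) product, so a daily series being compared has to be accumulated to the same 5-day totals. A minimal sketch of that aggregation, assuming a daily series whose length is a whole number of pentads (names and data are illustrative):

```python
import numpy as np

def to_pentads(daily_precip):
    """Accumulate a daily precipitation series (mm/day) into 5-day pentad totals (mm)."""
    if daily_precip.size % 5:
        raise ValueError("series length must be a whole number of pentads")
    return daily_precip.reshape(-1, 5).sum(axis=1)

# Example: 30 days of synthetic daily precipitation -> 6 pentad totals.
rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.5, scale=4.0, size=30)   # mm/day
print(to_pentads(daily))
```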
Michael G. Bosilovich, Franklin R. Robertson, and Junye Chen

global reanalyses. In this study, we evaluate the global energy and water cycles of a new reanalysis, the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA) (Rienecker et al. 2011). MERRA data are derived from the Goddard Earth Observing System version 5 (GEOS-5) data assimilation system, which is a combination of a NASA general circulation model (Rienecker et al. 2007) and the gridpoint statistical interpolation (GSI) analysis developed in collaboration with the

Full access
Michael A. Brunke, Zhuo Wang, Xubin Zeng, Michael Bosilovich, and Chung-Lin Shie

category C LH and SH fluxes, respectively. c. Evaluation of the bulk variables To understand the total biases in Figs. 3–7 and Table 1 and the general levels of performance in Table 2, the contributions to the biases are examined by first looking at the accuracy of the bulk variables (wind speed, near-surface air temperature, SST, and near-surface air humidity) that are used as input into the bulk algorithms. First, Fig. 8a compares the biases in 2-m specific humidity from all of the products

Full access
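
The Brunke et al. entry traces flux biases back to the bulk input variables (wind speed, near-surface air temperature, SST, and near-surface humidity). A minimal sketch of the standard bulk aerodynamic formulas those variables feed, with fixed neutral exchange coefficients instead of the stability-dependent ones real algorithms iterate on (all constants are nominal values, not taken from the paper):

```python
RHO_AIR = 1.22      # air density (kg m-3)
CP_AIR = 1004.0     # specific heat of air (J kg-1 K-1)
LV = 2.5e6          # latent heat of vaporization (J kg-1)
CH = 1.1e-3         # bulk transfer coefficient for sensible heat (nominal, neutral)
CE = 1.2e-3         # bulk transfer coefficient for moisture (nominal, neutral)

def bulk_fluxes(wind_speed, sst, t_air, q_surface, q_air):
    """Sensible and latent heat fluxes (W m-2, positive upward) from bulk variables.

    wind_speed       : near-surface wind speed (m s-1)
    sst, t_air       : sea surface and near-surface air temperature (K)
    q_surface, q_air : surface saturation and 2-m specific humidity (kg kg-1)
    """
    sh = RHO_AIR * CP_AIR * CH * wind_speed * (sst - t_air)
    lh = RHO_AIR * LV * CE * wind_speed * (q_surface - q_air)
    return sh, lh

# Example: a +0.5 g/kg bias in 2-m specific humidity propagates directly into the LH flux.
sh0, lh0 = bulk_fluxes(7.0, 300.0, 298.5, 0.022, 0.016)
sh1, lh1 = bulk_fluxes(7.0, 300.0, 298.5, 0.022, 0.0165)
print(f"SH = {sh0:.1f} W m-2, LH = {lh0:.1f} W m-2, "
      f"LH change from humidity bias = {lh1 - lh0:.1f} W m-2")
```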