Search Results

You are looking at 1–10 of 25,541 items for:

Thomas M. Smith, Phillip A. Arkin, John J. Bates, and George J. Huffman

developed land regions, and satellite estimates are needed for any globally complete analysis of precipitation. Besides incomplete sampling, gauge observations may have biases, such as those due to blowing snow over high latitudes (e.g., Groisman et al. 1991; Huffman et al. 1997; Bogdanova et al. 2002). Near-global precipitation estimates from satellite-based observations are available beginning in the 1970s. Because satellite observations are dense compared to gauges, satellite-based observations

Full access
Mathieu Vrac and Petra Friederichs

represent volume-integrated dynamical variables. Moreover, simulated data are associated with potential biases in the sense that their statistical distribution differs from that of the observations. This is partly because global climate models (GCMs) have too low a spatial resolution to be employed directly in most impact models (e.g., Meehl 2007; Christensen et al. 2008). Regional climate models (RCMs) reduce some of the biases but not those unrelated to spatial resolution (Maraun
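A distributional mismatch of this kind is commonly corrected with empirical quantile mapping, which maps each simulated value onto the observed distribution at the same quantile. A minimal sketch, assuming Gaussian toy data and invented variable names (not the authors' actual setup):

```python
import numpy as np

def quantile_map(model, obs):
    """Empirical quantile mapping: replace each model value with the
    observed value at the same empirical quantile."""
    # Rank each model value within the model sample to get its quantile,
    # then read off the observed distribution at that quantile.
    q = np.searchsorted(np.sort(model), model, side="right") / len(model)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

# Toy example: the "model" runs 2 K too warm with inflated variance.
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 5000)
model = rng.normal(17.0, 4.0, 5000)
corrected = quantile_map(model, obs)
print(round(corrected.mean(), 1), round(corrected.std(), 1))
```

After mapping, the corrected sample reproduces the observed mean and spread; more elaborate schemes interpolate or extrapolate the mapping when applying it to a future period.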

Full access
Murry L. Salby, Patrick J. McBride, and Patrick F. Callaghan

diurnal cycle of convection. Those intrinsic features of convection prevent the construction of synoptic maps from an individual platform (Salby 1989), while introducing a bias into the time-mean structure (Gruber and Krueger 1984; Hartmann et al. 1991). The above limitations stem from the convective pattern being sampled too infrequently in space and time to represent global behavior properly. The limitations are circumvented by high-resolution global cloud imagery (GCI), which is composited

Full access
Martin Aleksandrov Ivanov, Jürg Luterbacher, and Sven Kotlarski

more realistic parameterizations at fine scales allows RCMs to better reproduce the local mechanisms that shape regional climates (Laprise 2008; Rummukainen 2010). Both GCM and RCM fields can exhibit substantial systematic differences from gridded observational data (Christensen et al. 2008; Sillmann et al. 2013; Kotlarski et al. 2014). Such discrepancies between simulated and observed fields are commonly referred to as biases (Pan et al. 2001). Model output is recommended to be

Open access
Lijing Cheng, Jiang Zhu, Franco Reseghetti, and Qingping Liu

conductivity–temperature–depth (CTD), some discrepancies have become increasingly evident. For example, in a report describing several intercomparisons involving a total of about 2000 XBT profiles, Anderson (1980) pointed out several problems in XBT measurements that yield a general positive bias in XBT probe data. Unfortunately, that paper, with its important but widely ignored list of troubles and errors in the XBT system, remained unknown until recent years. In the 1980s and early 1990s

Full access
Simon A. Good

and Deep Blue probes, which both reach 760 m, and the T5 and T10 instruments, which measure to 1830 and 200 m, respectively (Lockheed Martin Corporation 2005). The most common manufacturer is Sippican Inc. (now Lockheed Martin Sippican), followed by Tsurumi Seiki (TSK) and then Sparton. Unfortunately, metadata describing the type and manufacturer are unavailable for about 50% of XBTs (Ishii and Kimoto 2009). A variety of potential sources of bias have been identified for XBT data, and these can

Full access
Christian Kerkhoff, Hans R. Künsch, and Christoph Schär

due to the chosen future emission pathway; (ii) the model uncertainty, which encompasses structural and parametric contributions; and (iii) internal climate variability (Hawkins and Sutton 2009). Multimodel ensembles aim to disentangle these sources of uncertainty. Still, combining information from multimodel ensembles is challenging. Since our models are only imperfect representations of the truth, each ensemble member entails systematic errors, or biases, that are apparent when comparing model

Full access
Keith F. Brill

a thorough understanding of the measures being computed from the joint distributions of the probabilities involved. The general approach given in this paper is made specific to directly address the issue of “hedging” as described by Marzban (1998) and others. Here, hedging refers to forecasters exploiting the fact that performance measures may be more favorable when the bias differs from one. Therefore, expanding or contracting a forecast area or adjusting a threshold
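The incentive can be made concrete with the standard 2 × 2 contingency-table measures. A sketch with invented counts (the frequency-bias and threat-score definitions are standard; the numbers are purely illustrative):

```python
def frequency_bias(hits, misses, false_alarms):
    """Frequency bias B = (hits + false alarms) / (hits + misses):
    forecast event count over observed event count; B = 1 is unbiased."""
    return (hits + false_alarms) / (hits + misses)

def threat_score(hits, misses, false_alarms):
    """Threat score (critical success index): hits divided by all
    cases where the event was forecast and/or observed."""
    return hits / (hits + misses + false_alarms)

# Baseline forecast: B = 0.8, TS = 0.5.
print(frequency_bias(30, 20, 10), threat_score(30, 20, 10))
# Expanded forecast area: some misses become hits at the cost of extra
# false alarms. TS improves even though B moves past one.
print(frequency_bias(40, 10, 20), threat_score(40, 10, 20))
```

The second forecast scores better on TS (about 0.57 versus 0.50) while its bias rises to 1.2, which is exactly the hedging incentive the excerpt describes.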

Full access
Keith F. Brill and Fedor Mesinger

1. Introduction This note extends the analysis of Brill (2009) to include new performance measures, bias-adjusted threat and equitable threat scores, derived by Mesinger (2008). The work of Mesinger (2008) was motivated by heuristic evidence of frequently misleading bias sensitivities for the threat score (TS) and equitable threat score (ETS), such as that shown by Baldwin and Kain (2006) using a geometrical model. Specifically, this sensitivity was demonstrated to undermine the presumably
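For reference, the unadjusted scores can be written down directly from the 2 × 2 contingency table. A sketch using the standard TS and ETS definitions (Mesinger's bias-adjusted variants are not reproduced here, and the counts are invented):

```python
def scores(hits, misses, false_alarms, correct_negatives):
    """Standard threat score (TS), equitable threat score (ETS), and
    frequency bias from a 2 x 2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    # Hits expected by chance, given the forecast and observed frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ts = hits / (hits + misses + false_alarms)
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ts, ets, bias

ts, ets, bias = scores(50, 50, 50, 850)
print(round(ts, 3), round(ets, 3), bias)   # 0.333 0.286 1.0
```

ETS discounts the random-hit term, but both TS and ETS still respond to changes in bias, which is the sensitivity the bias-adjusted versions are designed to remove.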

Full access
Raquel Lorente-Plazas and Joshua P. Hacker

1. Introduction In statistics, the term bias is broadly used when errors are systematic instead of random (i.e., when the mean of the error distribution is not zero). Data assimilation (DA) algorithms in wide use today rely on the basic assumptions of unbiased observations and models. In those systems, observations with assumed random errors are used to correct the random errors in a model-forecast background estimate. The underlying theories allow for known biases to be corrected prior to
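That distinction between a nonzero error mean (bias) and random scatter can be illustrated with synthetic data. A sketch in which the 0.5 offset and 0.3 noise level are invented numbers, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, 10000)
# Observations carry a systematic offset (the bias) plus random noise.
obs = truth + 0.5 + rng.normal(0.0, 0.3, 10000)

errors = obs - truth
bias = errors.mean()        # systematic component, close to 0.5
noise = errors.std(ddof=1)  # random component, close to 0.3
print(round(bias, 2), round(noise, 2))
```

A DA system built on the unbiased-error assumption would treat the whole 0.5 offset as random error; in practice such a known bias has to be estimated and removed before assimilation.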

Full access