Search Results
You are looking at 1–10 of 16 items for
- Author or Editor: Peter W. Thorne
- Refine by Access: All Content
Abstract
This study develops an innovative approach to homogenizing discontinuities in both the mean and the variance of global subdaily radiosonde temperature data from 1958 to 2018. First, natural temperature variations and changes are estimated using reanalyses and removed from the radiosonde data to construct monthly and daily difference series. A penalized maximal F test and an improved Kolmogorov–Smirnov test are then applied to the monthly and daily difference series to detect spurious shifts in the mean and variance, respectively. About 60% (40%) of the changepoints appear in the mean (variance), and ~56% of them are confirmed by available metadata. The changepoints display a country-dependent pattern, likely due to changes in national radiosonde networks. The mean segment length is 7.2 (14.6) years for the mean (variance)-based detection. A mean (quantile)-matching method using up to 5 years of data from two adjacent mean (variance)-based segments is used to adjust the earlier segments relative to the latest segment. The homogenized series is obtained by adding the two homogenized difference series back to the subtracted reference series. The homogenized data exhibit spatially more coherent trends and temporally more consistent variations than the raw data, and they lack the spurious tropospheric cooling over North China and Mongolia seen in several reanalyses and raw datasets. In contrast to the raw data, the homogenized data clearly show a warming maximum around 300 hPa over 30°S–30°N, consistent with model simulations. The results suggest that spurious changes are numerous and significant in the radiosonde records and that our method can greatly improve their homogeneity.
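The mean-matching step lends itself to a compact illustration. The Python sketch below aligns each earlier segment of a monthly difference series to the latest segment using up to five years of data on either side of each detected changepoint; the function name, the window handling, and the backward pass are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mean_match_adjust(diff_series, changepoints, years_window=5, steps_per_year=12):
    """Align earlier segments of a monthly difference series with the latest
    segment by matching segment means across each changepoint.

    A sketch only: window length and missing-data handling are assumptions,
    not the published algorithm's exact choices.
    """
    adjusted = np.asarray(diff_series, dtype=float).copy()
    window = years_window * steps_per_year
    # Work backward so each earlier segment is matched to data that has
    # already been aligned with the latest (reference) segment.
    for cp in sorted(changepoints, reverse=True):
        before = adjusted[max(0, cp - window):cp]
        after = adjusted[cp:cp + window]
        shift = np.nanmean(after) - np.nanmean(before)
        adjusted[:cp] += shift  # remove the step change at this changepoint
    return adjusted
```

Applying the shifts from the latest changepoint backward leaves every segment expressed relative to the most recent instrumentation, matching the adjust-to-latest convention described in the abstract.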
Abstract
Radiosonde humidity records represent the only in situ observations of tropospheric water vapor content with multidecadal length and quasi-global coverage. However, their use has been hampered by ubiquitous and large discontinuities resulting from changes to instrumentation and observing practices. Here a new approach is developed to homogenize historical records of tropospheric (up to 100 hPa) dewpoint depression (DPD), the archived radiosonde humidity parameter. Two statistical tests are used to detect changepoints, which are most apparent in histograms and occurrence frequencies of the daily DPD: a variant of the Kolmogorov–Smirnov (K–S) test for changes in distributions and the penalized maximal F test (PMFred) for mean shifts in the occurrence frequency for different bins of DPD. These tests capture most of the apparent discontinuities in the daily DPD data, with an average of 8.6 changepoints (∼1 changepoint per 5 yr) in each of the analyzed radiosonde records, which begin as early as the 1950s and end in March 2009. Before the breakpoint adjustments are applied, artificial sampling effects are first adjusted by estimating missing DPD reports for cold (T < −30°C) and dry (DPD artificially set to 30°C) conditions, using empirical relationships at each station between the anomalies of air temperature and vapor pressure derived from recent observations when DPD reports are available under these conditions. Next, the sampling-adjusted DPD is detrended separately for each of the 4–10 quantile categories and then adjusted using a quantile-matching algorithm so that the earlier segments have histograms comparable to the histogram of the latest segment. Neither the changepoint detection nor the adjustment uses a reference series, given the stability of the DPD series.
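The quantile-matching idea can be sketched compactly. Assuming NumPy and a simple equal-probability binning (the published method detrends each of the 4–10 quantile categories first, which is omitted here), a minimal version maps an earlier segment onto the empirical distribution of the latest segment:

```python
import numpy as np

def quantile_match(segment, reference, n_quantiles=10):
    """Map `segment` values onto the empirical distribution of `reference`
    so that the two histograms become comparable (illustrative sketch)."""
    probs = (np.arange(n_quantiles) + 0.5) / n_quantiles
    seg_q = np.quantile(segment, probs)    # quantiles of the earlier segment
    ref_q = np.quantile(reference, probs)  # quantiles of the latest segment
    # Piecewise-linear quantile-to-quantile mapping; values beyond the
    # outermost quantiles are clamped to the endpoints.
    return np.interp(segment, seg_q, ref_q)
```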
Using this new approach, a homogenized global, twice-daily DPD dataset (available online at www.cgd.ucar.edu/cas/catalog/) is created for climate and other applications based on the Integrated Global Radiosonde Archive (IGRA) and two other data sources. The adjusted daily DPD has much smaller and spatially more coherent trends during 1973–2008 than the raw data; it implies only small changes in relative humidity in the lower and middle troposphere. When combined with homogenized radiosonde temperature, other atmospheric humidity variables can be calculated, and these exhibit spatially more coherent trends than without the DPD homogenization. The DPD adjustment yields a different pattern of change in humidity parameters compared with the apparent trends from the raw data. The adjusted estimates show an increase in tropospheric water vapor globally.
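The conversion from archived DPD to other humidity variables is standard: with the dewpoint Td = T − DPD, vapor pressure follows from any saturation formula. A minimal sketch using the Magnus approximation (the coefficient values are one common choice, not necessarily those used for the dataset):

```python
import numpy as np

def magnus_es(t_celsius):
    """Saturation vapor pressure (hPa), Magnus approximation."""
    return 6.112 * np.exp(17.62 * t_celsius / (243.12 + t_celsius))

def humidity_from_dpd(t_celsius, dpd_celsius):
    """Vapor pressure (hPa) and relative humidity (%) from T and DPD."""
    e = magnus_es(t_celsius - dpd_celsius)  # e = es(Td), with Td = T - DPD
    rh = 100.0 * e / magnus_es(t_celsius)
    return e, rh

print(humidity_from_dpd(20.0, 5.0))  # roughly (17.0 hPa, 73% RH)
```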
Abstract
Water vapor is the most significant greenhouse gas, is a key driver of many atmospheric processes, and hence is fundamental to understanding the climate system. It is a major factor in human “heat stress,” whereby increasing humidity reduces the ability to stay cool. Until now, no truly global homogenized surface humidity dataset has existed with which to assess recent changes. The Met Office Hadley Centre and Climatic Research Unit Global Surface Humidity dataset (HadCRUH), described herein, provides a homogenized, quality-controlled, near-global 5° × 5° gridded dataset of monthly mean anomalies in surface specific and relative humidity from 1973 to 2003. It consists of land and marine data and is geographically quasi-complete over the region 60°N–40°S.
Between 1973 and 2003 surface specific humidity has increased significantly over the globe, tropics, and Northern Hemisphere. Global trends are 0.11 and 0.07 g kg⁻¹ (10 yr)⁻¹ for land and marine components, respectively. Trends are consistently larger in the tropics and in the Northern Hemisphere during summer, as expected: warmer regions exhibit larger increases in specific humidity for a given temperature change under conditions of constant relative humidity, based on the Clausius–Clapeyron equation. Relative humidity trends are not significant when averaged over the landmass of the globe, tropics, and Northern Hemisphere, although some seasonal changes are significant.
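The Clausius–Clapeyron expectation is easy to make concrete. The Python sketch below (Magnus approximation for saturation vapor pressure; the 70% relative humidity scenario and 1-K warming are illustrative assumptions) shows that the same warming adds roughly twice as much specific humidity at 25°C as at 10°C:

```python
import numpy as np

def magnus_es(t_celsius):
    """Saturation vapor pressure (hPa), Magnus approximation."""
    return 6.112 * np.exp(17.62 * t_celsius / (243.12 + t_celsius))

def specific_humidity(t_celsius, rh_percent, p_hpa=1013.25):
    """Specific humidity (g/kg) from temperature, RH, and pressure."""
    e = magnus_es(t_celsius) * rh_percent / 100.0
    return 1000.0 * 0.622 * e / (p_hpa - 0.378 * e)

# The same +1 K at constant 70% RH yields a larger moisture increase
# in the warmer (tropical) case, as Clausius-Clapeyron predicts.
for t in (10.0, 25.0):
    dq = specific_humidity(t + 1.0, 70.0) - specific_humidity(t, 70.0)
    print(f"T = {t:4.1f} C: dq = {dq:.2f} g/kg per K")
```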
A strong positive bias is apparent in marine humidity data prior to 1982, likely owing to a known change in reporting practice for dewpoint temperature at this time. Consequently, trends in both specific and relative humidity are likely underestimated over the oceans.
Historically, meteorological observations have been made for operational forecasting rather than long-term monitoring purposes, so there have been numerous changes in instrumentation and procedures. Hence, creating climate-quality datasets requires the identification, estimation, and removal of many nonclimatic biases from the historical data. Construction of a number of new tropospheric temperature climate datasets has highlighted previously unrecognized uncertainty in multidecadal temperature trends aloft. The choice of dataset can even change the sign of upper-air trends relative to those reported at the surface. Structural uncertainty introduced unintentionally through dataset construction choices is therefore important and needs to be understood and mitigated. A number of ways that this could be addressed for historical records are discussed, as is the question of how it could be reduced through future coordinated observing systems designed with long-term monitoring as a driver, enabling explicit calculation and removal of nonclimatic biases. Although upper-air temperature records are used to illustrate the arguments, it is strongly believed that the findings are applicable to all long-term climate datasets and variables. A full characterization of observational uncertainty is as vitally important as recent intensive efforts to understand climate model uncertainties if the goal of rigorously reducing the uncertainty regarding both past and future climate changes is to be achieved.
Abstract
Over much of the globe, the temporal extent of meteorological records is limited, yet a wealth of data remains in paper or image form in numerous archives. To date, little attention has been given to the role that students might play in efforts to rescue these data. Here we summarize an ambitious research-led, accredited teaching experiment in which undergraduate students successfully transcribed more than 1,300 station years of daily precipitation data and associated metadata across Ireland over the period 1860–1939. We explore i) the potential for integrating data rescue activities into the classroom, ii) the ability of students to produce reliable transcriptions, and iii) the learning outcomes for students. Data previously transcribed by Met Éireann (Ireland’s National Meteorological Service) were used as a benchmark, against which it was ascertained that the students were as accurate as the professionals. Details on the assignment, its planning and execution, and the student aids used are provided. The experience highlights the benefits that can accrue for data rescue through innovative collaboration between national meteorological services and academic institutions. At the same time, the students gained valuable learning outcomes and a firsthand understanding of the processes that underpin data rescue and analysis. The success of the project demonstrates the potential to extend data rescue in the classroom to other universities, providing both an enriched learning experience for students and a lasting legacy to the scientific community.
Abstract
The uncertainty in Extended Reconstructed SST (ERSST) version 4 (v4) is reassessed based upon 1) reconstruction uncertainties and 2) an extended exploration of parametric uncertainties. The reconstruction uncertainty (Ur) results from using a truncated set (130) of empirical orthogonal teleconnection functions (EOTs), which yields an inevitable loss of information content, primarily at the local level. Ur is assessed based upon a 32-member ensemble of ERSST.v4 analyses of the spatially complete monthly Optimum Interpolation SST product. The parametric uncertainty (Up) results from using different parameter values in quality control, bias adjustments, EOT definition, etc. Up is assessed using a 1000-member ensemble of ERSST.v4 analyses with different combinations of plausible settings of 24 identified internal parameter values. At the scale of an individual grid box, the SST uncertainty varies between 0.3° and 0.7°C and arises from both Ur and Up. On the global scale, the SST uncertainty is substantially smaller (0.03°–0.14°C) and predominantly arises from Up. The SST uncertainties are greatest in periods and locales of data sparseness in the nineteenth century and relatively small after the 1950s. The global uncertainty estimates in ERSST.v4 are broadly consistent with independent estimates arising from the Hadley Centre SST dataset version 3 (HadSST3) and Centennial Observation-Based Estimates of SST version 2 (COBE-SST2). The uncertainty in the internal parameter values used in quality control and bias adjustments can affect the SST trends over both the long-term (1901–2014) and “hiatus” (2000–14) periods.
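How a large gridbox uncertainty can coexist with a much smaller global-mean uncertainty is easy to demonstrate. The toy Python sketch below contrasts the two spreads; the grid dimensions, noise level, and especially the assumption of spatially independent errors are illustrative only (real parametric errors are correlated in space and average out far less):

```python
import numpy as np

rng = np.random.default_rng(0)
members, nlat, nlon = 1000, 36, 72
# Toy ensemble of SST anomaly analyses (degC), independent errors per box.
ensemble = rng.normal(0.0, 0.5, size=(members, nlat, nlon))

# Gridbox-scale uncertainty: member spread at each box (~0.5 degC here).
gridbox_sd = ensemble.std(axis=0)

# Global-mean uncertainty: area-weight each member's mean, then take the spread.
lats = np.linspace(-87.5, 87.5, nlat)
weights = np.cos(np.deg2rad(lats))
global_means = np.average(ensemble.mean(axis=2), axis=1, weights=weights)
print(gridbox_sd.mean(), global_means.std())  # local spread >> global-mean spread
```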
Abstract
The monthly global 2° × 2° Extended Reconstructed Sea Surface Temperature (ERSST) has been revised and updated from version 4 to version 5. This update incorporates ICOADS release 3.0 (R3.0), a decade of near-surface data from Argo floats, and a new estimate of centennial sea ice from HadISST2. A number of choices in aspects of quality control, bias adjustment, and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatiotemporal variations and better representation of high-latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same. Progressive experiments have been undertaken to highlight the effects of each change in data source and analysis technique upon the final product. The reconstructed SST is systematically decreased by 0.077°C as the reference data source is switched from ship SST in ERSSTv4 to modern buoy SST in ERSSTv5. Furthermore, high-latitude SSTs are decreased by 0.1°–0.2°C by using sea ice concentration from HadISST2 rather than HadISST1. Changes arising from the remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over the 1930s–40s, when observation instruments changed rapidly. Both long-term (1900–2015) and short-term (2000–15) SST trends in ERSSTv5 remain significant, as in ERSSTv4.
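The change of reference from ships to buoys amounts to removing the collocated ship-minus-buoy offset from ship reports. A schematic sketch follows; the constant-offset form is a simplification (the operational adjustment resolves the offset in space and time), and the function name is illustrative:

```python
import numpy as np

def adjust_ships_to_buoys(ship_sst, buoy_sst_collocated):
    """Express ship SSTs relative to the buoy reference by removing the
    mean collocated ship-minus-buoy difference (schematic only)."""
    offset = np.nanmean(ship_sst - buoy_sst_collocated)
    # Globally, switching the reference in this direction lowers the
    # analysis, consistent with the ~0.077 degC decrease reported above.
    return ship_sst - offset
```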
Abstract
The monthly Extended Reconstructed Sea Surface Temperature (ERSST) dataset, available on global 2° × 2° grids, has been revised herein to version 4 (v4) from v3b. Major revisions include updated and substantially more complete input data from the International Comprehensive Ocean–Atmosphere Data Set (ICOADS) release 2.5; revised empirical orthogonal teleconnections (EOTs) and EOT acceptance criterion; updated sea surface temperature (SST) quality control procedures; revised SST anomaly (SSTA) evaluation methods; updated bias adjustments of ship SSTs using the Hadley Centre Nighttime Marine Air Temperature dataset version 2 (HadNMAT2); and buoy SST bias adjustment not previously made in v3b.
Tests show that the impacts of the revisions to the ship SST bias adjustment in ERSST.v4 are dominant among all revisions and updates. The effect is to make SST 0.1°–0.2°C cooler north of 30°S but 0.1°–0.2°C warmer south of 30°S in ERSST.v4 than in ERSST.v3b before 1940. In comparison with the Met Office SST product [the Hadley Centre Sea Surface Temperature dataset, version 3 (HadSST3)], the ship SST bias adjustment in ERSST.v4 is 0.1°–0.2°C cooler in the tropics but 0.1°–0.2°C warmer in the midlatitude oceans, both before 1940 and from 1945 to 1970. Comparisons highlight differences in long-term SST trends and in SSTA variations at decadal time scales among ERSST.v4, ERSST.v3b, HadSST3, and Centennial Observation-Based Estimates of SST version 2 (COBE-SST2), which are largely associated with the differences in bias adjustments among these SST products. The tests also show that, when compared with v3b, SSTAs in ERSST.v4 represent El Niño/La Niña behavior substantially better when observations are sparse before 1940. Comparisons indicate that SSTs in ERSST.v4 are as close to satellite-based observations as those in other similar SST analyses.
Abstract
Described herein is the parametric and structural uncertainty quantification for the monthly Extended Reconstructed Sea Surface Temperature (ERSST) version 4 (v4). A Monte Carlo ensemble approach was adopted to characterize parametric uncertainty because initial experiments indicated the existence of significant nonlinear interactions. Globally, the resulting ensemble exhibits a wider uncertainty range before 1900, as well as an uncertainty maximum around World War II. Changes at smaller spatial scales in many regions, or for important features such as Niño-3.4 variability, are found to be dominated by particular parameter choices.
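The Monte Carlo approach can be sketched as follows. The parameter names, ranges, and the stand-in analysis function below are hypothetical placeholders for the actual internal parameters and the full reconstruction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical internal parameters with plausible ranges (illustrative only).
PARAMS = {
    "qc_outlier_sd": (3.0, 5.0),     # quality-control rejection threshold
    "ship_bias_scale": (0.8, 1.2),   # scaling of the ship bias adjustment
    "eot_acceptance": (0.05, 0.15),  # EOT acceptance criterion
}

def toy_reconstruction(p):
    """Stand-in for the full ERSST analysis; returns a global-mean value."""
    return 0.5 + 0.1 * (p["ship_bias_scale"] - 1.0) \
               - 0.02 * (p["qc_outlier_sd"] - 4.0)

# Draw many plausible parameter combinations and rerun the analysis per draw.
draws = [{k: rng.uniform(lo, hi) for k, (lo, hi) in PARAMS.items()}
         for _ in range(1000)]
ensemble = np.array([toy_reconstruction(p) for p in draws])
print(ensemble.std())  # parametric uncertainty = spread of the ensemble
```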
Substantial differences in parametric uncertainty estimates are found between ERSST.v4 and the independently derived Hadley Centre SST version 3 (HadSST3) product. The largest uncertainties are over the mid- and high latitudes in ERSST.v4 but in the tropics in HadSST3. Overall, in comparison with HadSST3, ERSST.v4 has larger parametric uncertainties at smaller spatial and shorter time scales and smaller parametric uncertainties at longer time scales, which likely reflects the different sources of uncertainty quantified in the respective parametric analyses. ERSST.v4 exhibits a stronger globally averaged warming trend than HadSST3 during 1910–2012, but with a smaller parametric uncertainty. These global-mean trend estimates and their uncertainties marginally overlap.
Several additional SST datasets are used to infer the structural uncertainty inherent in SST estimates. For the global mean, the structural uncertainty, estimated as the spread between available SST products, is more often than not larger than the parametric uncertainty in ERSST.v4. Neither parametric nor structural uncertainties call into question that on the global-mean level and centennial time scale, SSTs have warmed notably.
Abstract
The global tropical cyclone (TC) intensity record, even in modern times, is uncertain because the vast majority of storms are only observed remotely. Forecasters determine the maximum wind speed using a patchwork of sporadic observations and remotely sensed data. A popular tool that aids forecasters is the Dvorak technique—a procedural system that estimates the maximum wind based on cloud features in IR and/or visible satellite imagery. Inherently, the application of the Dvorak procedure is open to subjectivity. Heterogeneities are also introduced into the historical record with the evolution of operational procedures, personnel, and observing platforms. These uncertainties impede our ability to identify the relationship between tropical cyclone intensities and, for example, recent climate change.
A global reanalysis of TC intensity by experts is difficult because of the large number of storms. We show that it is possible to effectively reanalyze the global record using crowdsourcing. By recasting the Dvorak technique as a series of simple questions that amateurs (“citizen scientists”) can answer on a website, we are working toward a new TC dataset that resolves intensity discrepancies in several recent TCs. Preliminary results suggest that the performance of human classifiers in some cases exceeds that of an automated Dvorak technique applied to the same data for times when a storm is transitioning into a hurricane.
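Reducing many amateur answers to a single estimate is the crux of the crowdsourcing step. A minimal sketch using a simple majority vote with an agreement score (the project's actual weighting of classifiers may differ, and the scene labels here are hypothetical examples):

```python
from collections import Counter

def aggregate_answers(answers):
    """Reduce citizen-scientist answers for one satellite image to a
    consensus label and an agreement fraction (deliberately simple rule)."""
    votes = Counter(answers)
    label, count = votes.most_common(1)[0]
    return label, count / len(answers)

print(aggregate_answers(["curved band", "curved band", "shear", "eye"]))
# -> ('curved band', 0.5)
```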