The Impact of Dropsonde Data on the Performance of the NCEP Global Forecast System during the 2020 Atmospheric Rivers Observing Campaign. Part I: Precipitation

Stephen J. Lord, University Corporation for Atmospheric Research/CPAESS, Boulder, Colorado, and NOAA/NCEP Environmental Modeling Center, College Park, Maryland

Xingren Wu, I. M. Systems Group, Inc., Rockville, Maryland, and NOAA/NCEP Environmental Modeling Center, College Park, Maryland

Vijay Tallapragada, NOAA/NCEP Environmental Modeling Center, College Park, Maryland (https://orcid.org/0000-0003-4255-897X)

F. M. Ralph, CW3E, Scripps Institution of Oceanography, University of California, San Diego, San Diego, California

Abstract

The impact of assimilating dropsonde data from the 2020 Atmospheric River (AR) Reconnaissance (ARR) field campaign on operational numerical precipitation forecasts was assessed. Two experiments were executed for the period from 24 January to 18 March 2020 using the NCEP Global Forecast System, version 15 (GFSv15), with a four-dimensional hybrid ensemble–variational (4DEnVar) data assimilation system. The control run (CTRL) used all the routinely assimilated data and included ARR dropsonde data, whereas the denial run (DENY) excluded the dropsonde data. There were 17 intensive observing periods (IOPs) totaling 46 Air Force C-130 and 16 NOAA G-IV missions to deploy dropsondes over targeted regions with potential for downstream high-impact weather associated with the ARs. Data from a total of 628 dropsondes were assimilated in the CTRL. The dropsonde data impact on precipitation forecasts over U.S. West Coast domains is largely positive, especially for day-5 lead time, and appears driven by different model variables on a case-by-case basis. These results suggest that data gaps associated with ARs can be addressed with targeted ARR field campaigns providing vital observations needed for improving U.S. West Coast precipitation forecasts.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Vijay Tallapragada, Vijay.Tallapragada@noaa.gov


1. Introduction

Atmospheric rivers (ARs), generally described as bands of maximum horizontal water vapor flux, typically 200 km in extent and concentrated primarily in the lower troposphere, are usually part of a midlatitude cyclone complex whose high moisture content originates from the central tropical Pacific. ARs have an important impact on weather over the U.S. West Coast and Canada. They are often associated with heavy precipitation and flooding, as well as dangerous mudslides, but can also provide beneficial water supply to affected areas. ARs are associated with 84% of flood damage in the western United States (Ralph et al. 2020).

The integrated vapor transport (IVT; Ralph et al. 2018) is a fundamental derived quantity for characterizing the ARs as the primary atmospheric feature leading to major precipitation events. The IVT (kg m−1 s−1) is defined as follows:
$$\mathrm{IVT}=\mathrm{Mag}\left\{\left[\int_{\mathrm{sfc}}^{225}(uq)\,\frac{dp}{g}\right],\ \left[\int_{\mathrm{sfc}}^{225}(\upsilon q)\,\frac{dp}{g}\right]\right\},\qquad(1)$$
where sfc is the surface pressure; 225 hPa is the upper integration limit; Mag is the magnitude operator; u and υ are, respectively, the horizontal zonal and meridional velocities (m s−1); q is the specific humidity (g g−1); and g is the acceleration of gravity. The IVT depends on vertically integrated wind and q values but is episodic in nature, so its statistics can be associated with precipitation maxima in AR-impacted situations.
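
As a concrete illustration of Eq. (1), the sketch below evaluates the IVT magnitude for a single sounding-like profile with a trapezoidal approximation of the vertical integral; the function name, the example profile values, and the use of NumPy are illustrative assumptions rather than the operational diagnostic.

```python
import numpy as np

def ivt_magnitude(p_hpa, u, v, q, p_top=225.0):
    """Approximate IVT (kg m-1 s-1) for one profile via Eq. (1).

    p_hpa : pressure levels (hPa), ordered surface -> top
    u, v  : zonal and meridional wind (m s-1) on those levels
    q     : specific humidity (g g-1, i.e., dimensionless) on those levels
    """
    g = 9.81                                  # acceleration of gravity (m s-2)
    mask = p_hpa >= p_top                     # integrate from the surface to 225 hPa
    p_pa = p_hpa[mask] * 100.0                # hPa -> Pa
    # Pressure decreases upward, so negate the trapezoidal integral of (uq) dp / g.
    ivt_u = -np.trapz(u[mask] * q[mask], p_pa) / g
    ivt_v = -np.trapz(v[mask] * q[mask], p_pa) / g
    return np.hypot(ivt_u, ivt_v)             # the "Mag" operator

# Hypothetical moist low-level-jet profile (values are illustrative only)
p = np.array([1000.0, 925.0, 850.0, 700.0, 500.0, 300.0, 225.0])
u = np.array([10.0, 18.0, 22.0, 20.0, 15.0, 10.0, 8.0])
v = np.array([8.0, 15.0, 18.0, 14.0, 8.0, 4.0, 2.0])
q = np.array([10.0, 9.0, 7.0, 4.0, 1.5, 0.3, 0.1]) * 1e-3
print(f"IVT = {ivt_magnitude(p, u, v, q):.0f} kg m-1 s-1")
```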

Forecasting AR precipitation impacts on typical watersheds, with an extent of 100 km or less, presents many difficulties, among them the comparable horizontal extents of the IVT maxima and the target watershed areas, coastal topographic forcing on the U.S. and Canadian west coasts, and the inherent challenges of accurate precipitation prediction by operational forecast and data assimilation systems.

In response to these weather prediction challenges, real-time AR Reconnaissance (ARR) observing campaigns (OCs; Ralph et al. 2020) have been designed and executed in a manner similar to those initiated for tropical cyclones. Supplementary dropsonde data gathered during tropical cyclone reconnaissance missions and ingested into operational numerical weather prediction (NWP) systems have, in most cases, resulted in reduced tropical cyclone track forecast errors. In particular, data gathered in the hurricane environment have provided an improved depiction of hurricane steering winds and of the large-scale troughs and ridges that strongly influence hurricane tracks (e.g., Aberson et al. 2010; Majumdar et al. 2013; Brennan et al. 2015). Moreover, for dropsondes deployed in the vicinity of extratropical and tropical cyclones in midlatitudes during the North Atlantic Waveguide and Downstream Impact Experiment (NAWDEX), Schäfler et al. (2018) and Schindler et al. (2020) have shown positive downstream impact.

In the case of ARs, aircraft are deployed to sample the jet-like features throughout the troposphere and regions of enhanced moisture emanating primarily from the tropics. For each AR intensive observing period (IOP), dropsonde-equipped aircraft from NOAA and the U.S. Air Force were deployed in the atmospheric region of interest in the neighborhood of ARs prior to landfall. Targeted AR dropsonde observations focused on sampling atmospheric structures, including temperature, moisture, and wind, in regions identified by ensemble-based and adjoint sensitivity analyses, with emphasis on predictions of U.S. West Coast precipitation. Performing observation targeting based on independent sensitivity analysis methods using adjoints (Reynolds et al. 2019; Doyle et al. 2019) and global model ensembles (Torn and Hakim 2008; Elless et al. 2021) increased confidence in the robustness of the targeted regions of sensitivity. The NCEP ensemble sensitivity tools use data from the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS), and Canadian Meteorological Center (CMC) ensemble forecasts. They analyze the linear relationships between a forecast metric and initial or prior forecast-hour state variables to identify the areas that are most sensitive to uncertainty growth at the verification time. In most cases both the adjoint and ensemble sensitivity analyses highlight forecast sensitivity along the AR core, its edges, and in the warm conveyor belt, which provided complementary information on locations where additional observations may help to improve the forecast (Ralph et al. 2020).
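
A minimal sketch of the ensemble-sensitivity idea (in the spirit of Torn and Hakim 2008) is given below: the sensitivity at each grid point is the regression of a scalar forecast metric onto an initial-condition field across ensemble members. The array shapes, the synthetic data, and the choice of metric are illustrative assumptions, not the operational NCEP targeting tool.

```python
import numpy as np

def ensemble_sensitivity(x0, j_metric):
    """Regress a scalar forecast metric J onto an initial-condition field.

    x0       : (n_members, ny, nx) initial-state field (e.g., 850-hPa moisture)
    j_metric : (n_members,) forecast metric (e.g., area-mean day-5 precipitation)
    Returns the regression slope dJ/dx0 at each grid point.
    """
    x_anom = x0 - x0.mean(axis=0)                 # ensemble perturbations
    j_anom = j_metric - j_metric.mean()
    cov = np.einsum("m,mij->ij", j_anom, x_anom) / (len(j_metric) - 1)
    var = x_anom.var(axis=0, ddof=1)
    return np.where(var > 0, cov / var, 0.0)

# Hypothetical 80-member ensemble on a small grid
rng = np.random.default_rng(0)
x0 = rng.normal(size=(80, 40, 60))
# Metric constructed so that it depends on one sub-region of the initial field
j = x0[:, 10:20, 30:40].mean(axis=(1, 2)) + 0.1 * rng.normal(size=80)
sens = ensemble_sensitivity(x0, j)
iy, ix = np.unravel_index(np.abs(sens).argmax(), sens.shape)
print(f"most sensitive grid point: ({iy}, {ix})")
```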

Dropsonde soundings from each aircraft provided temperature, wind, pressure, and moisture data that were transmitted to international weather prediction centers in real time for ingest into their respective operational data assimilation and forecast systems. The resulting forecasts have the benefit of these data as well as previous soundings (through the cycled data assimilation) and would be expected to be of improved predictive quality for users concerned with ARs and their regional impacts along the U.S. West Coast.

This paper examines the impact of AR dropsonde observations taken during 17 AR IOPs during January–March 2020 (AR2020) on forecast precipitation. The NCEP operational Global Forecast System (GFS), consisting of a forecast model and a cycled data assimilation system, is executed with and without the observations and provides the raw impact data. The impact is measured in terms of forecast reduced error for precipitation over U.S. West Coast domains. A companion paper (Lord et al. 2022b, manuscript submitted to Wea. Forecasting, hereafter Part II) reports on a similar analysis, but for other model-predicted variables (mean sea level pressure, geopotential height, wind, and moisture).

Section 2 briefly reviews AR OCs prior to 2020 and section 3 details the methodology of the data impact experiments. In section 4, the first 2020 AR IOP (at 0000 UTC 24 January 2020) and verifying (observed) precipitation over important AR2020 OC verification domains are described. Section 5 gives the impact results for precipitation over these verification domains. Appendix A provides details on AR2020 IOPs and deployed dropsondes, and appendix B provides some additional details on precipitation impacts. Further details on AR 2020 IOPs are given in Lord et al. (2022a).

2. Review of prior AR OCs

AR Recon programs have been developed since 2014 to provide aircraft-based sounding observations and other field data, such as deployed buoys, to global operational weather prediction centers and the research community (Ralph et al. 2014). Ralph et al. (2020) summarize the most recent AR missions from 2016 to 2019. Participating aircraft have been the U.S. Air Force C-130 turboprop (with a flight ceiling of approximately 450 hPa, a range of 3300 km, and endurance exceeding 10.5 h) and the NOAA Gulfstream IV (G-IV) jet (with a flight ceiling of approximately 150 hPa, a range of 6600 km, and endurance of about 8.75 h). Each aircraft type is equipped with dropsondes that can be released at designated intervals along the flight track to sample temperature, moisture, wind, and pressure from flight level to the surface, and each has performed similar duties in tropical cyclone reconnaissance, as noted earlier. Ingest of these data into operational NWP systems, with expected improvement to real-time forecast accuracy, has been evaluated by Stone et al. (2020), Lavers et al. (2018, 2020), and Zheng et al. (2021a,b).

Hamill et al. (2013) studied the impact of assimilating targeted observations from the 2011 Winter Storms Reconnaissance (WSR) Program using parallel cycles of ECMWF’s data assimilation and deterministic forecasts that either included or excluded the targeted observations along with the rest of the regularly assimilated data. They found that the 2011 WSR results do not support the hypothesis that forecasts with the assimilated dropsondes are statistically significantly improved over those without them in the localized verification region. They noted several possible reasons for the lack of impact, including improvements to the observing systems and to the data assimilation and forecast systems, and incomplete dropsonde sampling of the initially sensitive area due to limitations on aircraft range and other flight restrictions. In addition, they reported that the ensemble transform Kalman filter (ETKF) targeting technique used for the WSR experiment was imperfect and inconsistent with the operational data assimilation scheme used in their study. Nevertheless, the work reported here adopts a similar experimental methodology, but with the NCEP GFS, as described below.

3. Methodology

a. Data assimilation and forecast model

The experimental methodology is that of a standard NWP data impact experiment. A cycled data assimilation and forecast system is executed from the beginning of the observing period to its end, plus several more days to provide verification of the last set of observations. One run is executed with the additional AR observations, and a second run is made without them. Diagnostic error postprocessing software is applied to each run, and differences in verification statistics are generated and evaluated.

The forecast model and data assimilation system used in the impact experiments evaluated in this paper is GFS version 15 (GFSv15), which became operational on 4 March 2020, near the end of AR2020. As such, this GFS was an upgrade over the operational version used during most of the AR2020 IOPs and provides a consistent version of the assimilation and model throughout both the control (CTRL) and denial (DENY) experiments.

The GFSv15 model (Yang and Tallapragada 2018) has a horizontal resolution of 13 km and 64 vertical levels extending up to 0.2 hPa. The GFDL finite-volume cubed-sphere (FV3) dynamical core (Lin and Rood 1997; Lin 2004; Putman and Lin 2007; Harris and Lin 2013; Harris et al. 2020a,b) is the basis of GFSv15; that dynamical core and a suite of physical parameterizations comprise the GFS model. The upgraded GFSv15 physical parameterization package includes replacement of the Zhao–Carr microphysics with the more advanced GFDL microphysics (Zhou et al. 2019), an updated parameterization of ozone photochemistry with additional production and loss terms (McCormack et al. 2006), a newly introduced parameterization of middle atmospheric water vapor photochemistry (McCormack et al. 2008), a revised bare soil evaporation scheme to reduce a dry and warm bias, and a modified convection scheme to reduce excessive cloud-top cooling.

The Global Data Assimilation System (GDAS) is a four-dimensional hybrid ensemble–variational (4DEnVar) data assimilation system (Kleist and Ide 2015b). The ensemble system has 80 members at a resolution of 25 km. The hybrid algorithm combines uncertainty estimates from the ensemble with a static background error variance that is derived from model data and is constant in time but spatially varying over the globe.
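
Schematically, and only as an illustration of the hybrid idea rather than the exact operational GSI formulation, the effective background error covariance can be viewed as a weighted blend of the static and localized ensemble estimates,

$$\mathbf{B}_{\mathrm{hyb}}=\beta_{s}\,\mathbf{B}_{\mathrm{static}}+\beta_{e}\,\bigl(\mathbf{C}\circ\mathbf{P}^{e}\bigr),$$

where $\mathbf{P}^{e}$ is the sample covariance estimated from the 80 ensemble members, $\mathbf{C}$ is a localization matrix applied through the Schur (element-wise) product, and the weights $\beta_{s}$ and $\beta_{e}$ (commonly constrained to sum to one) set the relative contributions of the static and ensemble parts.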

The operational GDAS observations include hyperspectral polar-orbiting and geostationary sounder and imager radiances, radiosonde soundings, GPS radio-occultation soundings, atmospheric motion vectors derived from geostationary imagers, buoy and ship observations, and land-based surface observations (Kleist et al. 2009; Kleist and Ide 2015a,b). The GDAS is cycled 4 times daily for data centered at 0000, 0600, 1200, and 1800 UTC. The observation window is ±3 h surrounding each cycle time. A 9-h forecast from the previous cycle is used as a background field in the hybrid assimilation scheme and a variational quality control algorithm is used to down-weight suspicious observations.

The experimental period began at 0000 UTC 24 January 2020 and continued until 0000 UTC 18 March, covering 55 days, of which 17 days have IOP dropsonde observations. Since the final IOP was at 0000 UTC 11 March, forecasts from both CTRL and DENY experiments out to 168 h (7 days) can be verified against their own analyses. The vast majority of dropsonde launches occurred surrounding the 0000 UTC observation window, so that sondes launched during the 0600 and 1800 UTC observation windows will not be explicitly referenced (Table A1).

b. Precipitation observations

Precipitation observations are critical for verifying AR impact. Over land, the standard, operational Climatologically Calibrated Precipitation Analysis (CCPA) version 4 is used (Hou et al. 2014). The CCPA is a regression-based merging of the Climate Prediction Center Unified Global Daily Gauge Analysis and the Environmental Modeling Center’s Stage-IV multisensor precipitation product (Lin and Mitchell 2005). The CCPA product is available twice daily (0000 and 1200 UTC) and is a 24-h accumulated quantity over the CONUS on a 0.125° latitude–longitude (∼10 km at 40°N) grid. Over the ocean, the Climate Prediction Center Morphing technique (CMORPH; Joyce et al. 2004), a satellite-based product, is used. CMORPH uses Level 2 precipitation rate retrievals from passive microwave instruments aboard low-Earth orbiting satellite platforms and infrared brightness temperatures from geostationary platforms (https://www.ncei.noaa.gov/pub/data/sds/cdr/CDRs/Precipitation-CMORPH/AlgorithmDescription_01B-23.pdf). CMORPH is defined quasi-globally (60°S–60°N) on a 0.25° latitude–longitude grid every 30 min. The CMORPH data are accumulated over 24 h at 0000 and 1200 UTC to match the CCPA product. No attempt is made to reconcile the CCPA and CMORPH products at the coastline; the CMORPH product can be biased and is not as reliable over land as CCPA. Precipitation dates are identified by the end of their accumulation; viz. a map labeled “1200 UTC 27 January” shows the 24-h accumulation (mm) ending on that date.
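
As an illustration of how the half-hourly CMORPH rates are matched to the twice-daily, 24-h CCPA accumulations, the sketch below sums 48 half-hour intervals ending at an analysis time; the array layout, grid size, and synthetic rates are assumptions made for the example only.

```python
import numpy as np

def accumulate_24h(rates_mm_per_hr, ending_index):
    """Sum half-hourly precipitation rates into a 24-h accumulation (mm).

    rates_mm_per_hr : (n_times, ny, nx) half-hourly rain rates (mm h-1)
    ending_index    : index of the half-hour slot ending the 24-h window
                      (e.g., the slot ending at 0000 or 1200 UTC)
    """
    window = rates_mm_per_hr[ending_index - 48:ending_index]   # 48 slots = 24 h
    return (window * 0.5).sum(axis=0)                          # mm h-1 x 0.5 h per slot

# Hypothetical half-hourly rates for 3 days over a small 0.25-deg subdomain
rates = np.random.default_rng(1).gamma(0.2, 1.0, size=(144, 80, 120))
accum = accumulate_24h(rates, ending_index=96)    # 24-h window ending at the 96th slot
print(f"subdomain-max 24-h accumulation: {accum.max():.1f} mm")
```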

c. Experimental design

Two GFS experiments were executed, each consisting of a 6-hourly GDAS cycle over the experimental period and forecasts out to 168 h generated from initial conditions at 0000 and 1200 UTC daily through 0000 UTC 11 March 2020, the last IOP date. The control experiment (“CTRL”) assimilated all dropsonde observations received operationally by NCEP and the second experiment (“DENY”) did not assimilate any dropsondes. Output precipitation data are on the GFS Gaussian grid (∼13 km). As such, the CTRL output is very similar to the operational GFS output, with much the same error patterns at all scales. Routine operational precipitation statistics, “full field” comparisons of CTRL and DENY precipitation fields, and CTRL/DENY (interpolated) differences with CCPA data, are presented to assess the impact of the AR OC data for each 2020 IOP and the entire OC.

The 17 IOP cases represent approximately 30% of the total cases (55) over the experimental period. Thus, statistical mean values over all verifications can dilute the impact of the IOP cases alone. The approach taken here is to examine mean statistics over all 55 initial conditions, but to also examine impacts over the 17 IOP cases separately. Similarly, for geographical extent, the observation impacts over North America are likely to be small, having been diluted by a large areal extent compared to the area covered by and impacted by the observations. Again, the approach taken here is to briefly describe impact over North America, but to focus on the impact over a much smaller area, the northeast Pacific, and U.S. West Coast, which is also most consistent with the goals of the AR Program. Table 1 summarizes the variety of statistical results calculated over appropriate verifying domains (see also Fig. 1) and presented in this paper.

Table 1. Summary of the time and space domains used in calculating precipitation verification statistics. Appendix A contains the valid dates for each IOP. Descriptions are abbreviated (TBS: time-mean threat and bias statistics for the OC; FDS: full field and difference statistics for area mean and maximum values; QCD: qualitative case description; and CS: case study). Spatial domains are shown in Fig. 1. Case study verifications are calculated on domains 4 and 5, which are extended versions of domains 2 and 3, respectively.

Fig. 1. Spatial domains 1–5 for precipitation and case study verifications (Table 1). Case study domains 4 (PNNC_x) and 5 (SCAN_x) are indicated by dashed lines.

d. Estimating and evaluating data impacts

Since dropsonde data are ingested into a continuously cycled data assimilation system, typically with four cycles per day, impact can occur at the cycle with assimilated AR observations, due to an improved initial atmospheric state at the assimilation time, or it may occur at a later data assimilation cycle, due to the spread of observed information throughout the model domain as the continuous cycling operates over time, even though no new AR observations are assimilated. In the first case, one would expect improvement in the short-range forecasts (12–72 h). In the latter case, a forecast improvement may occur during any subsequent assimilation cycle when dropsondes are not assimilated. Because of the continuous nature of a cycled data assimilation system, it is likely that both impacts are realized, but “direct impact” from dropsonde data at the initialization (analysis) time is more easily identifiable than “indirect impact” from previously assimilated data. Separating direct and indirect impacts is extremely difficult. While direct impacts are more likely to be positive, indirect impacts from remote areas may be negative and may overwhelm direct impacts.

The magnitude of data impact is, however, extremely difficult to predict. Due to the possibility of fast-growing, nonlinear errors in the forecast model, any perturbation from dropsonde data can result in a large (positive or negative) impact, even though many/most impacts are very small. Sampling the atmosphere over a limited geographical area with time-separated observations based on phenomenological considerations has substantial merit but does not ensure success due to indirect impacts as noted above. Nevertheless, while the question of observation impact is somewhat murky, for the purposes of this study, we will adopt a simplified strategy by defining direct and indirect impact as above.

4. The AR2020 observing campaign and IOPs

The AR2020 OC took place over the northeast Pacific and Gulf of Alaska from 24 January to 11 March 2020. Seventeen IOPs were conducted using the NOAA G-IV and two C-130 turboprop aircraft from the Air Force 53rd Weather Reconnaissance Squadron’s (AFRES) Hurricane Hunter group, as described above. Dropsondes were deployed during each IOP, and the data were transmitted in real time to operational weather prediction centers, including NCEP. Dropsonde temperature and moisture sensor accuracy is approximately that of the radiosonde data taken routinely worldwide over land areas and scattered islands by operational national weather services; dropsonde wind errors are also equivalent to those of radiosondes. All IOPs begin at the 0000 UTC cycle in 2020, so their dates are identified by calendar day and month (e.g., 5 February). Additional details on the 17 AR2020 IOPs are in appendix A.

The IOPs were initiated based on satellite-based evidence of an existing AR and real-time operational forecasts predicting their future impacts over the western United States. To make precipitation verification most relevant to the impacted geographical area, three special domains were used for precipitation (Table 1). The first domain covers the Pacific Northwest (Washington and Oregon) and Northern California (PNNC). The second domain covers Southern California, Arizona, and New Mexico (SCAN), and the third (WEST) domain covers the West Coast region and encompasses both the PNNC and SCAN domains. The observed CCPA domain-mean and maximum 24-h accumulated precipitation (mm) within the WEST, PNNC, and SCAN domains (Fig. 1 and Table 1) show extreme values within 1–2 days following (or during) each IOP (Figs. 2 and 3). Maximum area-average precipitation in this time window covering all IOPs is 2–4.5 mm for the WEST domain, up to 10 mm for the PNNC domain, and 7 mm for the SCAN domain (Figs. 2a–c, respectively). Maximum precipitation accumulations in the WEST domain exceed 200 mm and occur in the PNNC domain during both early and late IOPs (Figs. 3a,b) and in the SCAN domain during late February and March IOPs (Fig. 3c). Since the WEST domain covers both the PNNC and SCAN domains, comparison of each subdomain with the WEST domain shows that together they cover all major precipitation events over the OC.

Fig. 2. Area-mean, 24-h accumulated observed CCPA precipitation (mm) at 0000 and 1200 UTC from 0000 UTC 25 Jan to 1200 UTC 16 Mar over (a) the WEST domain, (b) the PNNC domain, and (c) the SCAN domain. See Table 1 and Fig. 1 for more details.

Fig. 3. As in Fig. 2, but for domain-maximum precipitation.

A short summary of the synoptic conditions, including the IVT, is presented below for IOP-1 (0000 UTC 24 January). Lord et al. (2022a) present similar information for the remaining IOPs. Critical forecast ranges (h) for each IOP cover the dates of possible high-impact precipitation events subsequent to each IOP in the PNNC and SCAN domains (Table 2). Note that the forecast ranges do not include 12 h, since that range does not contain all of the 24-h accumulated precipitation beginning at the IOP date. Local precipitation domains (Fig. 1) are the PNNC_x and SCAN_x domains defined in Table 1. IOP-11 is not included for precipitation evaluation because of the lack of available land-based observed precipitation analyses over the primary landfall area in British Columbia (BC), Canada.

Table 2. Summary of the critical forecasts for land-based precipitation associated with each IOP. IOPs are numbered as in Table A1, and dropsonde observations are deployed at 0000 UTC. Critical forecast ranges (h), 24, 36, etc., are at 12-h intervals and are valid at the date in column 4, but do not include 12 h, since the latter does not contain all of the 24-h accumulated precipitation beginning at the IOP date. Beginning precipitation dates (column 3) are measured from IOP observation date (column 1) and are 24 h or longer. The forecast range for IOP-9 is less than 24 h and is omitted from the table. IOP modifiers “p” and “s” signify an event in the PNNC/SCAN domains over the same forecast range. IOP 11 is not included for precipitation evaluation due to lack of available land-based precipitation at landfall in BC.

IOP-1 begins at 0000 UTC 24 January. At this time, a precipitation event is already impacting the Oregon–Washington coast, with an associated IVT maximum and southwest flow along the coast (Fig. 4). A major AR feature is centered at 35°N, 148°W, between a low pressure system to the northwest and a high centered at 23°N, 135°W. Two AFRES flights sampled this AR, which continues to propagate northeastward before its leading edge influences the Oregon, Washington, and Northern California area (Fig. 5) and produces peak precipitation at 1200 UTC 26 January (Fig. 3c). Further details on IOP-1, including a brief description of the major precipitation event subsequent to IOP-1, are given in Lord et al. (2022a).

Fig. 4. Magnitude of the CTRL vertically integrated specific humidity flux (kg m−1 s−1) for 0000 UTC 24 Jan 2020. Contours of mean sea level pressure (hPa) and 850-hPa streamlines are included. The sounding locations for 37 dropsondes from two AFRES aircraft for the 0000 UTC data assimilation cycle are shown in blue.

Fig. 5. As in Fig. 4, but valid at 0000 UTC 26 Jan 2020, 48 h after IOP-1.

5. Precipitation impacts

Positive precipitation forecast impacts are probably the most important desired result from the AR Program. Model forecast precipitation itself, however, is a complex function of the forecast divergent wind, available moisture, thermodynamic vertical stability, and the various physical parameterizations in the model itself. The AR IOPs were designed to improve precipitation forecasts over the WEST, PNNC, and SCAN domains. Generally, IOPs 1–9 and 11–14 (Table 2) were focused on future events in the PNNC domain, and IOPs 10, 12, and 14–17 were focused on the SCAN domain. However, some major events in these domains were not addressed by any IOP observations (e.g., 14–16 March in the PNNC domain and 25 January–16 February in the SCAN domain), so any impacts (positive or negative) at these dates and in these domains would be indirectly due to the data assimilation system carrying prior IOP observed information forward in time. After the forecast precipitation statistics used here are presented, the time series of observed and model forecast precipitation over the WEST, PNNC, and SCAN domains are described, and regional scores for the entire OC, together with direct impact statistics for relevant IOPs over the PNNC and SCAN domains, are presented. This section concludes with a description of forecast errors for three IOP cases and a correlation analysis that suggests relationships between improvements or degradations in the CTRL forecast fields and the precipitation impacts.

a. Forecast precipitation statistics

Precipitation magnitude, duration, and timing are all critical criteria for measuring forecast success. Operationally standard “threat” and “bias” scores (Wilks 2006; Jolliffe and Stephenson 2003) include contributions from each of these criteria and are presented first. Additional statistical verification for precipitation forecasts captures improvement or degradation of CTRL relative to DENY for “full field” (FF) quantities, such as domain-wide maximum and domain-averaged values, relative to their CCPA counterparts. Additionally, similar statistics for gridded “difference fields” (DF; CTRL − CCPA and DENY − CCPA) are calculated. All precipitation statistics were generated for the WEST, PNNC, and SCAN domains, with verifications for 24–120-h forecasts at both 0000 and 1200 UTC from the entire set of AR2020 OC initial conditions (0000 UTC 24 January–1200 UTC 11 March), a total of 96 cases. In addition, these statistics were sampled for the critical forecasts of each IOP (Table 2).
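
For reference, the threat (critical success index) and bias scores for one 24-h accumulation threshold follow directly from the forecast/observed contingency table, as in the sketch below; the synthetic fields and the 10 mm day−1 threshold are illustrative only.

```python
import numpy as np

def threat_and_bias(fcst, obs, threshold_mm):
    """Threat (CSI) and bias scores for one precipitation threshold.

    Threat = hits / (hits + misses + false alarms)
    Bias   = (hits + false alarms) / (hits + misses)
    """
    f, o = fcst >= threshold_mm, obs >= threshold_mm
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    threat = hits / max(hits + misses + false_alarms, 1)
    bias = (hits + false_alarms) / max(hits + misses, 1)
    return threat, bias

# Illustrative CTRL/DENY comparison against a synthetic 24-h "observed" field
rng = np.random.default_rng(2)
obs = rng.gamma(0.5, 8.0, size=(80, 100))
ctrl = np.clip(obs + rng.normal(0.0, 4.0, obs.shape), 0.0, None)
deny = np.clip(obs + rng.normal(0.0, 6.0, obs.shape), 0.0, None)
for name, fcst in (("CTRL", ctrl), ("DENY", deny)):
    ts, b = threat_and_bias(fcst, obs, threshold_mm=10.0)
    print(f"{name}: threat={ts:.3f}  bias={b:.2f}")
```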

Basic FF verified quantities are the maximum and domain-averaged precipitation for each model on its Gaussian grid within each domain, FFpmaxCTRL, FFpmaxDENY, FFpavgCTRL, and FFpavgDENY, respectively, and the corresponding observed (CCPA) maximum and average amounts on its grid, FFpmaxCCPA, and FFpavgCCPA. All quantities are verified for 24–120-h forecasts against 24-h accumulated CCPA ending at either 0000 or 1200 UTC on the verifying date as appropriate. The first verification date is 0000 UTC 25 January for the 24-h forecast and the final date is 1200 UTC 16 March for the 120-h forecast. The FFpavg and FFpmax derived statistics (S) are the absolute values of differences:
$$S_{\mathrm{CTRL}}=\left|\mathrm{FFpmax}_{\mathrm{CTRL}}-\mathrm{FFpmax}_{\mathrm{CCPA}}\right|,$$
and are used to calculate an error improvement (I) given by
$$I=100\times\left(S_{\mathrm{DENY}}-S_{\mathrm{CTRL}}\right)/S_{\mathrm{DENY}},\qquad(2)$$

for both maximum and domain-averaged precipitation; positive values of I indicate that CTRL maximum/average precipitation values are closer to observed values than for DENY.

The basic DF statistics are calculated after interpolating the FF model precipitation output to the CCPA grid and taking the model-minus-observed difference over the domain. The minimum, maximum and standard deviation of the difference (DFpmin, DFpmax, and DFpstd, respectively) for each model run are then compared and the improvement calculated as, for example:
$$I_{\mathrm{DFpmin}}=100\times\left(\mathrm{DFpmin}_{\mathrm{DENY}}-\mathrm{DFpmin}_{\mathrm{CTRL}}\right)/\left|\mathrm{DFpmin}_{\mathrm{DENY}}\right|.$$
Improvements of CTRL over DENY for these statistics indicate a better domain-wide geographical distribution of the DF wherever the CCPA is defined.

Forecast statistics are calculated for the 24–120-h range and for each domain by averaging over all valid dates for each forecast hour for the indirect statistics, and over each IOP for the direct statistics. Since these statistics are noisy, reflecting both naturally noisy precipitation distributions and a very small number of verifications, results are best described categorically as improvements (I > Ithr), degradations (I < −Ithr), or neutral (−Ithr ≤ I ≤ Ithr), where Ithr = 0.5%. To prevent unrealistic calculations from distorting the statistics, a filter that removes cases with minimal CCPA values is applied to all calculations; further details on the calculation of the precipitation statistics are given in appendix B.
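
A compact sketch of the full-field improvement statistic and its categorical interpretation is given below; the verification fields are synthetic, and the minimal-CCPA filter value is a placeholder, since the actual filter is specified in appendix B.

```python
import numpy as np

ITHR = 0.5  # percent; categorical threshold used in the text

def improvement(s_deny, s_ctrl):
    """I = 100 x (S_DENY - S_CTRL) / S_DENY; positive means CTRL is closer to CCPA."""
    return 100.0 * (s_deny - s_ctrl) / s_deny

def categorize(i_pct):
    if i_pct > ITHR:
        return "improved"
    if i_pct < -ITHR:
        return "degraded"
    return "neutral"

def iffpmax(ctrl, deny, ccpa, min_ccpa=1.0):
    """IFFpmax for one verification; min_ccpa is an illustrative filter value."""
    if ccpa.max() < min_ccpa:                   # skip negligible-precipitation cases
        return None
    s_ctrl = abs(ctrl.max() - ccpa.max())       # S_CTRL
    s_deny = abs(deny.max() - ccpa.max())       # S_DENY
    return improvement(s_deny, s_ctrl)

# Single illustrative verification on a common domain
rng = np.random.default_rng(3)
ccpa = rng.gamma(0.6, 10.0, size=(60, 90))
ctrl = np.clip(0.95 * ccpa + rng.normal(0, 2, ccpa.shape), 0, None)
deny = np.clip(0.85 * ccpa + rng.normal(0, 2, ccpa.shape), 0, None)
i = iffpmax(ctrl, deny, ccpa)
print(f"IFFpmax = {i:.1f}% ({categorize(i)})")
```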

b. Time series of observed and model forecast precipitation

The time series of 24-, 72-, and 120-h CTRL forecast precipitation, averaged over the PNNC domain, shows good correspondence with observed values over the OC (Fig. 6a), but with a 10% increase in mean value at 120 h compared to 24 h, while the 72- and 24-h means are equivalent. At individual verification times, however, there are patches of strong overprediction (0000 UTC 26 January–0000 UTC 27 January, 0000 UTC 16 February–0000 UTC 17 February, 0000 UTC 2 March–0000 UTC 3 March, and 0000 UTC 9 March–1200 UTC 9 March) and other patches of notable underprediction (1200 UTC 8 February–1200 UTC 9 February and 0000 UTC 14 March–1200 UTC 16 March). Overprediction is worse at 120 h than shorter forecasts and underprediction is trendless across all forecast hours.

Fig. 6. As in Fig. 2 over (a) the PNNC and (b) SCAN domains for area-average precipitation, but with added CTRL area-average, 24-h accumulated, forecast precipitation (mm) for the 24-h (dot), 72-h (square), and 120-h (X) forecasts at both 0000 and 1200 UTC for the AR2020 OC.

The CTRL forecast average precipitation over the SCAN domain (Fig. 6b), however, shows a 16% decreasing trend relative to the CCPA from 24 to 120 h. The number and magnitude of the overpredictions are somewhat reduced compared to the PNNC domain, but there are overprediction periods from 0000 UTC 12 February to 0000 UTC 13 February, from 1200 UTC 5 March to 1200 UTC 6 March, and at various times from 1200 UTC 12 March to 1200 UTC 15 March for some forecast hours but not others. Because of the opposite trends in time- and domain-averaged precipitation in the PNNC and SCAN domains, the WEST domain, which encompasses both subdomains, exhibits fewer positive outliers, and the forecast trends for the subregions are hidden.

For domain-maximum precipitation, the CTRL has low biases of −15.7%, −17.5%, and −11.1% at the 24-, 72-, and 120-h forecasts, respectively, over the PNNC domain (Fig. 7a). Undoubtedly, some of this bias is model-resolution dependent, but there is at least one period, 0000 UTC 1 February–0000 UTC 2 February, when the model maxima for all forecast hours exceed the observed CCPA, and several others when at least one forecast hour does also (e.g., 0000 UTC 7 February–0000 UTC 8 February, 0000 UTC 14 February–1200 UTC 17 February, and 0000 UTC 9 March–1200 UTC 13 March).

Fig. 7. As in Fig. 6, but for domain-maximum precipitation (mm).

Over the SCAN domain, the maximum precipitation amounts (Fig. 7b) exceed the observed in fewer cases than for the PNNC, but some exceptions (i.e., false alarms) occur for low precipitation periods, from 0000 UTC 30 January to 0000 UTC 4 February, for example. The CTRL maximum precipitation biases are more negative for the SCAN domain than for PNNC: −35.7%, −33.5%, and −42.3% for 24-, 72-, and 120-h forecasts, respectively.

In a typical AR landfalling scenario with high precipitation (>50 mm day−1), the IVT maximum begins to make landfall 12–24 h in advance of the maximum rainfall (Fig. 8). Examples are IOP-1 at 36 h in the PNNC domain (when the IVT makes landfall at 36 h and the maximum precipitation begins at 48 h) and IOP-10 at both 36 and 72 h in the SCAN domain, when the IVT also leads by 12 h. However, as described in the clarifying notes to Fig. 8 (Table 3), a small number of heavy precipitation scenarios during AR2020 occur without an accompanying, landfalling east Pacific IVT, or in conjunction with other atmospheric phenomena. Therefore, three different cases of precipitation impacts are examined in more detail to place the statistical analyses in a greater perspective.
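
The “IVT instance” criterion used in Fig. 8, i.e., an IVT contour of 300 kg m−1 s−1 lying within the verification domain at an analysis or forecast verification time, can be checked with a few lines, as sketched below; the grid, the synthetic IVT field, and the domain corners are illustrative assumptions.

```python
import numpy as np

def ivt_instance(ivt, lats, lons, domain, threshold=300.0):
    """Return True if IVT >= threshold (kg m-1 s-1) anywhere inside a lat/lon box.

    domain : (lat_min, lat_max, lon_min, lon_max) in degrees (lon in 0-360)
    """
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    lat_min, lat_max, lon_min, lon_max = domain
    inside = ((lat2d >= lat_min) & (lat2d <= lat_max) &
              (lon2d >= lon_min) & (lon2d <= lon_max))
    return bool(np.any(ivt[inside] >= threshold))

# Synthetic IVT field on a 0.25-deg grid and an approximate PNNC-like box (bounds assumed)
lats = np.arange(20.0, 60.25, 0.25)
lons = np.arange(200.0, 260.25, 0.25)
ivt = np.random.default_rng(4).gamma(2.0, 60.0, size=(lats.size, lons.size))
pnnc_like = (41.0, 49.0, 236.0, 242.0)
print("IVT instance in domain:", ivt_instance(ivt, lats, lons, pnnc_like))
```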

Fig. 8. Observed CCPA domain-maximum precipitation instances of 50 mm or greater (green) for each AR2020 IOP over (a) PNNC and (b) SCAN domains and IVT instances diagnosed from the ECMO verifying analysis (red) for the analysis (0) and each forecast verification hour following the IOP date. An IVT instance occurs when a contour of 300 kg m−1 s−1 is within the PNNC/SCAN domain at the initial/forecast verification time. IOPs not meeting these criteria are not shown. Notes of clarification for unique situations are added for both precipitation and IVT instances in Table 3.

Table 3. Notes of clarification relevant to Fig. 8 for the PNNC and SCAN domains. Notes describe IVT and precipitation origins, together with verification issues pertaining to specific IOPs.

c. Regional scores

For continental scales, precipitation impacts are insignificant (not shown). However, regional statistics over the U.S. West Coast, viz., the operational threat and bias scores for 0000 UTC initializations valid at 1200 UTC (Figs. 9a,b) over a domain approximately equal to the WEST domain, show some positive impact on the threat score, particularly for moderate rainfall amounts (10 mm day−1) at 132–144 h and heavy rainfall (35 mm) at 96–120 h. Furthermore, time-averaged 108–132-h CTRL threat scores for all precipitation amounts (Fig. 9c) are improved over DENY, with the 6- and 10-mm categories statistically significant at the 99% level.

Fig. 9. (a) CTRL threat score for the experimental period. (b) CTRL − DENY difference for 36–180-h precipitation forecasts over the U.S. West Coast region (approximately 32°–49.5°N, 115°–125°W) for the AR2020 experimental period. The score is for 0.2–75 mm day−1 precipitation thresholds for 24-h accumulations. Positive impact is in green, and negative impact is in red. (c) The 108–132-h forecast averaged CTRL (red) and DENY (black) threat scores and CTRL − DENY differences. (d) As in (c), but for bias scores. Differences outside the vertical boxes indicate statistical significance at the 99% level.

Forecast improvement statistics of CTRL over DENY [Eq. (2)], averaged over all 0000 and 1200 UTC cases for the WEST domain (Fig. 10a), show generally positive impact of 1%–4% in the 24–60-h range, but mixed impact for the maximum domain-wide precipitation, even though the domain-averaged precipitation is improved by almost 4% at 60 h. At 72–84 h, statistics are preponderantly negative, except for the domain-wide FF maximum and average statistics. For longer forecasts (96–120 h), improvements are positive, consistent with the regional threat scores (Figs. 10a,b). Most cases are improvements, with majorities of 5%–15%, over 24–120 h except for IDFpmin (Fig. 10b).

Fig. 10. (a) Average improvement (%) of CTRL over DENY for 96 twice-daily (0000 and 1200 UTC) forecast cases for 24–120-h forecasts. Forecasts are initialized from 0000 UTC 24 Jan to 1200 UTC 11 Mar over the WEST domain (Table 1). Statistics are as described in the text. (b) As in Fig. 10a, but for percent improved cases (percent > 50).

PNNC domain statistics are mixed for 24-h forecasts (Fig. 11a); they show mostly small positive impacts of <5% for DF quantities at 36–60 h, generally negative impacts of at most −5% at 72–96 h, and mostly positive impacts at 108–120 h. It is, however, notable that the FF statistics (IFFpmax and IFFpavg) remain positive at approximately 5%–15% throughout 36–120 h, indicating that the dropsonde data result in domain-wide maximum and average precipitation rates closer to the CCPA observations in the PNNC domain, as illustrated by Figs. 6 and 7. Moreover, the corresponding case majorities (Fig. 11b) are approximately 7%–15% for FF quantities and generally positive for most DF quantities, indicating an overall improved geographical precipitation distribution in the CTRL forecasts relative to DENY.

Fig. 11. (a) As in Fig. 10a, but for the PNNC domain. (b) As in Fig. 10b, but for the PNNC domain.

The above case-averaged improvement statistics, after averaging over all forecast hours and all statistics, are positive for all domains, with a maximum improvement of 3.21% for the PNNC domain (Table 4) and the largest plurality of positive cases (3.8%). Improvements for the PNNC domain are most encouraging for the maximum forecast value (IFFpmax, 10.7%) and the domain-average value (IFFpavg, 5.7%), each of which also has a strong accompanying plurality of positive impacts. While the SCAN domain has smaller impacts overall, the individual statistics indicate positive impacts and positive plurality in all but one quantity (IDFpmin). More details on impacts, including for the SCAN domain and a summary of all cases for the IDFpstd statistic, are given in appendix B.

Table 4. Improvement statistics of CTRL over DENY forecast precipitation averaged over 96 cases and forecast hours 24–120 for the WEST domain and the two subdomains, PNNC and SCAN (Table 1). The number in parentheses is the percent of positive impact cases, expressed as a majority (i.e., >0 means more than 50% improved cases and vice versa for negative values). The right column is the average value of all statistics for each domain.

Stratification by forecast hour over each domain (Table 5) reveals no definite trend of larger forecast improvements at short range (24–48 h) with decreases at midrange (60–84 h) or longer range (96–120 h), either for a single statistic or in the average over all statistics. If anything, improvements tend to be smaller at the shorter ranges, which is possibly due to the nonlinearity of the model precipitation generation (further discussion of this somewhat unexpected result is given in section 5e below).

Table 5. As in Table 4, but for averages over 24–48, 60–84, and 96–120 h for each domain.

d. Direct impacts from IOPs only

For each IOP (except IOP-11, which has no CCPA verification over western Canada), precipitation statistics were selected for critical forecast hours over the more relevant, more local, PNNC and SCAN verification domains (Table 2). Consistent with operational precipitation verification at 1200 UTC, verification of 0000 UTC initializations is done for forecast ranges 12–36, 36–60 h, etc. Overall, results are mixed for each forecast hour (Fig. 12). Precipitation forecasts for 96 h are improved in four of five statistics. OC-averaged IDFpmax and IDFpstd are positive for longer forecasts, implying an improved geographical precipitation distribution. Short-range forecasts (24–36 h) are mostly not improved, while 48–60-h forecasts have improved IDFpstd. At 72–84 h, forecasts are largely degraded or unchanged across multiple statistics. These statistics are consistent with error reductions for the mass, wind, and specific humidity fields as presented in Part II.

Fig. 12. Summary of precipitation improvement statistics (I) for 24–132 h, including all IOPs in the AR2020 OC that verify during a period of high-impact precipitation. Improved (green, I > 1%), degraded (red, I < −1%), and unchanged (yellow, −1% ≤ I ≤1%) impacts are indicated for each forecast hour. The number of IOPs for each forecast hour, the total number of valid dates, and the number of statistics improved, degraded, and unchanged are also tabulated.

e. Case studies and correlation analysis

While the statistical precipitation forecast improvement results presented above are (for the most part) mixed, there are some individual IOP cases with improved forecast periods that are illustrative for further analysis. Generally, the complexities of generating and verifying improved precipitation forecasts from a global forecast system are considerable. First, assimilating observations of mass (geopotential height, temperature, and surface pressure), wind, and humidity [henceforth “non-precipitation” model quantities (NPQs)] is a global, nonlinear process because of the interaction of the cycled data assimilation algorithm with the forecast model. Next, model precipitation results from nonlinear, parameterized physical processes acting on the NPQs as well as on derived quantities (e.g., vertical velocity) and is, therefore, not solely determined by the NPQs. Last, model forecast evolution can, for example, produce positive results early in the forecast but negative results later on. As a result, interpretation of the NPQ observation impacts is not always amenable to a “cause and effect” diagnosis, but some insights from specific cases can be gained, with the caveat that they should be applied elsewhere with caution.

Verifying NPQs in the context of precipitation forecast improvements requires considerations beyond a straightforward comparison of IVT and NPQ improvements with those for precipitation over regional domains such as PNNC and SCAN. While precipitation impact is clearly and closely connected with the IVT and NPQ verifications, wind and moisture quantities are local to the precipitation evolution itself, whereas IVT positioning can be affected by the larger-scale regional environment of interacting high and low pressure systems. The regional-scale IVT and NPQ verifications (see Part II) can therefore be misleading or only marginally relevant to the local precipitation verification domains (Table 1, Fig. 1). In this section, therefore, special verifications for IVT and NPQs are performed over minor extensions of the PNNC domain (PNNC_x), northward and southward to 41°–51°N and westward to 231°–243°E, and of the SCAN domain (SCAN_x), southward to 20°N while retaining the east–west extent (Table 1).
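
The IVT and NPQ improvements reported in Tables 6–8 are MAE based; a minimal version of that calculation over a PNNC_x-like box is sketched below, with the verifying analysis, the grid, and the synthetic fields treated as illustrative assumptions.

```python
import numpy as np

def mae_improvement(ctrl, deny, verif, mask):
    """Percent MAE improvement of CTRL over DENY inside a domain mask."""
    mae_ctrl = np.abs(ctrl - verif)[mask].mean()
    mae_deny = np.abs(deny - verif)[mask].mean()
    return 100.0 * (mae_deny - mae_ctrl) / mae_deny

# Illustrative 850-hPa specific humidity fields (g kg-1) on a 0.25-deg grid
rng = np.random.default_rng(5)
lats = np.arange(20.0, 60.25, 0.25)
lons = np.arange(200.0, 260.25, 0.25)
lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
pnnc_x = (lat2d >= 41) & (lat2d <= 51) & (lon2d >= 231) & (lon2d <= 243)  # per text
verif = rng.gamma(2.0, 2.0, size=lat2d.shape)        # stand-in verifying analysis
ctrl = verif + rng.normal(0.0, 0.4, verif.shape)
deny = verif + rng.normal(0.0, 0.6, verif.shape)
print(f"SPCH850 MAE improvement: {mae_improvement(ctrl, deny, verif, pnnc_x):.1f}%")
```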

Three cases, from IOPs 5, 10, and 17, are chosen for illustration and are followed by a correlation analysis between precipitation and IVT impacts and between IVT and NPQ impacts. Additional impacts of assimilating the AR2020 dropsonde NPQ data on IVT and NPQ fields, together with additional cases, are found in Part II and Lord et al. (2022a).

1) Case studies

(i) IOP-5 (5 February)

At 0000 UTC 4 February, a strong IVT maximum stretches across the northeast Pacific from just north of Vancouver Island to northwest of Hawaii; there are strong northwest winds and a low pressure area approaching northwest Washington State at the initial forecast hour. The IVT maximum lasts for 72 h and precipitation exceeds 50 mm day−1 through 96 h (Fig. 8a). For the 24-h forecast, there is a modest improvement in the precipitation statistic (IDFpstd) of 2% (Table 6), but large degradations occur from 48 to 72 h.

Table 6. Improvement (%) of the IDFpstd statistic for IOP-5 (5 Feb) and corresponding improvements (%) in MAE statistics for IVT and 850-hPa geopotential height (Z), wind speed (WSPD), and specific humidity (SPCH). Precipitation improvement statistics are not generated for forecast lengths less than 24 h due to the 24-h accumulation period but NPQ statistics are. The maximum observed precipitation (mm day−1) is also listed for each forecast verification time.

For example, the 48-h CTRL precipitation (Fig. 13b) is excessive over the Olympic Peninsula and to its east, while the DENY is improved in both areas compared with the verifying CCPA (Figs. 13c and 14) but is nevertheless an overprediction.

Fig. 13. (a) 24-h CCPA accumulated observed precipitation (mm) ending at 0000 UTC 7 Feb. (b) CTRL 24–48-h forecast precipitation (mm) valid at 0000 UTC 7 Feb. (c) As in (b), but for the DENY forecast.

Fig. 14. (a) Difference of the DENY 24–48-h forecast precipitation (mm) with the CCPA observed precipitation valid at 0000 UTC 7 Feb. (b) As in (a), but for the CTRL forecast.

Degradations in the CTRL IVT MAE occur from 36 to 60 h (Table 6), thereby preceding the largest negative IDFpstd impacts by 12 h; the CTRL IVT is of larger magnitude and shifted northward compared with the verification (Fig. 15), while the DENY IVT is smaller in the vicinity of Vancouver Island, Canada, but still overpredicted. Moreover, the concurrent, large degradations in Z850, WSPD850, and SPCH850 (Table 6) are consistent with overpredicting the IVT and are largest (percentage-wise) for Z850. Inspecting the CTRL/DENY error maps for these fields shows an increased north–south geopotential gradient for CTRL relative to DENY over the domain and accompanying IVT and WSPD increases over the PNNC domain (Figs. 15 and 16, respectively). These impacts are on the mesoscale, on the order of 100–200 km in extent; they evolve differently throughout the length of the forecast owing to both the assimilated observations and the differences in the cycled background field, and they are driven by the NPQ variables and the model physical parameterizations in the forecast itself.

Fig. 15. (a) CTRL IVT error (kg m−1 s−1, shaded) for the 48-h IOP-5 forecast valid at 0000 UTC 7 Feb. Dotted line is the ECMO verifying analysis. (b) As in (a), but for the DENY forecast.

Fig. 16. (a) CTRL 850-hPa wind speed error for the 48-h IOP-5 forecast valid at 0000 UTC 7 Feb. Dotted line is the ECMO verifying analysis. (b) As in (a), but for the DENY forecast.

(ii) IOP-10 (21 February)

Dropsondes were deployed off the Southern California coast to measure a low-latitude cyclone and associated IVT that were forecast to intensify and make landfall over Baja California at 0000 UTC 22 February, thereby posing a threat within the SCAN domain (not discussed further). Meanwhile, a second AR system, unobserved by deployed dropsondes, was moving toward the U.S.–Canadian coast and was forecast to make landfall over the PNNC domain at 1200 UTC 23 February. Under the influence of a developing cyclone west of Vancouver Island, the IVT maximum to the southeast brought a strong onshore moisture flux over northwest Washington State (Lord et al. 2022a).

The maximum precipitation over the PNNC domain for this IOP-10 event is well forecast by the CTRL (Fig. 17). The 60–84-h accumulated observed and forecast precipitation, valid at 1200 UTC 24 February, has maxima of 71 (CCPA), 64 (CTRL), and 56 mm (DENY), so the CTRL forecast maximum is much closer to observed than the DENY and is improved by 53%; moreover, the broad area of maximum precipitation in northern Washington State is improved in the CTRL relative to DENY. Difference fields of CTRL/DENY with the CCPA also show that the DENY is underforecast by 10–20 mm in the maximum precipitation area near the U.S.–Canadian border (Fig. 18a), whereas the CTRL, while still underforecast, is less so (Fig. 18b) and is improved by 4.5% in the IDFpstd statistic at 84 h (Table 7).

Fig. 17. (a) 24-h CCPA accumulated observed precipitation (mm) ending at 1200 UTC 24 Feb. (b) CTRL 60–84-h forecast precipitation (mm) valid at 1200 UTC 24 Feb. (c) As in (b), but for the DENY forecast.

Fig. 18. (a) Difference of the DENY 60–84-h forecast precipitation (mm) with the CCPA observed precipitation valid at 1200 UTC 24 Feb. (b) As in (a), but for the CTRL forecast.

Table 7. As in Table 6, but for IOP-10 (21 Feb).

The landfalling CTRL IVT error at 60 h (valid 1200 UTC 23 February, Fig. 19) is 18% worse than for DENY because it covers a larger area in the domain, although it actually has a smaller magnitude; after landfall, the 72-h CTRL forecast is improved by 22% over the DENY (Table 7). Most NPQs in the 60–84-h forecast range are improved and, at 84 h, all impacts (except IVT) are positive, with the largest for Z850 (16%) and the smallest for SPCH850 (4%).

Fig. 19. As in Fig. 15, but for the landfalling 60-h forecast for IOP-10, valid at 1200 UTC 23 Feb.

(iii) IOP-17 (11 March)

Beginning at 1200 UTC 7 March, a steady stream of IVT maxima propagated northeastward from south of 20°N to the Baja California coast under the influence of, first, an offshore high pressure system (through 0000 UTC 9 March) and, later, a developing cyclonic system west of the U.S.–Mexico border that moved slowly eastward. The number of dropsondes deployed for IOP-17 was minimal (two), but those from previous IOPs (14–16) sought to improve realizations of the cyclone and associated IVT maxima. See Lord et al. (2022a) for more details.

The IOP-17 statistical summary (Table 8) shows modest but consistent improvement in IDFpstd at most forecast ranges. The 36-h precipitation forecast for IOP-17 is improved by more than 10% in the CTRL as overpredictions in coastal Southern California and southern Arizona are reduced (Figs. 20 and 21); there is, however, little change in the underprediction over central and northern Arizona. Improvements in IVT and moisture (SPCH850) are both positive and of order 6%–7% at 36 h.

Fig. 20. (a) 24-h CCPA accumulated observed precipitation (mm) ending at 1200 UTC 12 Mar. (b) CTRL 12–36-h forecast precipitation (mm) valid at 1200 UTC 12 Mar. (c) As in (b), but for the DENY forecast.

Fig. 21. (a) Difference of the DENY 12–36-h forecast precipitation (mm) with the CCPA observed precipitation valid at 1200 UTC 12 Mar. (b) As in (a), but for the CTRL forecast.

Table 8. As in Table 6, but for IOP-17 (11 Mar). Note that there is no CCPA verifying precipitation over Baja California and Mexico so that conclusions about the overall impacts on precipitation forecasts are based on limited spatial coverage over the United States.

At 72 h (0000 UTC 14 March), there is heavy observed precipitation exceeding 35 mm day−1 over coastal areas northwest of Los Angeles and northeastward through Nevada, west-central Arizona, and northeast Texas and Oklahoma (Fig. 22a). Neither CTRL nor DENY generates more than 15 mm day−1 over these areas, and DENY has the larger amounts (Figs. 22b,c). Similar underpredictions occur for the Nevada, Arizona, and Texas/Oklahoma precipitation. Both CTRL and DENY have maxima for the Texas event, but the DENY maximum is displaced farther south and west of the verifying position (Fig. 23). Consistent with these results, the CTRL IVT improvement (20%) is due to improved positioning of the maximum over Baja California, northeast Mexico, and central Texas, the latter in conjunction with a moisture surge from the Gulf of Mexico that is much better forecast in the CTRL but impacts precipitation only in southern Texas (Fig. 8b, Table 3, and Fig. 24).

Fig. 22. (a) 24-h CCPA accumulated observed precipitation (mm) ending at 0000 UTC 14 Mar. (b) CTRL 48–72-h forecast precipitation (mm) valid at 0000 UTC 14 Mar. (c) As in (b), but for the DENY forecast.

Fig. 23. (a) Difference of the DENY 48–72-h forecast precipitation (mm) with the CCPA observed precipitation valid at 0000 UTC 14 Mar. (b) As in (a), but for the CTRL forecast.

Fig. 24. As in Fig. 15, but for the 72-h IOP-17 (11 Mar) forecast, valid at 0000 UTC 14 Mar.

2) Correlation analysis

(i) Case studies

Correlations between IDFpstd improvement and improvements to IVT and the NPQ variables, over forecast hours when significant precipitation is predicted, can provide some insight into which model variables are driving precipitation improvements (or degradations). Given that changes induced by assimilation of dropsonde data have propagated throughout each forecast and have impacted areas remote from the deployment area, a definitive attribution is difficult to achieve; the correlations may nevertheless be suggestive when accompanied by illustrations from the case studies.

Concurrent and lagged correlations were calculated from the data in Tables 6 and 8. IOP-10 (21 February) was omitted since there is only one CCPA verification of maximum precipitation equal to or greater than 50 mm day−1 (Fig. 8, Table 7); similar calculations for IOP-16 were included instead (Table 9).
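To make the lag convention used in Table 9 concrete, the short sketch below computes concurrent and 12-h lagged Pearson correlations between two improvement series sampled every 12 forecast hours. It is only an illustration of the calculation described here: the function name and the example arrays are hypothetical and do not reproduce values from Tables 6, 8, or 9.

```python
import numpy as np

def lagged_corr(precip_imp, ivt_imp, lag_steps=0):
    """Pearson correlation between precipitation-statistic improvement and
    IVT improvement, with IVT leading precipitation by `lag_steps` samples.

    Both inputs are 1-D arrays sampled at the same forecast-hour interval
    (assumed here to be 12 h, so lag_steps=1 corresponds to a 12-h lag).
    """
    if lag_steps == 0:
        x, y = ivt_imp, precip_imp
    else:
        # IVT improvement `lag_steps` samples earlier vs. later precipitation
        x, y = ivt_imp[:-lag_steps], precip_imp[lag_steps:]
    return np.corrcoef(x, y)[0, 1]

# Illustrative (made-up) improvement values (%) at 24, 36, ..., 120 h
precip_imp = np.array([2.0, 5.0, 1.0, 8.0, -3.0, 4.0, 6.0, 2.0, 7.0])
ivt_imp    = np.array([4.0, 3.0, 6.0, 2.0,  9.0, 1.0, 5.0, 8.0, 3.0])

print("concurrent r :", lagged_corr(precip_imp, ivt_imp, lag_steps=0))
print("12-h lagged r:", lagged_corr(precip_imp, ivt_imp, lag_steps=1))
```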

Table 9

Concurrent and lagged correlations between IDFpstd improvement and improvements to IVT MAE. Improvements were calculated from MAE statistics over the expanded PNNC_x and SCAN_x domains (Fig. 1). A 12-h lagged correlation, for example, correlates improvement in IDFpstd with IVT improvement 12 h earlier. Averages over all three IOPs are also given.

For IOP-5, the correlation between improvements to IDFpstd and those to IVT for 24–120-h forecasts is 0.48 for concurrent fields and 0.625 for 12-h lagged precipitation; the latter result reinforces the finding (Fig. 8) that IVT maxima often precede the maximum precipitation by 12 h. For IOPs 16 and 17 and the three-case average, however, the interactions appear to be more concurrent.

Concurrent correlations between IVT and Z850, WSPD850, and SPCH850 (Table 10) indicate varying relationships among these variables for the three cases. IOP-5 exhibits a strong correlation between IVT and both WSPD850 (Figs. 15 and 16) and SPCH850, implying that improvements to these variables are more important than improvements to Z850, which is sensitive to the spatial positioning of the IVT. For IOP-16, SPCH850 dominates, but this pattern is not echoed 24 h later for IOP-17, when the correlation between IVT and Z850 improvements is largest (Fig. 24). In conclusion, across just these three IOPs the apparently dominant variable differs from case to case, so that the average impacts of 850-hPa geopotential height, wind speed, and specific humidity are about equal.

Table 10

Concurrent correlations across 24–120-h forecasts between IVT and Z850, WSPD850, and SPCH850 for three cases. Dates and domains match those in Table 9 for each IOP.

(ii) IOP-averaged impacts

While correlations over the three chosen cases are suggestive of impacts on precipitation, correlations across all IOPs and over larger domains, such as those used in Part II and Lord et al. (2022a), can provide additional information on the impacts of assimilating NPQ observations on ARs, simply because many more IVT cases occur over the verifying oceanic domain. While some of these are the landfalling cases described in this paper (for which precipitation is emphasized), many others are at earlier development stages than treated here or entered the verification domain at other times during the AR2020 OC (see Part II and Lord et al. 2022a).

IOP-averaged concurrent correlations of IVT and layer-mean (700–925 hPa) Z, WSPD, and SPCH over 24–120-h forecasts (Table 11) show that geopotential height, wind speed, and specific humidity improvements are about equally correlated with improvements to IVT. When, however, the 0–12-h data are included in the correlations, the correlation with Zavg falls below those for WSPDavg and SPCHavg. It is possible that nonphysical model height adjustments, resulting from imbalances in the data assimilation, distort the IVT evolution during the first forecast day.
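For orientation, the sketch below shows one standard way such layer means and IVT magnitudes can be computed from fields on pressure levels: a pressure-weighted 700–925-hPa average and a trapezoidal integral of qV over pressure, IVT = (1/g)|∫ q V dp|. The exact formulation used in this study is given by Eq. (1) and in Part II; this is only an illustrative approximation with made-up profile values.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s-2)

def _trapz(y, x):
    """Trapezoidal integral of y(x) for 1-D arrays with increasing x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def layer_mean(values, p_hpa, p_top=700.0, p_bot=925.0):
    """Pressure-weighted mean of `values` between p_top and p_bot (hPa).
    `p_hpa` must be ordered from low to high pressure."""
    mask = (p_hpa >= p_top) & (p_hpa <= p_bot)
    p, v = p_hpa[mask], values[mask]
    return _trapz(v, p) / (p[-1] - p[0])

def ivt_magnitude(q, u, v, p_hpa):
    """Approximate IVT magnitude (kg m-1 s-1): (1/g) * |integral of q*(u, v) dp|."""
    p_pa = p_hpa * 100.0  # hPa -> Pa
    qu = _trapz(q * u, p_pa) / G
    qv = _trapz(q * v, p_pa) / G
    return float(np.hypot(qu, qv))

# Made-up sounding on five pressure levels (ordered low to high pressure)
p = np.array([500.0, 700.0, 850.0, 925.0, 1000.0])   # hPa
q = np.array([0.001, 0.003, 0.006, 0.008, 0.010])    # kg kg-1
u = np.array([30.0, 25.0, 20.0, 15.0, 10.0])         # m s-1
v = np.array([10.0, 12.0, 15.0, 14.0, 12.0])         # m s-1

print("700-925-hPa layer-mean wind speed:", layer_mean(np.hypot(u, v), p))
print("column IVT magnitude:", ivt_magnitude(q, u, v, p))
```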

Table 11

Concurrent correlations between IVT improvement and vertically averaged NPQ improvements (Zavg, WSPDavg, and SPCHavg) for all forecast hours over the northwest Pacific (see Part II). The NPQ vertical average spans the 700–925-hPa levels, which provide the dominant contributions to each IVT through Eq. (1).

6. Summary and discussion

Aircraft-deployed wind and thermodynamic soundings, ingested into the NCEP GDAS, provided regional precipitation forecast improvements for the WEST and PNNC domains when averaged over all forecast hours and the entire OC period. While direct impacts from all 17 IOPs were inconsistent, the short-term (12–36 h) and longer (96–120 h) forecast ranges showed an overall improved geographical precipitation distribution. Some cases, e.g., the IOP-10 84-h forecast verifying at 1200 UTC 24 February over the PNNC domain, are notably improved; other cases have mixed improvements and degradations within the same IOP (e.g., IOP-5). Over the course of the 17 IOPs, a correlation analysis suggests approximately equal overall impact across the assimilated NPQs [mass (temperature, geopotential height), wind components, and specific humidity]. Part II describes the statistical impacts for moisture and other model variables over the OC period.

Episodic dropsonde deployments impact larger areas both upstream and downstream of the verification area through the GDAS cycling process as time proceeds. Observations for a particular AR event may impact subsequent AR events as they enter the verification area beyond 36–48 h. The nature of this impact is generally unpredictable, especially for precipitation; for very limited observation coverage it is more likely to be negative or neutral than positive. Nevertheless, some positive regional impacts for 24–120-h precipitation forecasts have been shown here (e.g., Figs. 17 and 18) and for 108–168 h here and elsewhere (Wu et al. 2021); they coincide with improved wind speed and moisture statistics over the entire OC as described by Lord et al. (2022a) and Part II.

The AR Recon program has evolved from a field demonstration to a real-time operational capability, as documented in the National Winter Season Operations Plan by the Office of the Federal Coordinator for Meteorology (OFCM 2020), thereby providing more opportunities to thoroughly investigate the impact of aircraft observations on NCEP operational global model analyses and forecasts. AR Recon in 2021 consisted of multiple aircraft and several sequential (multiday) IOPs, giving unprecedented coverage of ARs. In addition, NCEP made significant advancements to the operational GFS and GDAS in 2021 (GFSv16; Yang et al. 2021; Kleist et al. 2021). For the first time, real-time data denial experiments were conducted using GFSv16. Results from those experiments will be documented in a sequel to this manuscript, along with a focused examination of precipitation forecast improvements at local watershed levels, where the impacts are found to be more pronounced.

1 Assertion based on lead author’s experience: Part II contains an example of indirect impact that occurred during the AR2020 OC.

2 An alternative approach is to perform separate data assimilation experiments, withholding all but a single observed variable. This approach, besides being computationally expensive, is beyond the scope of this paper. Furthermore, it may introduce physical imbalances in the analysis conditions: for example, assimilating temperature/geopotential height data without accompanying moisture data may distort the moist static stability in the initial condition and degrade the resulting precipitation forecast.

Acknowledgments.

This work is supported by generous funding made available from the NOAA Office of Marine and Aviation Operations (OMAO). The authors are grateful for the feedback from the Modeling and Data Assimilation Steering Committee of the AR Recon Program and thank the internal and external reviewers of this manuscript for their valuable comments.

Data availability statement.

All model data, observations, and statistical results are available through the corresponding author at the NOAA/National Centers for Environmental Prediction/Environmental Modeling Center.

APPENDIX A

AR2020 Intensive Observing Periods

Table A1 provides details on the 17 AR2020 IOPs, including dates, participating aircraft, the number of dropsondes (and failures) from each of the G-IV and C-130 aircraft, and the number of observations assimilated in real time by the GDAS for the 0000 UTC cycle and the surrounding cycles (1800 and 0600 UTC). In some IOPs (6, 7, 15, and 16), not all successful dropsondes were transmitted to NCEP and other centers. These missing data have not been recovered.

Table A1

Summary of the AR2020 IOPs. Columns identify the serial number of each IOP, the central time for the aircraft observations, the number of dropsondes (failed number), and the number of sondes included in the 1800, 0000, and 0600 UTC GDAS cycles.

APPENDIX B

Precipitation Statistics for all AR2020 OC Cases and All IOPs

a. OC statistical calculations

The IDFpstd statistic measures the variability of the difference between the forecast and the CCPA verification, with the forecast interpolated to the verifying observation grid. In cases of minimal observed 24-h accumulated precipitation (domain-average CCPA < 0.002 mm day−1), IDFpstd (and other precipitation statistics) are not calculated. Furthermore, in cases when the CTRL−DENY differences and the CCPA observation produce unrealistic improvement values (e.g., |IDFpstd| > 100%), these statistics are also not calculated.
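A minimal sketch of this screening is given below, assuming IDFpstd is the standard deviation of the forecast-minus-CCPA differences over the domain and that percent improvement is the reduction of that standard deviation in the CTRL relative to the DENY; that improvement convention is an assumption for illustration, while the 0.002 mm day−1 and 100% thresholds follow the text.

```python
import numpy as np

def idfpstd_improvement(ctrl_fcst, deny_fcst, ccpa_obs):
    """Screened percent improvement of the difference-standard-deviation
    statistic for one case and forecast hour.

    Inputs are 24-h precipitation fields (mm day-1) on the CCPA verification
    grid, with the forecasts already interpolated to that grid.  Returns
    np.nan when the case is excluded by the screening criteria in the text.
    """
    # Skip cases with negligible observed precipitation in the domain.
    if np.mean(ccpa_obs) < 0.002:
        return np.nan

    std_ctrl = np.std(ctrl_fcst - ccpa_obs)
    std_deny = np.std(deny_fcst - ccpa_obs)
    if std_deny == 0.0:
        return np.nan

    # Assumed improvement convention: percent reduction of CTRL vs DENY.
    improvement = 100.0 * (std_deny - std_ctrl) / std_deny

    # Skip unrealistic improvement values.
    return improvement if abs(improvement) <= 100.0 else np.nan
```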

b. IDFpstd statistic for the PNNC domain

Precipitation impacts for a specific statistic are summarized by plotting precipitation impact categories for each of the 96 cases and each forecast hour (Fig. B1a). For IDFpstd, impacts vary across consecutive cases and by forecast hour for a particular case. The maximum positive impact across all forecast hours is 15.9% for the 1200 UTC 19 February initialization and the maximum negative impact is −20.6% for 0000 UTC 6 March. In neither of these cases is the initialization on an IOP date, so that most of the overall impact is governed by the evolution of the wind and moisture environment through the cycled data assimilation system. A time series of the forecast-averaged IDFpstd impact (Fig. B1b) shows periods of mostly positive average impact that include IOPs (e.g., 0000 UTC 10 February–1200 UTC 19 February, IOPs 7–9) and also periods of mostly negative impact, also including IOPs (e.g., 1200 UTC 27 January–0000 UTC 5 February, IOPs 2–5).

Fig. B1.

(a) Percent improvement for the IDFpstd statistic in the PNNC domain (Table 1) for forecast hours 24–120 and initialization times every 12 h from 0000 UTC 24 Jan to 1200 UTC 11 Mar. Forecast-averaged improvements for each of the 96 cases are on the right ordinate, and case averages for each forecast hour are above the abscissa. Black areas are discarded cases due to minimal observed 24-h accumulated rainfall in the domain (Fig. 2b). (b) Time series of the forecast-averaged percent improvement for the IDFpstd statistic in the PNNC domain (Table 1). Initialization times are every 12 h from 0000 UTC 24 Jan to 1200 UTC 11 Mar.

c. SCAN domain statistics

SCAN domain statistics (Fig. B2a) show generally positive impacts of 2%–10% for domain-wide maximum and average precipitation at 24–96 h and <2% improvements for most other statistics. At 108–120 h, however, there are large positive impacts on the maximum precipitation difference (IDFpmax), accompanied by a large positive percentage of improved cases (Fig. B2b), indicating that the CTRL matches the locations of heavy precipitation to the observations better than the DENY. Overall, the CTRL has a plurality of positive impacts for most statistics except IDFpmin.

Fig. B2.

(a) As in Fig. 10a, but for the SCAN domain. (b) As in Fig. 10b, but for the SCAN domain.

d. IOP only statistics

Precipitation impact statistics for all forecast hours and each IOP are summarized in Figs. B3a–d. Very few IOPs have positive impact for more than two statistical measures. For 24-h forecasts, improvements and degradations are scattered across different IOPs with little sustained impact. Coherence across IOPs increases from 36 to 60 h, with the latter forecasts having by far the strongest and most coherent improvements across the statistical measures for both the PNNC and SCAN domains. On the other hand, for 72–84 h, improvements characterize the PNNC domain while degradations dominate the SCAN domain. From 96 to 132 h, the IDFpstd statistic is predominantly improved (see also Fig. 10), by as much as 10%–20%, indicating a better overall fit of the forecast precipitation to the verification over the SCAN domain.

Fig. B3.

(a) Summary of precipitation improvement statistics (%) for 24–36-h forecasts, including all contributing IOPs in the AR2020 OC. The verification domain and number of valid dates for each IOP are tabulated. Color key is as in Fig. 12. Missing IOPs do not have forecasts that verify at times of impactful precipitation. (b) As in (a), but for 48–60-h forecasts. (c) As in (a), but for 72–84-h forecasts. (d) As in (a), but for 96–132-h forecasts.

REFERENCES

  • Aberson, S. D., J. Cione, C.-C. Wu, M. M. Bell, J. Halverson, C. Fogarty, and M. Weissmann, 2010: Aircraft observations of tropical cyclones. Global Perspectives on Tropical Cyclones: From Science to Mitigation, J. C. L. Chan and J. D. Kepert, Eds., World Scientific Publishing Company, 227–240, https://doi.org/10.1142/9789814293488_0008.

  • Brennan, M. J., D. T. Kleist, K. Howard, and S. J. Majumdar, 2015: The impact of supplemental dropwindsonde data on the structure and intensity of Tropical Storm Karen (2013) in the NCEP Global Forecast System. Wea. Forecasting, 30, 683–691, https://doi.org/10.1175/WAF-D-15-0002.1.

  • Doyle, J. D., C. A. Reynolds, and C. Amerault, 2019: Adjoint sensitivity analysis of high-impact extratropical cyclones. Mon. Wea. Rev., 147, 4511–4532, https://doi.org/10.1175/MWR-D-19-0055.1.

  • Elless, T. J., X. Wu, and V. Tallapragada, 2021: Identifying atmospheric river reconnaissance targets using ensemble forecasts. The 2021 Blue Book, 1-07, 7–8, http://bluebook.meteoinfo.ru/uploads/2021/sections/BB_21_S1.pdf.

  • Hamill, T. M., F. Yang, C. Cardinali, and S. J. Majumdar, 2013: Impact of targeted winter storm reconnaissance dropwindsonde data on midlatitude numerical weather predictions. Mon. Wea. Rev., 141, 2058–2065, https://doi.org/10.1175/MWR-D-12-00309.1.

  • Harris, L. M., and S.-J. Lin, 2013: A two-way nested global–regional dynamical core on the cubed-sphere grid. Mon. Wea. Rev., 141, 283–306, https://doi.org/10.1175/MWR-D-11-00201.1.

  • Harris, L. M., L. Zhou, X. Chen, and J.-H. Chen, 2020a: The GFDL finite-volume cubed-sphere dynamical core. NOAA Tech. Memo. OAR GFDL2020-001, 10 pp., https://doi.org/10.25923/7h88-c534.

  • Harris, L. M., and Coauthors, 2020b: GFDL SHiELD: A unified system for weather-to-seasonal prediction. J. Adv. Model. Earth Syst., 12, e2020MS002223, https://doi.org/10.1029/2020MS002223.

  • Hou, D., and Coauthors, 2014: Climatology-calibrated precipitation analysis at fine scales: Statistical adjustment of Stage IV toward CPC gauge-based analysis. J. Hydrometeor., 15, 2542–2557, https://doi.org/10.1175/JHM-D-11-0140.1.

  • Jolliffe, I. T., and D. B. Stephenson, 2003: Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley and Sons, 240 pp.

  • Joyce, R. J., J. E. Janowiak, P. A. Arkin, and P. Xie, 2004: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. J. Hydrometeor., 5, 487–503, https://doi.org/10.1175/1525-7541(2004)005<0487:CAMTPG>2.0.CO;2.

  • Kleist, D. T., and K. Ide, 2015a: An OSSE-based evaluation of hybrid variational–ensemble data assimilation for the NCEP GFS. Part I: System description and 3D-Hybrid results. Mon. Wea. Rev., 143, 433–451, https://doi.org/10.1175/MWR-D-13-00351.1.

  • Kleist, D. T., and K. Ide, 2015b: An OSSE-based evaluation of hybrid variational–ensemble data assimilation for the NCEP GFS. Part II: 4DEnVar and hybrid variants. Mon. Wea. Rev., 143, 452–470, https://doi.org/10.1175/MWR-D-13-00350.1.

  • Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, R. M. Errico, and R. Yang, 2009: Improving incremental balance in the GSI 3DVAR analysis system. Mon. Wea. Rev., 137, 1046–1060, https://doi.org/10.1175/2008MWR2623.1.

  • Kleist, D. T., and Coauthors, 2021: NCEP operational global data assimilation upgrades: From versions 15 through 16. Special Symp. on Global and Mesoscale Models: Updates and Center Overviews, Online, Amer. Meteor. Soc., 12.3, https://ams.confex.com/ams/101ANNUAL/meetingapp.cgi/Paper/378554.

  • Lavers, D. A., M. J. Rodwell, D. S. Richardson, F. M. Ralph, J. D. Doyle, C. A. Reynolds, V. Tallapragada, and F. Pappenberger, 2018: The gauging and modeling of rivers in the sky. Geophys. Res. Lett., 45, 7828–7834, https://doi.org/10.1029/2018GL079019.

  • Lavers, D. A., and Coauthors, 2020: Forecast errors and uncertainties in atmospheric rivers. Wea. Forecasting, 35, 1447–1458, https://doi.org/10.1175/WAF-D-20-0049.1.

  • Lin, S.-J., 2004: A “vertically Lagrangian” finite-volume dynamical core for global models. Mon. Wea. Rev., 132, 2293–2307, https://doi.org/10.1175/1520-0493(2004)132<2293:AVLFDC>2.0.CO;2.

  • Lin, S.-J., and R. B. Rood, 1997: An explicit flux-form semi-Lagrangian shallow-water model on the sphere. Quart. J. Roy. Meteor. Soc., 123, 2477–2498, https://doi.org/10.1002/qj.49712354416.

  • Lin, Y., and K. E. Mitchell, 2005: The NCEP Stage II/IV hourly precipitation analyses: Development and applications. Preprints, 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2, https://ams.confex.com/ams/Annual2005/techprogram/paper_83847.htm.

  • Lord, S., X. Wu, and V. Tallapragada, 2022a: Overview of the 2020 atmospheric rivers field campaign. NOAA/NCEP Office Note 508, 168 pp., https://doi.org/10.25923/pjwn-p075.

  • Majumdar, S. J., M. J. Brennan, and K. Howard, 2013: The impact of dropwindsonde and supplemental rawinsonde observations on track forecasts for Hurricane Irene (2011). Wea. Forecasting, 28, 1385–1403, https://doi.org/10.1175/WAF-D-13-00018.1.

  • McCormack, J. P., S. D. Eckermann, D. E. Siskind, and T. J. McGee, 2006: CHEM2D-OPP: A new linearized gas-phase ozone photochemistry parameterization for high-altitude NWP and climate models. Atmos. Chem. Phys., 6, 4943–4972, https://doi.org/10.5194/acp-6-4943-2006.

  • McCormack, J. P., K. W. Hoppel, and D. E. Siskind, 2008: Parameterization of middle atmospheric water vapor photochemistry for high-altitude NWP and data assimilation. Atmos. Chem. Phys., 8, 7519–7532, https://doi.org/10.5194/acp-8-7519-2008.

  • OFCM, 2020: National winter season operations plan. OFCM Rep. FCM-P13-2020, OFCM, 126 pp., https://www.icams-portal.gov/resources/ofcm/nwsop/2020_nwsop.pdf.

  • Putman, W. M., and S.-J. Lin, 2007: Finite-volume transport on various cubed-sphere grids. J. Comput. Phys., 227, 55–78, https://doi.org/10.1016/j.jcp.2007.07.022.

  • Ralph, F. M., and Coauthors, 2014: A vision for future observations for western U.S. extreme precipitation and flooding. J. Contemp. Water Res. Educ., 153, 16–32, https://doi.org/10.1111/j.1936-704X.2014.03176.x.

  • Ralph, F. M., M. D. Dettinger, M. M. Cairns, T. J. Galarneau, and J. Eylander, 2018: Defining “atmospheric river”: How the glossary of meteorology helped resolve a debate. Bull. Amer. Meteor. Soc., 99, 837–839, https://doi.org/10.1175/BAMS-D-17-0157.1.

  • Ralph, F. M., and Coauthors, 2020: West Coast forecast challenges and development of atmospheric river reconnaissance. Bull. Amer. Meteor. Soc., 101, E1357–E1377, https://doi.org/10.1175/BAMS-D-19-0183.1.

  • Reynolds, C. A., J. D. Doyle, F. M. Ralph, and R. Demirdjian, 2019: Adjoint sensitivity of North Pacific atmospheric river forecasts. Mon. Wea. Rev., 147, 1871–1897, https://doi.org/10.1175/MWR-D-18-0347.1.

  • Schäfler, A., G. Craig, H. Wernli, P. Arbogast, J. D. Doyle, and R. McTaggart-Cowan, 2018: The North Atlantic waveguide and downstream impact experiment. Bull. Amer. Meteor. Soc., 99, 1607–1637, https://doi.org/10.1175/BAMS-D-17-0003.1.

  • Schindler, M., M. Weissmann, A. Schäfler, and G. Radnoti, 2020: The impact of dropsonde and extra radiosonde observations during NAWDEX in autumn 2016. Mon. Wea. Rev., 148, 809–824, https://doi.org/10.1175/MWR-D-19-0126.1.

  • Stone, R. E., C. A. Reynolds, J. D. Doyle, R. H. Langland, N. L. Baker, D. A. Lavers, and F. M. Ralph, 2020: Atmospheric river reconnaissance observation impact in the Navy Global Forecast System. Mon. Wea. Rev., 148, 763–782, https://doi.org/10.1175/MWR-D-19-0101.1.

  • Torn, R. D., and G. J. Hakim, 2008: Ensemble-based sensitivity analysis. Mon. Wea. Rev., 136, 663–677, https://doi.org/10.1175/2007MWR2132.1.

  • Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. International Geophysics Series, Vol. 100, Academic Press, 648 pp.

  • Wu, X., V. Tallapragada, S. Lord, K. Wu, and M. Ralph, 2021: Impact of atmospheric river reconnaissance dropsonde data on NCEP GFS forecast: A case study. The 2021 Blue Book, 1–23, http://bluebook.meteoinfo.ru/uploads/2021/sections/BB_21_S1.pdf.

  • Yang, F., and V. Tallapragada, 2018: Evaluation of retrospective and real-time NGGPS FV3GFS experiments for Q3FY18 beta implementation. 25th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 5B.3, https://ams.confex.com/ams/29WAF25NWP/webprogram/Paper345231.html.

  • Yang, F., V. Tallapragada, D. T. Kleist, A. Chawla, J. Wang, R. Treadon, and J. Whitaker, 2021: On the development and evaluation of NWS Global Forecast Systems version 16. Special Symp. on Global and Mesoscale Models: Updates and Center Overviews, Online, Amer. Meteor. Soc., 12.2, https://ams.confex.com/ams/101ANNUAL/meetingapp.cgi/Paper/378135.

  • Zheng, M., and Coauthors, 2021a: Data gaps within atmospheric rivers over the northeastern Pacific. Bull. Amer. Meteor. Soc., 102, E492–E524, https://doi.org/10.1175/BAMS-D-19-0287.1.

  • Zheng, M., and Coauthors, 2021b: Improved forecast skill through the assimilation of dropsonde observations from the atmospheric river reconnaissance program. J. Geophys. Res. Atmos., 126, e2021JD034967, https://doi.org/10.1029/2021JD034967.

  • Zhou, L., S.-J. Lin, J.-H. Chen, L. M. Harris, X. Chen, and S. L. Rees, 2019: Toward convective-scale prediction within the next generation global prediction system. Bull. Amer. Meteor. Soc., 100, 1225–1243, https://doi.org/10.1175/BAMS-D-17-0246.1.