Unlocking GOES: A Statistical Framework for Quantifying the Evolution of Convective Structure in Tropical Cyclones

Trey McNeely, Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania
Ann B. Lee, Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania
Kimberly M. Wood, Department of Geosciences, Mississippi State University, Mississippi State, Mississippi
Dorit Hammerling, Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, Colorado

Abstract

Tropical cyclones (TCs) rank among the most costly natural disasters in the United States, and accurate forecasts of track and intensity are critical for emergency response. Intensity guidance has improved steadily but slowly, as processes that drive intensity change are not fully understood. Because most TCs develop far from land-based observing networks, geostationary satellite imagery is critical to monitor these storms. However, these complex data can be challenging to analyze in real time, and off-the-shelf machine-learning algorithms have limited applicability on this front because of their “black box” structure. This study presents analytic tools that quantify convective structure patterns in infrared satellite imagery for overocean TCs, yielding lower-dimensional but rich representations that support analysis and visualization of how these patterns evolve during rapid intensity change. The proposed feature suite targets the global organization, radial structure, and bulk morphology (ORB) of TCs. By combining ORB and empirical orthogonal functions, we arrive at an interpretable and rich representation of convective structure patterns that serve as inputs to machine-learning methods. This study uses the logistic lasso, a penalized generalized linear model, to relate predictors to rapid intensity change. Using ORB alone, binary classifiers identifying the presence (vs absence) of such intensity-change events can achieve accuracy comparable to classifiers using environmental predictors alone, with a combined predictor set improving classification accuracy in some settings. More complex nonlinear machine-learning methods did not perform better than the linear logistic lasso model for current data.

Corresponding author: Trey McNeely, imcneely@andrew.cmu.edu

1. Introduction

Tropical cyclones (TCs) can be long lived, powerful, and spatially extensive; these three factors place them among the most damaging and costly natural disasters in the United States (Klotzbach et al. 2018). Our TC guidance has improved over the past few decades, though reductions in track forecast error have outpaced reductions in intensity forecast error (DeMaria et al. 2014). Rapid intensity change [≥30-kt (1 kt ≈ 0.51 m s−1) increase or decrease in 24 h] continues to challenge forecasters, and the need to better understand the processes that drive these changes spurred the Hurricane Forecast Improvement Project (HFIP; http://www.hfip.org/) with specific goals to further reduce both track and intensity forecast errors.

Because the near-storm environment influences TC intensity, understanding and predicting the evolution of that environment is a key component in forecasting intensity change. The operational Statistical Hurricane Intensity Prediction Scheme (SHIPS; DeMaria and Kaplan 1999) uses large-scale, numerical weather prediction (NWP)-derived diagnostics of environmental factors known to affect TC intensity change, including vertical wind shear and relative humidity. SHIPS also relies on observations such as infrared (IR) imagery and sea surface temperature (SST). Environments in which TCs preferentially undergo rapid intensification (RI; operationally defined as an increase of ≥30 kt in 24 h) tend to have higher SSTs, greater atmospheric humidity, and lower vertical wind shear (e.g., Kaplan and DeMaria 2003). To target RI events, a corresponding subset of SHIPS predictors was used to develop the SHIPS rapid intensification index (SHIPS-RII; Kaplan et al. 2010). SHIPS predictors can also provide insight into the characteristics of rapid weakening events (RW; defined as a decrease of ≥30 kt in 24 h; Wood and Ritchie 2015).

Although NWP-derived predictors are included in SHIPS, global NWP models such as the Global Forecast System (GFS) cannot resolve subgrid-scale processes like convection. In situ TC observations aid in capturing small-scale storm features, but such aircraft reconnaissance is infrequent and expensive. Because most TCs develop and strengthen far from dense, land-based observing networks, analysts have long relied on geostationary satellite observations to monitor TCs. Since colder cloud tops imply deeper convection and thus stronger updrafts, IR observations of cloud-top brightness temperatures Tb provide a proxy for convective strength. In addition, as TCs intensify, the distribution of convection becomes more symmetric around the storm center (i.e., axisymmetric). The relationship between convective organization and the maximum intensity of TCs was used to develop the original Dvorak technique (Dvorak 1975), a subjective method relating patterns in IR imagery to TC intensity.

Today, the Advanced Baseline Imager (ABI; Schmit et al. 2017) on Geostationary Operational Environmental Satellite (GOES)-16 and GOES-17 observes the North Atlantic (NAL) and eastern North Pacific (ENP) every 10 min in 16 bands at spatial resolutions from 2 km for infrared channels to 0.5 km for the visible band 2. The current version of SHIPS incorporates ABI IR observations as percent coverage of Tb below −30°C and standard deviation of Tb, each computed within a 50–200-km annulus centered on the TC (Kaplan et al. 2015). However, single-value representations produced every 6 h cannot fully utilize the information provided by these finer spatial and temporal resolutions. How, then, do we objectively “unlock” the increasingly rich information contained in geostationary IR imagery that may highlight physical processes associated with short-term (≤24 h) TC intensity change? From a practical point of view, there are two challenges: (i) forecasters have access to increasingly detailed data but have the same limited time (6 h) to prepare each operational forecast, and (ii) although machine-learning algorithms theoretically can analyze sequences of high-resolution images in an automated way, the extracted features can be difficult to interpret. However, if we can extract meaningful features that are both informative inputs to machine-learning methods and interpretable by forecasters, then we can reduce forecaster effort and leverage some of the predictive power of modern statistical methods.

To address these challenges, we develop a rich and scientifically motivated family of IR imagery descriptors through our global organization, radial structure, and bulk morphology (ORB) feature suite for Tb. These features identify patterns in Tb structure in three broad categories: (i) global organization, (ii) radial structure, and (iii) bulk morphology. Each category targets one aspect of convective structure by mapping the complex Tb image data to summary statistics computed over a range of temperature or radial distance thresholds. We will hereafter refer to the continuous treatment of ORB statistics as ORB functions. As demonstrated in Figs. 1 and 2, these functions transition from threshold-based statistics (e.g., coverage of Tb below −30°C) to continuous functions of thresholds (e.g., coverage of Tb below a continuously varying temperature). Furthermore, we develop an empirical orthogonal function (EOF) representation of each ORB function to capture its main modes of variability. The EOFs are computed separately for the NAL and ENP basins via principal component analysis (PCA); see Fig. 3 and appendix A for details.

Fig. 1. DAV(r) as an ORB function for organization: DAV as an ORB function of the threshold r and deviation angles for each image in Fig. 8, i.e., Edouard (2014) (solid line and left inset) and Nicole (2016) (dashed line and right inset).

Fig. 2. SIZE and SKEW as ORB statistics for bulk morphology: (left) Example level set for a threshold of c = −20°C on Edouard’s Tb (Fig. 8). (right) Level sets for {0°, −20°, −40°, −60°C} (in descending order, with −20°C repeated). SIZE(c) is the area covered by a level set; here, the size is 400 000 km², or 20% of the stamp area. SKEW(c) is the displacement of a level set’s center of mass (yellow; 34 km from center) normalized by the average displacement of points in the set (green; 92 km from center); here, the skew is 0.37 west-northwest.

Fig. 3. EOFs and mean ORB functions: EOFs for all six ORB functions in the (top) NAL and (middle) ENP, with (bottom) sample means with 95% pointwise confidence intervals estimated by a stationary bootstrap (Politis and Romano 1994). The computed EOFs differ significantly only in the sign of the second ORB coefficient for eccentricity (ECC2), which merely flips the direction of interpretation. The difference between basins is largely contained in the sample means. Despite these differences, the same qualitative structures persist between basins.

Our final representation of TC convective structure is physically interpretable (Figs. 4, 5) and supports both analysis and visualization of evolving convective structure patterns in near–real time (section 3b). The ORB functions capture some IR patterns analogous to the advanced Dvorak technique (ADT; Olander and Velden 2007, 2019) scene types as well as novel approaches to potentially informative convective patterns. For example, the EOFs for radial profiles and bulk morphology can mimic the ADT’s cloud region and eye region scene type scores, whereas global organization provides an alternate approach to assessing Tb axisymmetry versus the cloud symmetry value included in the ADT. The ORB framework is also extensible to any thresholded feature, as defined in section 3; this enables further characterization of both additional ADT scene types and novel patterns in an objective, interpretable fashion.

Fig. 4. EOFs for radial profiles: Results from PCA of NAL TCs. The first three EOFs capture 90% of the variability in the ORB functions for radial structure and describe the three primary orthogonal shapes present in their profiles; see section 3b for details.

Fig. 5. Hourly SIZE functions for Hurricane Katrina (2005). Lighter colors indicate higher intensities. The first two EOFs in the inset capture 94% of the variance in the size functions.

This study focuses on applying the ORB framework to develop interpretable and rich descriptions of Tb structure. To demonstrate that this suite of features retains information relevant to short-term intensity change, we build statistical models to diagnose the current state of TC intensification. We construct separate binary classifiers for RI and RW events that ask, “Is this TC currently in the midst of an RI event (or RW event)?” To predict the probability that the current state meets RI or RW criteria, we use logistic lasso, a penalized logistic regression that automatically selects a set of relevant predictors by shrinking the regression coefficients of the other (i.e., irrelevant) predictors to zero. For the predictors included in this study, we find that several harder-to-interpret, nonlinear machine-learning methods result in similar performance as lasso, and thus we focus on the latter given its interpretability.

Our analysis indicates that a combination of ORB features and the lasso can provide a powerful observational tool for relating short-term intensity change to convective structure patterns in overocean TCs. These features (i) enable rapidly digested summary and visualization of evolving convective structures while (ii) capturing the complexity of those structures across many thresholds in a rich and extensible framework. We find that binary classification methods restricted to GOES-derived ORB predictors achieve an accuracy at least comparable to classification methods restricted to SHIPS environmental predictors when distinguishing rapid change from nonrapid change events. Adding the ORB suite to the environmental predictor set can improve the results beyond individual predictor sets, with RI/non-RI in the NAL basin showing significant promise (Fig. 6; Table 1). ORB also allows us to interpret regression coefficients in a way that enables physical insight into the models (Fig. 7).

Fig. 6. Binary classification by logistic lasso and nonlinear classifiers: The area under the ROC curve (AUC; white markers) for test data, with bootstrap-estimated 95% confidence intervals (colored intervals) computed for each of the 16 event–basin–predictor type combinations using the classifiers (top) LASSO, (middle) RF, and (bottom) GBCT. For all settings, ORB-only has performance on par with SHIPS-only. Qualitative results persist across different classifiers. See the text for details.

Table 1. The P values for testing for differences in AUC relevant to the goals of the paper, estimated as in appendix C; P values significant at the 0.05 level for each individual model are shown in boldface. Rows 1–3 correspond to tests of ORB-only AUCs lagging significantly behind SHIPS-only, and rows 4–6 correspond to tests of SHIPS + ORB improving on SHIPS-only.
Fig. 7. Interpreting regression coefficients in logistic lasso with the SHIPS + ORB predictor set (NAL): Regression coefficients for SHIPS + ORB lasso models for (left) RI events and (center) RW events, as well as (right) the difference between the RI and RW regression coefficients in the NAL basin. Variables are ordered vertically by RI − RW such that an increase in the variable with the largest RI − RW coefficient, holding all other variables constant, is associated with an increase in the probability of RI much more strongly than an increase in the probability of RW. See the text for more discussion.

2. Data

The GOES-East/West satellites provide Tb imagery with high and consistent spatial and temporal resolutions. NOAA’s Gridded Satellite (GridSat)-GOES database contains historical, hourly GOES channel 4 (IR band) Tb remapped to an even 0.04° grid over the full GOES domain (75°N to 75°S and 150°E to 5°W) in the Western Hemisphere through GOES-15 (up to 2017 at the time of this study; Knapp and Wilkins 2018). Though these resolutions are coarser than the ABI, GridSat-GOES covers the period 1994–2017 and thus ensures we have a reasonably large dataset for analysis. Due to early data acquisition difficulties, this study uses the period 1998–2016; future work will use the full period 1994–present. We refer to the TC-centered field at a given time as a “stamp.” This term is borrowed from observational astronomy, where small digital images of the sky centered around an astronomical object are sometimes referred to as postage stamps. The earliest versions of ORB were inspired by nonparametric approaches to quantifying galaxy morphology (e.g., Conselice 2014).

We use the National Hurricane Center’s “best track” hurricane database (HURDAT2; Landsea and Franklin 2013) to limit our TC sample to tropical time steps with intensities of at least 50 kt to ensure identifiable structure in Tb and exclude extratropical times. From these 6-h samples, we compute 24-h intensity change to identify rapid versus nonrapid change events and then interpolate each track to hourly resolution to extract the associated GridSat-GOES Tb stamp of radius 800 km (Fig. 8).
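To make this data handling concrete, the sketch below shows one plausible way to compute the 24-h intensity change from 6-h best track records and to interpolate the track to hourly resolution for stamp extraction. The array names (times_6h, lat, lon, vmax) are illustrative placeholders, and the forward-looking 24-h difference is one convention among several; this is not the authors' code.

```python
# Minimal sketch of the track handling described above, assuming `times_6h`
# (hours), `lat`, `lon`, and `vmax` (kt) are 1D numpy arrays from HURDAT2
# for a single TC at 6-h resolution.
import numpy as np

dv_24h = np.full(len(vmax), np.nan)
dv_24h[:-4] = vmax[4:] - vmax[:-4]            # 24 h = four 6-h steps ahead

times_1h = np.arange(times_6h[0], times_6h[-1] + 1, 1.0)
lat_1h = np.interp(times_1h, times_6h, lat)   # linear interpolation of the track
lon_1h = np.interp(times_1h, times_6h, lon)   # (adequate away from the date line)
```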

Fig. 8. GOES Tb stamps for (left) Edouard at 1800 UTC 16 Sep 2014 (showing an eye–eyewall structure at an intensity of 95 kt) and (right) Nicole at 0100 UTC 9 Oct 2016 (showing an asymmetric central dense overcast at an intensity of ~47 kt).

The SHIPS developmental database provides historical values of observed and NWP-derived predictors used to generate SHIPS forecasts. In this study, we use SHIPS predictors for reasons of availability and known correlation with TC intensity change, not for the purpose of comparing our statistical model with SHIPS operational models. Our goal is to classify the current (rather than the future) state of the TC as RI/non-RI (or, equivalently, RW/non-RW). We restrict our analysis to the 0-h SHIPS values; that is, we only use values valid at each time step rather than the forecast values valid at subsequent times. We are then able to demonstrate the merits of the ORB feature suite in interpretable analyses when used alongside selected 0-h SHIPS predictors. We include two observed oceanic SHIPS predictors: ocean heat content (OHC) and Reynolds sea surface temperature (RSST). We also select eight NWP-derived atmospheric fields: 200-hPa zonal wind (U200), relative humidity at three different levels (RHLO, RHMD, and RHHI), three measures of vertical wind shear (SHRD, SHDC, and SHRS), and maximum potential intensity (VMPI) (DeMaria 2018). The predictors in this initial set are selected for their (i) relationship to TC convection, (ii) suspected relevance to current intensity given our study’s goal of classifying current intensity change, and (iii) interpretability. For example, 200–850-hPa vertical wind shear calculated over a 200–800-km annulus (SHRD) and over a 0–500-km annulus (SHDC) are part of the SHIPS-RII predictor suite (Kaplan et al. 2015), and 500–850-hPa vertical wind shear (SHRS) also correlates with 24-h intensity change (Rhome et al. 2006). These eight NWP-derived atmospheric predictors and two oceanic predictors are used alongside our Tb-derived predictors to assess the value of combining environmental predictors with TC structure information (Table 2) in predicting the current rate of TC intensity change.

Table 2. Predictors in each predictor set, and their descriptions. Lags for persistence (Δ6V and Δ12V) are consistent with the SHIPS persistence predictor “INCV,” the incremental 6-h changes in intensity. Note that the use of three different lags will multiply the total number of predictors by a factor of 4 for each of SHIPS-only and ORB-only.

In addition to a minimum intensity of 50 kt, we further restrict our sample to overwater TCs to reduce the influence of land on TC convective structure. For a TC center to be considered over water, it must (i) be at least 250 km from land and (ii) not move within 250 km of land during the 24-h window under consideration. Since rapid intensity changes of at least 30 kt in 24 h are rare events, we ensure our study relies on a reasonable sample size by defining rapid change events as a 24-h intensity change of at least 25 kt; this lower threshold is frequently examined in operational forecasts such as SHIPS-RII as well as RI/RW literature (Kaplan et al. 2010, 2015; Wood and Ritchie 2015). We examine the evolution of all TCs that meet our criteria between 1998 and 2016 in the NAL and ENP basins, and we analyze both intensification and weakening events. Finally, we only include stamps with less than 5% of pixels missing.

The above criteria applied to 1998–2016 produce a dataset of 2811 6-h observations (Table 3), including 174 distinct RI events and 162 distinct RW events—that is, nonoverlapping runs of consecutive 6-h time steps during which a TC is undergoing RI or RW. While the above sample is used to analyze RI and RW events, we will compute the features described in section 3 on nearly 25 000 hourly stamps (14 470 NAL; 9818 ENP) to track the evolution of TCs at finer temporal resolution; that is, we develop our ORB feature suite using all stamps from the time the TC first breaches the 50-kt threshold until the last time it drops below 50 kt, regardless of proximity to land. Due to the relaxed restrictions on the included stamps, this set of hourly stamps is more than 6 times as large as the restricted, 6-h set used for the modeling of section 4.

Table 3. Observations used in our study; the first seven rows of the table refer to numbers of stamps. The “Unique TCs” row indicates the number of TCs in the sample. The bottom two rows indicate the number of RI/RW events, which each consist of multiple temporally adjacent stamps. The statistical models in section 4 are based on the final GOES + SHIPS available sample (boldface). The sample size in the NAL basin is reduced because of more frequent landfall than in the ENP basin.

3. ORB features for convective structure

ORB seeks to support statistical analysis that includes interpretable Tb features derived from IR imagery. This section describes the development of “ORB coefficients” (which later serve as inputs to our statistical models) from a suite of ORB statistics. Throughout this section, we denote the brightness temperature at location s as Tb(s). Each ORB statistic is based on a single Tb stamp and is parameterized by a threshold value for either the maximum cloud-top temperature considered c, or the radial distance from the TC center r. Though all of these statistics are functions of both Tb and either c or r, we will for notational convenience simply denote them by f(c) and f(r). The values of c and r can be fixed, or they can be allowed to vary continuously, resulting in “fixed threshold” ORB statistics or “continuous” ORB functions, respectively.

a. From ORB statistics to ORB functions

ORB statistics quantify the spatial structure of a Tb stamp by capturing (i) global organization by using departures of the image gradient from perfect symmetry about the center of the storm, (ii) radial structure by using azimuthal averages of Tb about the center, and (iii) the bulk morphology of the storm by using descriptions of Tb level sets. These ORB statistics strike a balance in the trade-off between interpretability and descriptiveness. More specifically, they attempt to capture Tb structure using a handful of numbers that correspond to intuitive aspects of TC structure (interpretability) while retaining as much of the information contained in the original Tb image as possible (descriptiveness).

Rather than describing properties of a stamp for a fixed threshold (as in a single ORB statistic), we can consider the whole range of thresholds simultaneously. This is the basic idea behind ORB functions. For example, the deviation angle variance [DAV; defined in section 3a(1)] is typically computed for a fixed radius (e.g., 250 km) or over a few different radii. However, we can choose any arbitrarily high number of sample thresholds. In the limit, we have a dense sampling that results in an approximation of a continuous function, the ORB function.

For Hurricane Nicole (2016), the radial profile functions [T¯(r); defined in section 3a(2)] exhibit structure that follows TC intensity (Fig. 10). However, it is unclear which choice of radial threshold would best describe this structure at all time steps. Many features of these curves can vary from TC to TC or within a single TC, while others may not even appear at all time steps. For example, the radius at which the global minimum occurs is not fixed in Fig. 10, while an eye (high cloud-top temperatures near r = 0) is only occasionally present. Though we can account for many of these subtleties, generating ad hoc features from T¯(r) will sacrifice the cohesiveness of a single ORB function. [We documented our attempt at such an approach for T¯(r) in McNeely et al. 2019.]

Fig. 9. Radial profile T¯(r) as an ORB function for radial structure: Radial profiles of Edouard’s Tb (Fig. 8; 1800 UTC 16 Sep 2014) are shown both centered on the best track (solid line) and centered on the eye (dashed line). If we use the best track center as the origin about which T¯(r) is computed, the temperature of the eye (near r = 0) is deflated; we start including the eyewall at r = 0 because the center of the image is not the center of the eye.

In this study, we consider the entire range of thresholds c or distances r simultaneously. This approach ensures that we do not miss structure in the feature, as selecting just a few thresholds is likely to. The density of our sampling scheme is determined by the precision of the data; here we sample at the 0.04° spatial resolution of GridSat-GOES for ORB functions f(r) of radius r and every 1°C for ORB functions f(c) of temperature threshold c.

1) Global organization

The characterization of global organization utilizes the deviation angle variance technique (Pineros et al. 2008). We first compute the 2D image gradient of Tb at each position s = (x, y) [denoted ∇Tb(s)]. Consider the core structure of Edouard (2014) in Fig. 8: the image gradient of a perfectly axisymmetric TC is expected to point either directly toward or away from the TC center. The deviation angle compares the direction of ∇Tb(s) to the gradient directions expected of such an idealized storm. We denote the deviation angle of the image gradient of Tb at a point s as ψ(s); values near ψ(s) = 0 indicate local axisymmetry.

We summarize the level of organization by taking the variance of ψ(s) over circular regions centered on the TC. We will denote the ORB statistic for global organization over a region of fixed radius r as

$$\mathrm{DAV}(r) = \mathrm{Var}\bigl[\,\psi(s) \;\big|\; |s| \le r\,\bigr]. \tag{1}$$

We will consider the value of DAV over a range of radii from r = 50 km to r = 400 km, resulting in ORB functions for global organization. Figure 1 shows DAV(r) as ORB functions for the two infrared images of Fig. 8.
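As a concrete illustration, the sketch below computes an approximate DAV(r) ORB function from a single Tb stamp. It assumes a TC-centered image on a regular grid and folds deviation angles into [−90°, 90°] before taking the variance; all names are illustrative, and details of the operational DAV technique (e.g., masking of weak gradients) are omitted.

```python
# Minimal sketch of the deviation-angle-variance (DAV) ORB function, assuming
# `tb` is a 2D numpy array of brightness temperatures centered on the TC with
# pixel spacing `dx_km` (km). Names are illustrative, not the authors' code.
import numpy as np

def dav_function(tb, dx_km=4.0, radii_km=np.arange(50, 401, 4)):
    ny, nx = tb.shape
    y, x = np.mgrid[0:ny, 0:nx]
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0          # assume storm center = image center
    dist = np.hypot(x - xc, y - yc) * dx_km          # radial distance of each pixel (km)

    gy, gx = np.gradient(tb)                         # image gradient of Tb
    grad_angle = np.arctan2(gy, gx)                  # direction of the gradient
    radial_angle = np.arctan2(y - yc, x - xc)        # direction pointing away from center

    # Deviation angle: angular difference between the gradient and the radial
    # direction, folded into [-90, 90] degrees since "toward" and "away" both
    # correspond to perfect axisymmetry.
    dev = np.degrees(np.angle(np.exp(1j * (grad_angle - radial_angle))))
    dev = np.where(dev > 90, dev - 180, dev)
    dev = np.where(dev < -90, dev + 180, dev)

    # DAV(r): variance of deviation angles over the disk |s| <= r.
    return np.array([np.var(dev[dist <= r]) for r in radii_km]), radii_km
```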

2) Radial structure

The characterization of radial structures is based on radial profiles (e.g., Sanabia et al. 2014), the angular averages of Tb(s). At a fixed distance r from the TC center, define the ORB statistic for radial structure as

$$\bar{T}(r) = \frac{1}{2\pi}\int_{0}^{2\pi} T_b(r,\theta)\, d\theta. \tag{2}$$

The primary complication comes from centering the coordinate system (i.e., defining the location of r = 0). Sanabia et al. (2014) solve this by manually identifying the center of each image, whereas we automate image centering by maximizing T¯(r) at low r when an eye is present (McNeely et al. 2019). In this work, we use the best track center when an eye is not detected. This automated image centering is also used in the computation of DAV(r).
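A minimal sketch of the azimuthal averaging follows, assuming the stamp has already been re-centered (on the eye when detected, otherwise on the best track center) as described above. Binning pixels into one-pixel-wide annuli is one simple discretization of Eq. (2); the function name and arguments are illustrative.

```python
# Minimal sketch of the radial-profile ORB function: the azimuthal average of
# Tb at each radius. Assumes `tb` and `dx_km` as in the previous sketch.
import numpy as np

def radial_profile(tb, dx_km=4.0, r_max_km=800.0):
    ny, nx = tb.shape
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.mgrid[0:ny, 0:nx]
    dist = np.hypot(x - xc, y - yc) * dx_km

    # Average Tb within annuli one pixel wide: a discrete approximation of
    # (1 / 2 pi) * integral of Tb(r, theta) over theta.
    edges = np.arange(0.0, r_max_km + dx_km, dx_km)
    which_bin = np.digitize(dist.ravel(), edges)
    tb_flat = tb.ravel()
    profile = np.array([tb_flat[which_bin == k].mean() if np.any(which_bin == k) else np.nan
                        for k in range(1, len(edges))])
    radii = 0.5 * (edges[:-1] + edges[1:])
    return profile, radii
```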

Our treatment of the radial profile also differs from earlier work in its use of functional features, defined in section 3b, rather than designed features along the curve. Sanabia et al. (2014) identify several critical points along this curve for use as features, such as the radial location of the minimum Tb within 200 km. In lieu of such designed features on T¯(r), section 3b details a method for the recovery of data-driven functional features from general ORB functions; these radial profiles serve as ORB functions for radial structure.

Figure 10 demonstrates the relationship between the radial profile and TC intensity. Radial profiles discard angular information, making them informative primarily when bulk symmetry is high, as determined from bulk morphology ORB functions defined in the next section. In such cases, radial profile temperatures can help discriminate between eye–eyewall structures and uniform central dense overcasts. In an eye–eyewall structure, the temperature at the center r = 0 will be high, while the minimum of the radial profile, min_r T¯(r), will be low; this typically indicates a strong TC. Conversely, a uniform central dense overcast is marked by low temperatures near the center r = 0 due to the absence of a clear, warm eye and generally indicates a TC is below hurricane strength (<64 kt). The slope of the profile also provides insight into the extent of the storm’s convection: gentle slopes indicate deep convection sustained over large regions, and sharp slopes indicate more localized deep convection most often associated with an eyewall.

Fig. 10. Hourly radial profiles: Example hourly eye-centered radial profiles for Hurricane Nicole (2016). Lighter colors indicate higher intensities. The inset depicts Nicole’s intensity trajectory. The high-intensity phase of the TC’s evolution corresponds to the clearest eye–eyewall structure.

3) Bulk morphology

The characterization of the bulk morphology of Tb utilizes sublevel sets. A sublevel set (hereinafter “level set”) on Tb(s) identifies the points s = (x, y) for which Tb is at or below a certain threshold. Formally, define a level set as

$$L(c) = \{\, s \mid T_b(s) \le c \,\}. \tag{3}$$

As the temperature threshold c is reduced, the level set shrinks to a region of lower temperatures and typically stronger convection. This definition of L(c) gives rise to a variety of ORB statistics for bulk morphology.

ORB currently uses four summaries for the structure of L(c); see McNeely et al. (2019) for the full mathematical definitions. SIZE(c), a function of the level set L(c), gives the area covered by L(c) in kilometers squared (the black region in Fig. 2), providing insight into the coverage of various levels of convection in a TC. Similarly, SKEW(c) is a function of the level set L(c) and gives the displacement of the center of mass of L(c), normalized by the mean radius of points in L(c), where “radius” is the distance from the TC center. This reveals whether the TC cloud pattern is biased in a particular direction (Fig. 2). The remaining two are measures of the raggedness (SHAPE) and stretching or eccentricity (ECC) of the level set L(c). Note that any functions of level sets (such as our SIZE, SKEW, SHAPE, and ECC) can themselves be regarded as functions of the level set threshold c. Hence, we refer to their continuous approximations as ORB functions for bulk morphology.
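The sketch below computes SIZE(c) and SKEW(c) over a grid of temperature thresholds, following the definitions illustrated in Fig. 2; SHAPE and ECC are omitted for brevity, and the helper names are illustrative rather than the authors' implementation.

```python
# Minimal sketch of two bulk-morphology ORB functions, SIZE(c) and SKEW(c),
# computed from sublevel sets L(c) = {s : Tb(s) <= c}. Assumes `tb` and `dx_km`
# as in earlier sketches, with the TC center at the image center.
import numpy as np

def size_and_skew(tb, dx_km=4.0, thresholds_c=np.arange(0, -81, -1)):
    ny, nx = tb.shape
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.mgrid[0:ny, 0:nx]
    dist = np.hypot(x - xc, y - yc) * dx_km        # km from TC center
    pixel_area = dx_km ** 2

    size, skew = [], []
    for c in thresholds_c:
        mask = tb <= c                             # the level set L(c)
        n = mask.sum()
        if n == 0:
            size.append(0.0)
            skew.append(np.nan)
            continue
        size.append(n * pixel_area)                # SIZE(c): area of L(c) in km^2
        # SKEW(c): displacement of the level set's center of mass from the TC
        # center, normalized by the mean radius of points in the set.
        com_x = (x[mask].mean() - xc) * dx_km
        com_y = (y[mask].mean() - yc) * dx_km
        skew.append(np.hypot(com_x, com_y) / dist[mask].mean())
    return np.array(size), np.array(skew), thresholds_c
```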

b. Analysis of TC structure via ORB and PCA

1) Dimensionality reduction via PCA

When we densely sample the ORB statistics, we obtain approximations of continuous ORB functions. Now, we will switch to a different representation (coordinate system) in which we can reduce the dimensionality of the ORB functions while retaining most of the information in the original functions. (Here, “dimension” refers to the number of values we use to represent each function; thus, the original dimension of the ORB functions would be the number of threshold/sampling values.) A more compact representation simplifies the analysis and enables us to visualize the evolution of TC convective structure over time in a low-dimensional phase diagram defined by the new representation.

Given a suitable orthogonal basis, we can write any continuous function, such as the radial temperature profiles in Eq. (2), as a linear combination of basis functions. More specifically, after subtracting the average over the basin, the function f(x) describing deviations from the basin average is

$$f(x) = \alpha_1 f_1(x) + \alpha_2 f_2(x) + \cdots + \alpha_i f_i(x) + \cdots, \tag{4}$$

where fi(x) denotes the ith basis function and αi = ⟨f, fi⟩ is the scalar orthogonal projection of f(x) onto the ith basis function, representing how much of the shape of fi(x) is present in f(x). We call the value of fi(x) the loading of the basis function at x.

For continuous functions, the number of elements in a basis is infinite. However, provided that a proper basis is chosen, one can reasonably approximate functions using a finite linear combination of basis functions. In this work, we will use a data-driven approach called principal component analysis to find a low-dimensional basis representation of our ORB functions. PCA returns an ordered set of orthogonal and normalized vectors whose linear combinations can fully reconstruct the original vectors—that is, the densely sampled ORB functions. Henceforth we will refer to the basis functions f1, f2, … from PCA as empirical orthogonal functions, to emphasize that these basis functions are empirical (data driven) and orthogonal. We compute EOFs {fi} separately for each basin and each ORB feature. The EOFs are constant in time, whereas the projections αi of the stamps onto respective EOFs fi are scalars varying with time. We refer to the latter αi-values as ORB coefficients; these coefficients serve as a time-varying representation of TC convective structure.

The first few EOFs provide the most information, and in our case, these tend to capture larger-scale structures of an ORB function. We can project an observed ORB function (a single function computed from a stamp) onto the first K EOFs of that feature to obtain the best K-dimensional approximation. [Figure A1, described in more detail in appendix A, demonstrates such a reconstruction for radial (RAD); appendix A also includes details on PCA.] In our study, we use the first K coefficients {α1, α2, …, αK} of each ORB function at each time stamp as both inputs (predictors) to machine-learning methods as well as a means to visualize the evolution of TC convective structure with time.
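A minimal sketch of this step using scikit-learn's PCA (an assumed tool; the paper does not specify its software): stack the densely sampled ORB functions for one basin and one feature into a matrix, then recover the EOFs, the basin-mean function, and the first K ORB coefficients. A stamp's K-dimensional reconstruction is then mean_function + coeffs @ eofs.

```python
# Minimal sketch of the EOF / ORB-coefficient step. The input `orb_functions`
# is an assumed (n_stamps x n_thresholds) array of densely sampled ORB
# functions for one basin and one feature.
import numpy as np
from sklearn.decomposition import PCA

def orb_coefficients(orb_functions, K=3):
    pca = PCA(n_components=K)
    coeffs = pca.fit_transform(orb_functions)    # alpha_1..alpha_K per stamp
    eofs = pca.components_                       # basis functions f_1..f_K
    mean_function = pca.mean_                    # basin-mean ORB function
    explained = pca.explained_variance_ratio_.sum()
    return coeffs, eofs, mean_function, explained
```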

2) Analysis of TC structure and evolution

We compute basin-specific EOFs separately on 14 470 NAL stamps and 9818 ENP stamps (section 2). It only takes K = 2 or 3 EOFs fi(x) to explain most of the variability in the six current ORB functions (i.e., DAV, RAD, SIZE, SKEW, SHAPE, and ECC). While objective methods (such as cross validation) exist for selecting the number of basis functions, we here use a heuristic 90% variance threshold. A larger set may prove useful in forecasting-only settings, but some of the higher-order basis functions lack the interpretability of the first two–three EOFs.

Reconstruction using the first three EOFs of radial structure explains 90% of the variance of observed radial profiles (Fig. 4). EOF 1 (solid purple) roughly measures the overall temperature anomaly of the profile, with higher loadings in the TC interior; f1(r) is positive, so generally the corresponding ORB coefficient is positive (α1 > 0) when the radial profile T¯(r) has warmer-than-average cloud tops. EOF 2 (dotted blue) measures the core temperature relative to outer-band temperatures. Since f2(r) changes sign from positive to negative around 200 km, generally α2 > 0 when the radial profile shows a warmer core; hence, α2 < 0 could indicate a uniform central dense overcast or an obscured eye. EOF 3 (dashed green) indicates the presence of an eye–eyewall structure, where f3(r) > 0 for r < 50 km and r > 300 km and f3(r) < 0 elsewhere; hence generally α3 > 0 indicates an IR-visible eye. These three EOFs alone can construct close approximations to the wide range of the profiles apparent in Fig. 10.

In Fig. 5, we apply the same ORB framework to the SIZE function for Hurricane Katrina (2005). For SIZE, we only need K = 2 EOFs to capture 90% of the variance in either basin. Because EOF 1 (solid purple) of SIZE measures the rate at which L(c) accumulates area with increasing temperature thresholds c, α1 for SIZE indicates colder overall Tb. EOF 2 (dashed green) flattens the middle of the SIZE function when α2 > 0, resulting in more coverage at high temperatures (the exposed sea surface) and low temperatures (deep convection).

Although this study’s classification methods use 6-h values to match predictors from the SHIPS developmental database (section 4), one benefit of Tb observations is high temporal resolution. It is straightforward to track the structural evolution of TCs with ORB coefficients. The EOFs fi(x) for each ORB feature are constant within a basin, but the ORB coefficients are implicitly functions of time; that is, αi = αi(t). One temporal phenomenon that can be observed in TCs is the diurnal cycle (e.g., Dunion et al. 2014), where deeper convection occurs near the TC core around local sunset and then spreads radially outward through the following afternoon. In Fig. 11, the time evolution of the first two ORB coefficients for SIZE exhibits a 24-h periodic clockwise oscillation for Hurricane Katrina (2005). Though SIZE is insensitive to the location of convection and thus cannot measure the outward motion of such pulses, the observed oscillations in this phase space may be a manifestation of the TC diurnal cycle. In the phase diagram, we see two primary deviations from this cycle: when the TC turns north and accelerates [near point D, when it becomes a major hurricane (100 kt)] and when the TC begins to interact with land (the rightward translation toward point E).

Fig. 11. Visualizing SIZE evolution with ORB: (a) The spatial trajectory shows the path of Hurricane Katrina (2005) through the Gulf of Mexico from 1800 UTC 25 Aug through 0000 UTC 30 Aug, with the labels marking the locations of displayed ORB functions. (b) At six points (A–F, labeled) in time, we plot the SIZE function (black) and the sample mean SIZE function for the NAL basin (gray). (c) The trajectory of the ORB coefficients of the SIZE function in phase space during the storm’s path. The clockwise cyclic path in (c) matches the local night/day cycle as well as the storm’s path in (a); see the text for details.

c. Summary of the ORB framework

ORB quantifies convective structure in TCs as revealed by Tb imagery in a way that is both descriptive enough to support statistical analyses and interpretable enough to support scientific inquiry into and clear visualization of temporal evolution of rapidly changing TCs. Figure 12 gives an example of the ORB framework as applied to Hurricane Katrina. The framework under which ORB is developed allows extension to other aspects of convective structure as well as other observational bands. The main steps in the construction of an ORB representation are as follows:

  1. Develop a summary of Tb structure reliant on a single threshold, an ORB statistic. For example, the average Tb can be computed at a single distance from the TC center, such as the mapping T¯(r) [see subsection 3a and McNeely et al. (2019)].

  2. Compute the summary for a dense set of threshold values. View the dense sampling as an approximation of a continuous function, the ORB function (see Figs. 1, 2, and 9 and section 3a).

  3. Compute the ORB functions for all stamps from the same TC basin (here, NAL or ENP). Apply PCA to the resulting set of ORB functions; the derived empirical orthogonal functions reflect the basin-specific variability of each ORB function. We refer to the projection of ORB functions onto the EOFs as ORB coefficients. The ORB coefficients are the time-varying inputs to statistical models, as in section 4 (see Figs. 4, 5, and 10 and section 3b).

  4. Interpret the EOFs. This interpretation relies on the design of the original ORB statistics (step 1). For example, PCA of the average temperature as a function of radius detects the presence of an eye–eyewall structure in its third EOF (see Figs. 4, 5, and 11 and section 3b).

One can use the ORB coefficients to visualize or analyze TC convective evolution. In the next section, we use the ORB coefficients as inputs to machine-learning methods that can sift through a large number of predictors and relate rapid change events to a subset of relevant predictors. The interpretability of the EOFs and associated ORB coefficients allows one to directly tie the results of such an analysis to meaningful aspects of convective structure.

Fig. 12. Full ORB evolution of Katrina (2005) with selected stamps: (top left) A map of the trajectory of Katrina with select points (A–H) labeled. (top right) The intensity over time, with the same points labeled. (middle) GOES imagery for the eight labeled points. (bottom) The full suite of ORB coefficients as a function of time, with the original time series (blue) smoothed by an exponentially weighted moving average (black) with decay chosen to achieve a roughness [average value of |d²f(t)/dt²|] of 0.2σ h⁻² across the basin. Where no blue curve is visible, the smoothed time series is nearly equal to the original time series.

4. Modeling rapid intensity change

The ultimate goals of ORB are to (i) advance scientific understanding of TC intensity change and (ii) support future improvements in TC intensity forecasting by providing a framework for analysis of IR imagery. To evaluate the effectiveness of the ORB framework, we here consider the problem of diagnosing the state of rapid change in a TC (where RI and RW events are treated separately). The problem of obtaining the state Yt given data X1:t collected up to and including time t is sometimes referred to as filtering (Cressie and Wikle 2015), to be distinguished from forecasting or predicting Yt based on data X1:t−1.

Following the operational convention of 24 h, we label every observation in our sample as either RI or non-RI (or analogously as either RW or non-RW). If a given 6-hourly observation lies within any 24-h moving window that was classified as RI or RW according to our 25-kt definition, then that 6-hourly observation is labeled as RI or RW. All other 6-hourly observations are labeled as non-RI/non-RW.
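A sketch of this labeling rule for a single TC is given below, assuming vmax holds its 6-h best track intensities in knots; the window logic simply marks every observation that falls inside any 24-h window meeting the ±25-kt criterion. The function and argument names are illustrative.

```python
# Minimal sketch of the RI/RW labeling rule for one TC.
import numpy as np

def label_rapid_change(vmax, threshold_kt=25, window_steps=4):
    n = len(vmax)
    ri = np.zeros(n, dtype=bool)
    rw = np.zeros(n, dtype=bool)
    # A 24-h window spans four consecutive 6-h steps; every observation lying
    # within a window whose intensity change meets the threshold is labeled.
    for t in range(n - window_steps):
        change = vmax[t + window_steps] - vmax[t]
        if change >= threshold_kt:
            ri[t:t + window_steps + 1] = True
        elif change <= -threshold_kt:
            rw[t:t + window_steps + 1] = True
    return ri, rw
```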

We build a probabilistic binary classifier (see section 4a for details) that predicts the current state of the storm with respect to rapid change events, using four different predictor sets or combinations of inputs.

The four predictor sets in our analysis are as follows:

  1. The SHIPS-only predictor set contains only the 10 SHIPS predictors we selected for this study, as well as TC latitude and longitude (see section 2 and Table 2).

  2. The ORB-only predictor set contains only ORB coefficients derived from Tb imagery.

  3. The SHIPS + ORB predictor set adds ORB coefficients to the SHIPS-only predictor set but does not add explicit interactions between the two.

  4. The SHIPS + Persistence set includes SHIPS predictors and intensity persistence (defined by SHIPS as the current intensity, the 6-h change in intensity, and the 12-to-6-h change in intensity).

These predictor sets have been chosen to demonstrate the value of adding ORB coefficients to SHIPS environmental predictors. The fourth predictor set provides a baseline for other predictor sets or a best-case scenario of classification performance. Because RI and RW events are defined using information partially contained in these persistence terms, one could argue that including such a term only makes sense in prognosis, but not when the goal (as in our case) is diagnosis and relating the current RI (or RW) state to meaningful predictors.

a. Methods: Probabilistic classification via logistic lasso

We would like to answer the following question: “Is the storm currently undergoing or entering a rapid change event, and what convective features are associated with such an event?” We proceed by constructing a statistical model—the logistic lasso—that uses the d components of a predictor set at time i (which we denote by a d-dimensional vector of predictors, xi) to classify the binary state Yi of the storm at that time. (For example, with the SHIPS-only predictor set in Table 2, we have a total of d = 48 predictors for 10 SHIPS variables plus latitude and longitude and their lagged changes, whereas d = 68 for the ORB-only predictor set.) Following the previously described labeling scheme, we say that Yi = 1 if the ith observation of the TC occurs during an RI (or RW) event, whereas Yi = 0 corresponds to the observed absence of RI (or RW).

We will use TCs prior to 2010 to learn the probability of an ongoing rapid change event, pi ≡ P(Yi = 1 | xi), and then use the learned statistical model to predict the probability of rapid change for TCs that occur during and after 2010; that is, our training set spans 1998–2009, whereas our test set spans 2010–16.1 These probabilities can be converted to binary (RI/non-RI or RW/non-RW) classes. Below, we describe the details of the logistic lasso regression model and how to properly fit and assess such a model.

1) Logistic lasso

Logistic regression is similar to traditional linear regression but bounds the mean response E(Y|x) = P(Y= 1|x), or equivalently the probabilities p(x) ≡ P(Y = 1|x), to values between 0 and 1. We also assume a different distribution for the response, in this case the binary response or “labels” Yi, than in traditional linear regression. Rather than directly fitting Yi using linear regression with independent and identically distributed (iid) errors εi from a normal distribution, we instead assume that the labels Yi (conditional on xi) are iid observations from a Bernoulli distribution with parameter pi. We then fit a linear function to logit transformations of pi; this results in the generalized linear model

$$\operatorname{logit}(p_i) \equiv \log\!\left(\frac{p_i}{1 - p_i}\right) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_d x_{id}, \tag{5}$$

where xij represents the jth predictor or the jth component of the input vector xi. As in standard linear regression, we can use maximum likelihood estimation (MLE) to find the best-fitting β = (β1, …, βd) coefficients. Note that our data come from time series of intensities; due to temporal correlations, the assumption of conditional independence Yi|xi may not hold for events close in time. Nevertheless, our logistic lasso model can still provide useful diagnosis.

The advantage of a linear model, or in this case a generalized linear model, is that it is easy to directly relate predictors to the mean response E(Yi|xi). Recall that the coefficients in a traditional linear regression model Yi = β0 + β1xi1 + ⋯ + βdxid + εi with normally distributed errors εi reflect how important a predictor is relative to the other variables in the model. In a linear model, the regression coefficient βj tells us that an increase in the jth variable xij by one unit, while holding all the other d − 1 variables constant, is on average associated with an increase (if βj > 0) or decrease (if βj < 0) in the response Yi by |βj| units. The linearity of logistic regression admits a similarly straightforward interpretation that relates the probabilities pi to the predictors. From Eq. (5) we have that an increase of xij by one unit, holding all other variables constant, is associated with a change of the log odds [i.e., the logarithm of the odds pi/(1 − pi)] of the event Yi = 1 by βj, which is equivalent to multiplying the odds by exp(βj).

Unfortunately, when d is large (i.e., when we have many predictors), the estimate β^ is highly variable. The statistical model also becomes difficult to interpret. To reduce the variance of the statistical model and to help interpretability, one typically reduces the size of the model by excluding variables that contribute little to the fit from the predictor set. The logistic lasso is able to sift through a large number of predictors by maximizing a penalized MLE problem

$$\hat{\beta} = \operatorname*{arg\,max}_{\beta}\left[\mathcal{L}(\beta; D) - \lambda \sum_{j=1}^{d} |\beta_j|\right], \tag{6}$$

where L(β; D) denotes the likelihood of the coefficients β = (β1, …, βd) when observing data D = {(x1, Y1), …, (xN, YN)}, the second term, λ∑_{j=1}^{d} |βj|, is the lasso penalty, and λ ≥ 0 is a tuning parameter. Adding this penalty for λ > 0 discourages large (positive or negative) coefficient values and results in all coefficients shrinking toward 0 as λ increases. To penalize all predictors equally, the predictors x are scaled (to standard deviation 1) and centered (to mean 0). For sufficiently large λ, small regression coefficients are set to 0. Increasing λ leads to smaller statistical models with fewer predictors. We fit our logistic lasso model to TCs between 1998 and 2009 via 10-fold cross validation; see appendix B for details. Note that in sections 4b and 4c we assess the final statistical models (with tuned parameters) on an independent test set not used in the cross validation; in our case, all TCs from 2010 to 2016.

2) Assessing classification performance

To evaluate the performance of the fitted statistical models in an objective way, we convert the predicted probabilities p̂i to binary classes Ci, for which we have a ground truth. Here, Ci = 1 denotes an ongoing RI (or RW) event and Ci = 0 denotes the absence of such an event. We perform this conversion by choosing a probability cutoff p; for observations xi for which the probability p̂i > p, the classifier predicts onset of or ongoing rapid change (Ci = 1); if p̂i < p, the classifier predicts that there is no rapid change (Ci = 0). The cutoff is often fixed to a default value of p = 0.5, which makes sense in the case of balanced data (i.e., when the two classes occur with the same proportions) but not when one class represents rare events such as RI and RW. As we change the value of p in the range [0, 1], we can make the resulting classifier more or less sensitive to the event, trading between higher true positive rates at lower p (TPR; the fraction of events correctly identified as such) and higher true negative rates at higher p (TNR; the fraction of nonevents correctly identified as such). A so-called receiver operating characteristic (ROC) curve describes the properties of a classifier by showing its TPR and TNR values at different cutoffs p (Fig. 13). An ideal classifier would have a ROC curve hugging the top-left corner, which represents a 100% true positive and true negative rate. (The trivial SHIPS + Persistence model tends to perform best, as expected.) A poor classifier would be near the gray diagonal, which corresponds to no better than chance. Hence, a common way of quantifying the performance of a classifier for all possible choices of thresholds is to compute the area under the ROC curve (AUC). In our study, we include both ROC curves and AUC values. In addition, to address the issue of unbalanced data, we choose the threshold p of the binary classifiers in section 4b to maximize the so-called balanced accuracy, defined as BA = (TPR + TNR)/2 (equivalent to the Peirce skill score up to an affine transformation; Brodersen et al. 2010).
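A minimal sketch of this evaluation, assuming p_hat and y_test from the previous step: compute the ROC curve and AUC on the test set, then pick the cutoff that maximizes balanced accuracy.

```python
# Minimal sketch of ROC/AUC evaluation and balanced-accuracy cutoff selection.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

auc = roc_auc_score(y_test, p_hat)

fpr, tpr, cutoffs = roc_curve(y_test, p_hat)
tnr = 1.0 - fpr
balanced_accuracy = (tpr + tnr) / 2.0
best_cutoff = cutoffs[np.argmax(balanced_accuracy)]  # threshold for binary classes

classes = (p_hat > best_cutoff).astype(int)          # C_i = 1 predicts ongoing RI (or RW)
```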

Fig. 13. Classification by logistic lasso: ROC curve comparison across all four predictor sets, by basin and by type of rapid change. See Fig. 6 and Table 1 for AUC metrics with bootstrap confidence intervals and significance tests of differences in AUC between statistical models.

b. Results: Logistic lasso

1) Classification by logistic lasso

We fit the lasso coefficients β^ in Eq. (5) for all four predictor sets in Table 2. Figure 13 and Fig. 6 (top) summarize the final lasso classification results on test data by type of rapid change (RI or RW) and by basin (ENP or NAL); with four settings and four predictor sets there are 16 different models. In Fig. 6, the square markers represent the AUC on TCs between 2010 and 2016 (the test sample) for statistical models fitted on TCs between 1998 and 2009 (the train sample). We assess the uncertainty in the AUC estimates (conditional on the train sample) by resampling the test sample 250 times with replacement. The figure shows a 95% pivotal bootstrap confidence interval of the AUC for each model fitted with logistic regression or one of the nonlinear classifiers discussed in section 4c.
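A sketch of the pivotal bootstrap interval described above, assuming test-set labels y_test and predicted probabilities p_hat, with 250 resamples of the test sample as in the text; the function name is illustrative.

```python
# Minimal sketch of a pivotal (basic) bootstrap confidence interval for test-set AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_ci(y_test, p_hat, n_boot=250, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    auc_hat = roc_auc_score(y_test, p_hat)
    boot = []
    n = len(y_test)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample the test set with replacement
        if len(np.unique(y_test[idx])) < 2:      # AUC needs both classes present
            continue
        boot.append(roc_auc_score(y_test[idx], p_hat[idx]))
    boot = np.array(boot)
    # Pivotal interval: reflect the bootstrap quantiles around the point estimate.
    hi_q, lo_q = np.quantile(boot, [1 - alpha / 2, alpha / 2])
    return auc_hat, 2 * auc_hat - hi_q, 2 * auc_hat - lo_q   # (estimate, lower, upper)
```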

According to a permutation test2 of differences in AUC, the statistical models based on the satellite-derived ORB-only predictor set achieve a classification accuracy comparable to the models based on the SHIPS-only environmental predictor set. This result suggests that ORB is able to adequately assess aspects of evolving convective structure relevant to intensity change by directly capturing the structure of clouds (rather than the environment in which those structures evolve). Furthermore, ORB predictors used as a complement to traditional environmental predictors, as in the SHIPS + ORB predictor set, may further improve accuracy. The significance tests for RI/non-RI in the NAL basin in particular underscore the potential for the ORB framework to complement environmental predictors.

In summary, our analysis implies that comprehensive study of GOES IR can provide a high-resolution view of the evolution of TC internal structure, and that information complements traditional NWP-derived environmental predictors. These results appear robust across widely different classification methods (see section 4c and Fig. 6).

2) Regression coefficients in the fitted logistic lasso

As mentioned, the logistic lasso model [Eq. (5)] allows for fast variable selection and straightforward interpretation of the regression coefficients β^. Figure 7 shows the fitted nonzero lasso coefficients (20 for RI/non-RI and 37 for RW/non-RW) in the NAL basin; the corresponding “selected” variables form a subset of the 116 SHIPS + ORB predictors.

The regression coefficient for RAD3 in this model is −0.53, meaning that a 1σ increase in RAD3 (the clarity of the eye–eyewall structure) is associated with a 0.53 decrease in the predicted log odds of the rapid change event, or a decrease of a factor of exp(−0.53) = 0.59 in the predicted odds, given that all other predictors are held constant. Likewise, the coefficient for U200 in that same fit is −0.75; the same 1σ increase in U200 is associated with a decrease in the log odds by 0.75, and the odds by a factor of exp(−0.75) = 0.47. We center the predictors (in addition to scaling to σ = 1) so that a predictor value of zero corresponds to the sample mean value in that basin.

When predictors with high collinearity are present, a traditional multiple regression will suffer from inflated coefficient variance. Lasso suffers from a different version of the problem, since it will often select one variable out of a group with strongly correlated variables, returning regression coefficients of 0 for the rest. For this reason, the recovered support (the set of predictors with nonzero regression coefficients in the fit) of the final fitted classifier may exclude predictors expected to appear; for example, OHC does not appear in the logistic lasso for ENP RI, but that does not mean it is not important to the intensity-change process. Instead, other variables (such as RSST) may contain similar information with respect to intensity change as OHC. This effect is reversed for ENP RW, where RSST is dropped from the model while OHC remains.

A positive regression coefficient means that above-average values of a predictor are associated with increased log odds of rapid intensity change, while negative coefficients mean that above-average values of a predictor are associated with reduced log odds. Hence, rapid intensity change is more likely when the sign of the predictor matches the sign of the coefficient. In the left panel, the classifier for RI tends to predict ongoing RI events when (i) current eye–eyewall structure is weak (negative RAD3 regression coefficient), (ii) eye–eyewall structure is strengthening (positive Δ24RAD3 regression coefficient), (iii) current 200-hPa zonal winds are low (negative U200 regression coefficient), (iv) current wind shear is low (negative SHDC regression coefficient), and (v) current midlevel humidity is elevated (positive RHMD regression coefficient). In the center panel, the RW classifier uses a wider range of weaker effects, predicting RW for cases of (i) higher current interior symmetry than exterior symmetry (negative SHAPE2 regression coefficient), (ii) recent decay of interior symmetry (positive Δ6SHAPE2 and Δ12SHAPE2 regression coefficients), (iii) decaying eye–eyewall structure (negative Δ24RAD3 regression coefficient), and (iv) strong current eye–eyewall structure (positive RAD3 regression coefficient).

The SHIPS + ORB results suggest a more complex relationship between structure, environment, and intensity in the case of RW than in the case of RI (via a large number of small to medium-sized regression coefficients as opposed to a few dominant effects). RI is thought to be strongly driven by internal processes, while RW results from a more complex relationship between the TC and its environment. This is reflected in the logistic lasso fitted under the ORB-only setting (shown in the online supplemental material). RI is dominated by radial profile predictors (for core structure), SHAPE2 (which reflects the core symmetry), and SKEW1 (for overall skew of the cloud tops, which is influenced by upper-level winds and shear). RW encompasses a wider range of regression coefficients, including global measures such as SIZE, various DAV predictors, and change in overall Tb (Δ24RAD1), alongside the core structure measurements.

The logistic lasso models fitted for the ENP basin (see the online supplemental material) show similar results, with differences largely explained by the narrow region of TC activity. Outside of this region, SSTs decrease and conditions become hostile to TCs. The logistic lasso for RI fitted under the SHIPS + ORB predictor set predicts RI when (i) SST is high (RSST), (ii) shear is low (SHDC), (iii) the TC is farther south (LAT), (iv) Tb is low and dropping, particularly in the core (RAD2; RAD1; Δ24RAD1), (v) the eye–eyewall structure is intensifying (Δ24RAD3), and (vi) the storm has access to ample upper-level moisture (RHHI).

c. Results: Nonlinear classifiers

To determine the robustness of our results across classification methods as well as to benchmark the logistic lasso, we implemented two additional, nonlinear classification methods: random forests (RF) and gradient-boosted classification trees (GBCT). Both RF and GBCT can capture complex relationships between variables, such as interactions between predictors and nonlinear relationships between predictors and rapid change. First, the ranger implementation of random forests (Breiman 2001; Wright and Ziegler 2015) tallies class “votes” from many deeply grown classification trees with variables chosen from a random subset at each branch, sampling the training data with replacement for each tree.3 Second, the Xgboost implementation of gradient-boosted classification trees (Friedman 2002; Chen and Guestrin 2016) iteratively builds trees of a fixed size, fitting the error remaining after the addition of the previous tree.4
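
For concreteness, a minimal sketch of these two benchmarks is shown below. The paper itself uses the ranger and Xgboost packages in R, so the scikit-learn and xgboost calls here are stand-ins; the hyperparameters mirror footnotes 3 and 4, and X_train, y_train, and X_test are assumed to be the standardized predictor matrices and labels.

```python
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

def fit_nonlinear_benchmarks(X_train, y_train, X_test):
    """Return test-set class probabilities from RF and GBCT benchmarks."""
    rf = RandomForestClassifier(
        n_estimators=10_000,      # 10 000 deeply grown trees
        max_features="sqrt",      # d^(1/2) predictors sampled at each split
        criterion="gini",
        n_jobs=-1,
    ).fit(X_train, y_train)
    gbct = XGBClassifier(
        objective="binary:logistic",  # logistic loss
        max_depth=2,                  # depth-2 trees
        learning_rate=0.001,
        n_estimators=2000,            # the number of trees varies by model in the paper
    ).fit(X_train, y_train)
    return rf.predict_proba(X_test)[:, 1], gbct.predict_proba(X_test)[:, 1]
```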

More complex machine-learning methods (such as RF, GBCT, and neural networks) have a higher capacity to fit data well but require larger training sample sizes to achieve a better fit compared to simpler linear models (such as the logistic lasso). Indeed, in our settings the logistic lasso either matches or outperforms RF and GBCT (Fig. 6; Table 1). The performance of the two higher-capacity classifiers here appears to be limited by the small size of the historical sample and the few occurrences of rapid changes within that sample.

With regard to robustness of results across methods: The relative performance of the 16 statistical models fitted for different rapid change types, basins, and predictor sets is consistent across all three classification methods (shown as rows in Fig. 6). In addition, Figs. 7 and 8 in the online supplemental material demonstrate that the variable selection and ranking of predictor importances remain stable across these rather different machine-learning methods; for example, the Gini importance in RF tends to agree with the magnitude of the lasso coefficients. Note that RF and GBCT account to some extent for correlation between predictors, via different criteria for variable selection. Our analysis indicates that our main results are model agnostic. We conclude that the logistic lasso is a good statistical model for rapid change in our setting because of its statistical performance for the data at hand, fast variable selection, robustness of qualitative results, and added interpretability.

5. Conclusions and future work

a. Conclusions

High spatial and temporal resolution images from GOES platforms hold promise to support improvements in understanding and forecasting TC intensity change in combination with statistical tools that can “unlock” the rich information they contain. The ORB framework provides a method to generate a set of ORB coefficients that target aspects of TC spatial structure in IR imagery that are meaningful to scientists and forecasters and enable both application and interpretation of powerful statistical methods. In this work, we use the logistic lasso—a generalized linear regression model—to capture the relationship between IR imagery and rapid change events. The logistic lasso can accept a large number of explanatory variables (including the ORB coefficients from PCA) as inputs and then automatically select a subset of variables relevant to rapid change with fitted regression coefficients that give a measure of the relative importance of the selected inputs.

We apply our ORB framework to GridSat-GOES IR imagery and base the computations on novel (bulk morphology) as well as existing (radial profiles and organization) ORB statistics. Our approach recovers a low-dimensional phase space for each structural feature, enabling visualization of the trajectory of evolving TC convective structure over time as well as the application of statistical analysis methods such as the logistic lasso.

The ORB coefficients show promise in classifying rapid intensity-change events on their own (as in the ORB-only predictor set); however, for best results these features should be used alongside SHIPS predictors. SHIPS and ORB predictors have complementary strengths: SHIPS values provide current estimates of the TC environment, and ORB features capture the history and current structure of the TC itself, which likely contain important markers of ongoing and future changes in TC intensity. In addition, the extended historical record available for both sets of predictors will support future studies and guidance tools in analyzing and leveraging the relationship between the environment and TC structure as revealed by ORB.

Though nonlinear machine-learning techniques are capable of fitting more complex relationships, they do not seem to return better predictions for these data. There is a trade-off between higher-capacity, more flexible methods and the larger training samples they require. The number of unique RI and RW events in our study is of the same order as the number of predictors (Table 3). Linear methods place more structural assumptions on the statistical model and tend to perform better in a low-sample-size setting. This property, in combination with variable selection and straightforward interpretation of regression coefficients, makes the logistic lasso an ideal tool to study the relationship between onset of or ongoing rapid change events and environmental and structural predictors.

b. Future work

The ORB framework is algorithmic, making it easily extensible to other aspects of convective structure in IR imagery and to other bands such as water vapor (WV) or derived products such as differenced IR-WV imagery (Olander and Velden 2009). Such extensions may better probe smaller-scale, transient structures in high-resolution IR imagery and identify spatial and/or temporal patterns in overshooting tops revealed by differenced IR-WV imagery. We expect this framework to be able to extract similar features from the higher-resolution ABI observations, and we will investigate ORB’s performance once more data become available.

Our logistic lasso approach has demonstrated the value of GOES-derived ORB coefficients in a simplified setting: classifying binary RI versus non-RI (or RW vs non-RW) events under the assumption of independence of nearby observations in time. Future work will relax these assumptions by (i) treating intensity change as a continuous quantity rather than discretizing it into RI/non-RI or RW/non-RW and (ii) accounting for temporal dependence between observations by creating a time series model that accounts for persistence in environmental and GOES-derived predictors alike. Such statistical models could augment existing intensity guidance schemes by identifying spatial–temporal markers of upcoming intensity change in IR imagery, and provide the means of adding GOES-derived evolutionary history to NWP-derived predictions of a TC’s future environment.

Finally, this work utilizes a linear model without interaction terms. Rapid intensity-change events are driven by the interplay of internal TC processes and the TC environment; future analyses will examine the effects of such interactions, particularly between ORB coefficients and environmental predictors, by adding interaction terms to the statistical model. As the suite of predictors grows larger and more complex, the logistic lasso can be modified (such as via the group lasso penalty of Yuan and Lin 2006) to consider groups of predictors rather than individual predictors, resulting in more intuitive models.

Acknowledgments

Authors Lee and McNeely were supported in part by the National Science Foundation under DMS-1520786, and author Wood was supported by the Mississippi State University Office of Research and Economic Development. We also thank Dr. David John Gagne for his helpful comments on the paper and the anonymous reviewers whose comments greatly improved the organization and clarity of this paper.

APPENDIX A

Details on the Calculation of EOFs via PCA

We first center our data so that the sample mean of each ORB function is zero: $(1/n)\sum_{i=1}^{n} z_i = 0$, where a densely sampled ORB function evaluated at $d$ threshold values is here represented by the (random) vector $z$, and $n$ denotes the number of observations of $z$ in the basin of interest. The centered observations $z_1, z_2, \ldots, z_n$ then form the rows of an $n \times d$ matrix $Z$. Note that the sample covariance matrix of $z$ is given by the $d \times d$ matrix $S = (1/n) Z^{\mathrm{T}} Z$.

In PCA, one computes the eigenvectors of the covariance matrix $S$. These vectors, which we refer to as EOFs, form a basis for the original data. Let $v_1, \ldots, v_K \in \mathbb{R}^d$ denote the eigenvectors that correspond to the $K < d$ largest eigenvalues. In this work, we refer to $v_i$ as the $i$th EOF, $f_i(x)$. In a principal component (PC) map, the projections of the data onto these vectors are used as new coordinates; that is, the PC map of the observation $z_i$ is given by

$z_i \mapsto (z_i \cdot v_1, \ldots, z_i \cdot v_K) = (\alpha_1, \ldots, \alpha_K),$

where the dot represents a scalar product of vectors. In the ORB framework, the scalar projections (α1, …, αK) are our ORB coefficients. We can reconstruct the approximate ORB function with ORB coefficients and EOFs according to

$f(x) \approx \bar{f}(x) + \alpha_1 f_1(x) + \cdots + \alpha_K f_K(x),$

as demonstrated in Fig. A1. The PC map with ORB coefficients (α1, …, αK) acts as a low-dimensional “phase space” for an aspect of convective structure; see Fig. 11 for an example.

Fig. A1.

Reconstructing an ORB function: The radial profile for Edouard (Fig. 9) is projected onto the first three EOFs for radial structure (Fig. 4), resulting in K = 3 ORB coefficients (−24.2, −55.7, and 102.1). (top) The contribution of each EOF to the reconstruction of $\bar{T}(r)$, i.e., the weighted EOFs $\alpha_i f_i(r)$ for i = 1, 2, and 3. (bottom) To approximate the original radial profile (“Original”; thick solid black line) we add these three contributions to the basin mean (“Mean”; solid gray line). With all K = 3 contributions (“Mean+1+2+3”; green dashed line), we find a close approximation to the original function, capturing the overall structure of the observed ORB function $\bar{T}(r)$.


Algorithmically, the PC map is easy to compute using a singular value decomposition (SVD) of Z:

$Z = U D V^{\mathrm{T}}.$

Here $U$ is an $n \times d$ orthogonal matrix, $V$ is a $d \times d$ orthogonal matrix (where the columns are eigenvectors $v_1, \ldots, v_d$ of $S$), and $D$ is a $d \times d$ diagonal matrix with diagonal elements $a_1 \ge a_2 \ge \cdots \ge a_d \ge 0$ known as the singular values of $Z$. The first $K$ EOFs are given by the first $K$ columns of $V$. Because $ZV = UD$, the PC embedding of the $i$th data point in $K$ dimensions—that is, the set of ORB coefficients $(\alpha_1, \ldots, \alpha_K)$—is given by the first $K$ elements of the $i$th row of $UD$.
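
A minimal numpy sketch of this computation, assuming orb_functions is an (n × d) array holding one ORB function sampled at d thresholds for n observations (the variable names are illustrative):

```python
import numpy as np

def orb_coefficients(orb_functions, K=3):
    """EOFs and ORB coefficients via the SVD of the centered data matrix Z."""
    mean = orb_functions.mean(axis=0)
    Z = orb_functions - mean                        # center each threshold value
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    eofs = Vt[:K]                                   # first K EOFs (rows of V^T)
    coeffs = U[:, :K] * s[:K]                       # first K columns of UD
    return mean, eofs, coeffs

def reconstruct(mean, eofs, coeffs_i):
    """Approximate one ORB function: f(x) ≈ mean(x) + α1 f1(x) + ... + αK fK(x)."""
    return mean + coeffs_i @ eofs
```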

APPENDIX B

Fitting the Logistic Lasso

We fit the logistic lasso model on TCs prior to 2010 using the “glmnet” package for R (Friedman et al. 2009); we select λ via tenfold cross validation (CV). This procedure fits β on 9 of the 10 folds, holding out a rotating 10th validation fold. The tuning parameter λ is chosen to maximize the fit on the held-out validation folds (see Hastie et al. 2005 for details). CV prevents “overfitting” (fitting too closely to training data and hence performing poorly on future observations) and “underfitting” (inadequately capturing generalizable features in the training data). In our setting, CV results in a different λ value for each of the 16 fits. When choosing CV folds, we randomly sample TCs rather than observations in order to account for temporal dependence within a single TC’s lifespan. Note that we verify the final statistical models (with tuned parameters) on an independent test set not used in the cross validation; in our case, this test set consists of all TCs from 2010 to 2016.
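
A rough Python analog of this TC-grouped cross validation is sketched below (the paper's fits use glmnet in R); the synthetic X, y, and storm_id arrays stand in for the real predictors, labels, and storm identifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, GroupKFold

rng = np.random.default_rng(0)
n, d, n_storms = 600, 20, 40
X = rng.normal(size=(n, d))                          # placeholder predictor matrix
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))  # placeholder RI/non-RI labels
storm_id = rng.integers(0, n_storms, size=n)         # which TC each observation belongs to

lasso = LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000)
grid = GridSearchCV(
    lasso,
    param_grid={"C": np.logspace(-3, 2, 20)},        # C is the inverse of the lasso penalty
    cv=GroupKFold(n_splits=10),                      # folds split by storm, not by observation
    scoring="roc_auc",
)
grid.fit(X, y, groups=storm_id)                      # a TC's whole lifespan stays in one fold
print(grid.best_params_)
```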

APPENDIX C

Testing for Differences in AUC for Fitted Statistical Models

a. Hypotheses

We formally test two of the conjectures stated in the introduction to this work.

  1. Test 1: Our first conjecture is that ORB-only does at least as well as SHIPS-only. In words, we ask, “Is there evidence that ORB-only would do worse than SHIPS-only?” That is, test

    $H_0: \mathrm{AUC}_{\text{ORB-only}} = \mathrm{AUC}_{\text{SHIPS-only}}$ versus $H_a: \mathrm{AUC}_{\text{ORB-only}} < \mathrm{AUC}_{\text{SHIPS-only}}$.
  2. Test 2: Our second conjecture is that SHIPS + ORB may improve upon SHIPS-only. In words, we ask, “Is there evidence SHIPS + ORB improves upon SHIPS-only?” That is, test

    $H_0: \mathrm{AUC}_{\text{SHIPS+ORB}} = \mathrm{AUC}_{\text{SHIPS-only}}$ versus $H_a: \mathrm{AUC}_{\text{SHIPS+ORB}} > \mathrm{AUC}_{\text{SHIPS-only}}$.

b. Test statistics

Let X1, …, XN be the probabilistic predictions from the ORB-only model for test data, and let Y1, …, YN be the predictions from the SHIPS-only model for the same N test points. Our test statistic is the difference in AUC values between the two statistical models,

$T_1(X_1, \ldots, X_N, Y_1, \ldots, Y_N) = \mathrm{AUC}(X_1, \ldots, X_N) - \mathrm{AUC}(Y_1, \ldots, Y_N).$

Likewise, for the second test, let X1, …, XN be the probabilistic predictions from the SHIPS + ORB model for the same N test data and Y1, …, YN be the predictions for SHIPS-only as before. Then, define the second test statistic as

$T_2(X_1, \ldots, X_N, Y_1, \ldots, Y_N) = \mathrm{AUC}(X_1, \ldots, X_N) - \mathrm{AUC}(Y_1, \ldots, Y_N).$

c. Permutation tests

To assess the significance of an observed difference in AUC, we estimate the distribution of $T_i$ under the null by forming $B = 1000$ permutations of the sample points. That is, for $b = 1, \ldots, B$, randomly sample $\tilde{X}_1^b, \ldots, \tilde{X}_N^b$ from $\{X_1, \ldots, X_N, Y_1, \ldots, Y_N\}$ without replacement, and assign the remaining $N$ predictions to $\tilde{Y}_1^b, \ldots, \tilde{Y}_N^b$. For each of the $B$ permutations, compute $\tilde{T}_i^b = T_i(\tilde{X}_1^b, \ldots, \tilde{X}_N^b, \tilde{Y}_1^b, \ldots, \tilde{Y}_N^b)$. Our estimated p values for the two tests are given by

$\hat{p}_1 = \left[1 + \sum_{b=1}^{B} I(\tilde{T}_1^b < T_1)\right] / (B + 1) \quad\text{and}\quad \hat{p}_2 = \left[1 + \sum_{b=1}^{B} I(\tilde{T}_2^b > T_2)\right] / (B + 1),$

where $I(\cdot)$ is the indicator function, taking the value 1 when the statement is true and the value 0 when it is false.
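
A sketch of this permutation test in Python is given below, assuming y holds the test labels and scores_a, scores_b hold the two models' probabilistic predictions for the same test points; it pools the 2N (prediction, label) pairs and randomly splits them, as described above. Here alternative="less" corresponds to test 1 and alternative="greater" to test 2.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_permutation_test(y, scores_a, scores_b, B=1000, alternative="less", seed=0):
    """Permutation p value for a one-sided difference in AUC between two models."""
    rng = np.random.default_rng(seed)
    t_obs = roc_auc_score(y, scores_a) - roc_auc_score(y, scores_b)
    pooled_scores = np.concatenate([scores_a, scores_b])
    pooled_labels = np.concatenate([y, y])
    n = len(y)
    count = 0
    for _ in range(B):
        idx = rng.permutation(2 * n)             # reassign the pooled pairs at random
        a, b = idx[:n], idx[n:]
        t_b = (roc_auc_score(pooled_labels[a], pooled_scores[a])
               - roc_auc_score(pooled_labels[b], pooled_scores[b]))
        count += (t_b < t_obs) if alternative == "less" else (t_b > t_obs)
    return (1 + count) / (B + 1)
```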

REFERENCES

  • Belkin, M., and P. Niyogi, 2004: Semi-supervised learning on Riemannian manifolds. Mach. Learn., 56, 209–239, https://doi.org/10.1023/B:MACH.0000033120.25363.1e.

  • Breiman, L., 2001: Random forests. Mach. Learn., 45, 5–32, https://doi.org/10.1023/A:1010933404324.

  • Brodersen, K. H., C. S. Ong, K. E. Stephan, and J. M. Buhmann, 2010: The balanced accuracy and its posterior distribution. 2010 20th Int. Conf. on Pattern Recognition, Istanbul, Turkey, IEEE, 3121–3124, https://doi.org/10.1109/ICPR.2010.764.

  • Chen, T., and C. Guestrin, 2016: Xgboost: A scalable tree boosting system. Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, San Francisco, CA, ACM, 785–794, https://doi.org/10.1145/2939672.2939785.

  • Conselice, C. J., 2014: The evolution of galaxy structure over cosmic time. Annu. Rev. Astron. Astrophys., 52, 291–337, https://doi.org/10.1146/annurev-astro-081913-040037.

  • Cressie, N., and C. K. Wikle, 2015: Statistics for Spatio-Temporal Data. John Wiley and Sons, 624 pp.

  • DeMaria, M., 2018: SHIPS developmental database file format and predictor descriptions. Colorado State University Tech. Rep., 6 pp., http://rammb.cira.colostate.edu/research/tropical_cyclones/ships/docs/ships_predictor_file_2018.doc.

  • DeMaria, M., and J. Kaplan, 1999: An updated Statistical Hurricane Intensity Prediction Scheme (SHIPS) for the Atlantic and eastern North Pacific basins. Wea. Forecasting, 14, 326–337, https://doi.org/10.1175/1520-0434(1999)014<0326:AUSHIP>2.0.CO;2.

  • DeMaria, M., C. R. Sampson, J. A. Knaff, and K. D. Musgrave, 2014: Is tropical cyclone intensity guidance improving? Bull. Amer. Meteor. Soc., 95, 387–398, https://doi.org/10.1175/BAMS-D-12-00240.1.

  • Dunion, J. P., C. D. Thorncroft, and C. S. Velden, 2014: The tropical cyclone diurnal cycle of mature hurricanes. Mon. Wea. Rev., 142, 3900–3919, https://doi.org/10.1175/MWR-D-13-00191.1.

  • Dvorak, V. F., 1975: Tropical cyclone intensity analysis and forecasting from satellite imagery. Mon. Wea. Rev., 103, 420–430, https://doi.org/10.1175/1520-0493(1975)103<0420:TCIAAF>2.0.CO;2.

  • Friedman, J. H., 2002: Stochastic gradient boosting. Comput. Stat. Data Anal., 38, 367–378, https://doi.org/10.1016/S0167-9473(01)00065-2.

  • Friedman, J. H., T. Hastie, and R. Tibshirani, 2009: glmnet: Lasso and elastic-net regularized generalized linear models, version 1.1-4. R package, http://CRAN.R-project.org/package=glmnet.

  • Hastie, T., R. Tibshirani, J. Friedman, and J. Franklin, 2005: The Elements of Statistical Learning: Data Mining, Inference and Prediction. 2nd ed. Springer, 533 pp.

  • Kaplan, J., and M. DeMaria, 2003: Large-scale characteristics of rapidly intensifying tropical cyclones in the North Atlantic basin. Wea. Forecasting, 18, 1093–1108, https://doi.org/10.1175/1520-0434(2003)018<1093:LCORIT>2.0.CO;2.

  • Kaplan, J., M. DeMaria, and J. A. Knaff, 2010: A revised tropical cyclone rapid intensification index for the Atlantic and eastern North Pacific basins. Wea. Forecasting, 25, 220–241, https://doi.org/10.1175/2009WAF2222280.1.

  • Kaplan, J., and Coauthors, 2015: Evaluating environmental impacts on tropical cyclone rapid intensification predictability utilizing statistical models. Wea. Forecasting, 30, 1374–1396, https://doi.org/10.1175/WAF-D-15-0032.1.

  • Klotzbach, P. J., S. G. Bowen, R. Pielke, and M. Bell, 2018: Continental U.S. hurricane landfall frequency and associated damage: Observations and future risks. Bull. Amer. Meteor. Soc., 99, 1359–1376, https://doi.org/10.1175/BAMS-D-17-0184.1.

  • Knapp, K. R., and S. L. Wilkins, 2018: Gridded satellite (GridSat) GOES and CONUS data. Earth Syst. Sci. Data, 10, 1417–1425, https://doi.org/10.5194/essd-10-1417-2018.

  • Landsea, C. W., and J. L. Franklin, 2013: Atlantic hurricane database uncertainty and presentation of a new database format. Mon. Wea. Rev., 141, 3576–3592, https://doi.org/10.1175/MWR-D-12-00254.1.

  • McNeely, T., A. B. Lee, D. Hammerling, and K. Wood, 2019: Quantifying the spatial structure of tropical cyclone imagery. NCAR Tech. Note NCAR/TN-557+STR, 18 pp., https://doi.org/10.5065/5frb-ws04.

  • Olander, T. L., and C. S. Velden, 2007: The advanced Dvorak technique: Continued development of an objective scheme to estimate tropical cyclone intensity using geostationary infrared satellite imagery. Wea. Forecasting, 22, 287–298, https://doi.org/10.1175/WAF975.1.

  • Olander, T. L., and C. S. Velden, 2009: Tropical cyclone convection and intensity analysis using differenced infrared and water vapor imagery. Wea. Forecasting, 24, 1558–1572, https://doi.org/10.1175/2009WAF2222284.1.

  • Olander, T. L., and C. S. Velden, 2019: The advanced Dvorak technique (ADT) for estimating tropical cyclone intensity: Update and new capabilities. Wea. Forecasting, 34, 905–922, https://doi.org/10.1175/WAF-D-19-0007.1.

  • Pineros, M. F., E. A. Ritchie, and J. S. Tyo, 2008: Objective measures of tropical cyclone structure and intensity change from remotely sensed infrared image data. IEEE Trans. Geosci. Remote Sens., 46, 3574–3580, https://doi.org/10.1109/TGRS.2008.2000819.

  • Politis, D. N., and J. P. Romano, 1994: The stationary bootstrap. J. Amer. Stat. Assoc., 89, 1303–1313, https://doi.org/10.1080/01621459.1994.10476870.

  • Rhome, J. R., C. A. Sisko, and R. D. Knabb, 2006: On the calculation of vertical shear: An operational perspective. 27th Conf. on Hurricanes and Tropical Meteorology, Monterey, CA, Amer. Meteor. Soc., 14A.4, https://ams.confex.com/ams/27Hurricanes/techprogram/paper_108724.htm.

  • Sanabia, E. R., B. S. Barrett, and C. M. Fine, 2014: Relationships between tropical cyclone intensity and eyewall structure as determined by radial profiles of inner-core infrared brightness temperature. Mon. Wea. Rev., 142, 4581–4599, https://doi.org/10.1175/MWR-D-13-00336.1.

  • Schmit, T. J., P. Griffith, M. M. Gunshor, J. M. Daniels, S. J. Goodman, and W. J. Lebair, 2017: A closer look at the ABI on the GOES-R series. Bull. Amer. Meteor. Soc., 98, 681–698, https://doi.org/10.1175/BAMS-D-15-00230.1.

  • Wood, K. M., and E. A. Ritchie, 2015: A definition for rapid weakening of North Atlantic and eastern North Pacific tropical cyclones. Geophys. Res. Lett., 42, 10 091–10 097, https://doi.org/10.1002/2015GL066697.

  • Wright, M. N., and A. Ziegler, 2015: ranger: A fast implementation of random forests for high dimensional data in C++ and R. J. Stat. Software, 77, https://doi.org/10.18637/jss.v077.i01.

  • Yuan, M., and Y. Lin, 2006: Model selection and estimation in regression with grouped variables. J. Roy. Stat. Soc., 68B, 49–67, https://doi.org/10.1111/j.1467-9868.2005.00532.x.
1 While the statistical models use train-test data splitting, the EOFs are computed for the full dataset. Such semisupervised learning approaches, which use all data to learn a basis for dimension reduction but then predict on test data only, are common in statistical machine learning. See, for example, Belkin and Niyogi (2004) for classification on nonlinear manifolds.

2 See appendix C for details on the permutation testing.

3 The RF models use Gini impurity for node splitting, with $d^{1/2}$ predictors sampled at each node. Each forest contains 10 000 trees.

4 The GBCT models use logistic loss and a learning rate of 0.001 on depth-2 trees. The number of trees varies for each model.

  • Fig. 1. DAV(r) as an ORB function for organization: DAV as an ORB function of the threshold r and deviation angles for each image in Fig. 8, i.e., Edouard (2014) (solid line and left inset) and Nicole (2016) (dashed line and right inset).

  • Fig. 2. SIZE and SKEW as ORB statistics for bulk morphology: (left) Example level set for a threshold of c = −20°C on Edouard’s Tb (Fig. 8). (right) Level sets for {0°, −20°, −40°, −60°C} (in descending order, with −20°C repeated). SIZE(c) is the area covered by a level set; here, the size is 400 000 km², or 20% of the stamp area. SKEW(c) is the displacement of a level set’s center of mass (yellow; 34 km from center) normalized by the average displacement of points in the set (green; 92 km from center); here, the skew is 0.37 west-northwest.

  • Fig. 3. EOFs and mean ORB functions: EOFs for all 6 ORB functions in the (top) NAL and (middle) ENP, with (bottom) sample means with 95% pointwise confidence intervals estimated by a stationary bootstrap (Politis and Romano 1994). The computed EOFs differ significantly only in the sign of the second ORB coefficient for eccentricity (ECC2), which merely flips the direction of interpretation. The difference between basins is largely contained in the sample means. Despite these differences, the same qualitative structures persist between basins.

  • Fig. 4. EOFs for radial profiles: Results from PCA of NAL TCs. The first three EOFs capture 90% of the variability in the ORB functions for radial structure and describe the three primary orthogonal shapes present in their profiles; see section 3b for details.

  • Fig. 5. Hourly SIZE functions for Hurricane Katrina (2005). Lighter colors indicate higher intensities. The first two EOFs in the inset capture 94% of the variance in the size functions.

  • Fig. 6. Binary classification by logistic lasso and nonlinear classifiers: The area under the ROC curve (AUC; white markers) for test data, with bootstrap-estimated 95% confidence intervals (colored intervals) computed for each of the 16 event–basin–predictor type combinations using the classifiers (top) LASSO, (middle) RF, and (bottom) GBCT. For all settings, ORB-only has a performance on par with SHIPS-only. Qualitative results persist across different classifiers. See text for details.

  • Fig. 7. Interpreting regression coefficients in logistic lasso with the SHIPS + ORB predictor set (NAL): Regression coefficients for SHIPS + ORB lasso models for (left) RI events and (center) RW events, as well as (right) the difference between the RI and RW regression coefficients in the NAL basin. Variables are ordered vertically by RI − RW such that an increase in the variable with the largest RI − RW coefficient, holding all other variables constant, is associated with an increase in the probability of RI much more strongly than an increase in the probability of RW. See the text for more discussion.

  • Fig. 8. GOES Tb stamps for (left) Edouard at 1800 UTC 16 Sep 2014 (showing an eye–eyewall structure at an intensity of 95 kt) and (right) Nicole at 0100 UTC 9 Oct 2016 (showing an asymmetric central dense overcast at an intensity of ~47 kt).

  • Fig. 9. Radial profile T̄(r) as an ORB function for radial structure: Radial profiles of Edouard’s Tb (Fig. 8; 1800 UTC 16 Sep 2014) are shown both centered on the best track (solid line) and centered on the eye (dashed line). If we use the best track center as the origin about which T̄(r) is computed, the temperature of the eye (near r = 0) is deflated; we start including the eyewall at r = 0 because the center of the image is not the center of the eye.

  • Fig. 10. Hourly radial profiles: Example hourly eye-centered radial profiles for Hurricane Nicole (2016). Lighter colors indicate higher intensities. The inset depicts Nicole’s intensity trajectory. The high-intensity phase of the TC’s evolution corresponds to the clearest eye–eyewall structure.

  • Fig. 11. Visualizing SIZE evolution with ORB: (a) The spatial trajectory shows the path of Hurricane Katrina (2005) through the Gulf of Mexico from 1800 UTC 25 Aug through 0000 UTC 30 Aug, with the labels marking the locations of displayed ORB functions. (b) At six points (A–F, labeled) in time, we plot the SIZE function (black) and the sample mean SIZE function for the NAL basin (gray). (c) The trajectory of the ORB coefficients of the SIZE function in phase space during the storm’s path. The clockwise cyclic path in (c) matches the local night/day cycle as well as the storm’s path in (a); see the text for details.

  • Fig. 12. Full ORB evolution of Katrina (2005) with selected stamps: (top left) A map of the trajectory of Katrina with select points (A–H) labeled. (top right) The intensity over time, with the same points labeled. (middle) GOES imagery for the eight labeled points. (bottom) The full suite of ORB coefficients as a function of time, with the original time series (blue) smoothed by an exponential weighted moving average (black) with decay chosen to achieve a roughness [average value of |d²f(t)/dt²|] of 0.2σ h⁻² across the basin. Where no blue curve is visible, the smoothed time series is nearly equal to the original time series.

  • Fig. 13. Classification by logistic lasso: ROC curve comparison across all four predictor sets, by basin and by type of rapid change. See Fig. 6 and Table 1 for AUC metrics with bootstrap confidence intervals and significance tests of differences in AUC between statistical models.
