1. Introduction
Atmospheric fronts are well-established phenomena that drive midlatitude weather and are therefore important both for operational forecasting and for climatological understanding (Bjerknes and Solberg 1922; Shapiro and Keyser 1990; Hewson 1998). Fronts are associated with extreme precipitation, flooding, sea ice variability, and wildfires, all of which carry significant socioeconomic impacts (Mills 2005; Catto et al. 2012; Catto and Pfahl 2013; Hope et al. 2014; Naud et al. 2015). Understanding the distribution and frequency of fronts is also crucial for climate change research, as fronts are a primary source of short-term meteorological variability in the midlatitudes (Catto et al. 2014, 2015a).
For much of the twentieth century, atmospheric fronts were manually identified using a mixture of variables from surface charts, potentially allowing inconsistency in their identification. A National Meteorological Center Workshop in March of 1991 highlighted existing issues with manual analysis of fronts. A large spread in the placement of fronts by workshop participants arose as a result of differing opinions on several issues, such as the inclusion of various mesoscale features and uncertainty regarding the interpretation of satellite imagery (see Fig. 2 in Uccellini et al. 1992). Although this workshop resulted in the creation of several standard guidelines for manual frontal identification, the potential for issues remains (Thomas and Schultz 2019b).
The manual identification of fronts for the full temporal range of a dataset is not feasible given the amount of time that would be required. Therefore, for datasets with a large temporal range, objective frontal identification is often used. Early attempts at objectively identifying atmospheric fronts occurred with the application of the thermal front parameter (TFP; Renard and Clarke 1965). The TFP as commonly used today is defined in Hewson (1998) as “a scalar variable representing the following: ‘the gradient of the magnitude of the gradient of a thermodynamic scalar quantity, resolved into the direction of the gradient of the quantity’.” Numerous variations in the TFP have since been utilized, although many new identification methods have also been developed (Huber-Pock and Kress 1989; Hewson 1998; McCann and Whistler 2001; Simmonds et al. 2012; Catto et al. 2015b; Parfitt et al. 2017; Lagerquist et al. 2020). Currently, objective identification methods have widespread use for climatological research (Hewson 1998; Hope et al. 2014; Catto et al. 2015a,b; Rudeva and Simmonds 2015; Dowdy and Catto 2017; Hénin et al. 2019; Lagerquist et al. 2020; Pepler et al. 2020). In general, however, the application of objective diagnostics differs in at least two ways.
First, objective diagnostics often use different types of atmospheric fields for identification. Diagnostics exist that only use a thermal field (Hewson 1998), only a dynamic field (Simmonds et al. 2012; Bitsa et al. 2019), and also a mixture of both a thermal and dynamic quantity (Parfitt et al. 2017; Bitsa et al. 2021). There is also the additional consideration of which specific variable to choose. For example, if using a thermal field only, one could opt for temperature (Hope et al. 2014), potential temperature (Hewson 1998), equivalent potential temperature (Thomas and Schultz 2019b), or wet-bulb potential temperature (Berry et al. 2011), with the understanding that the first two variables do not account for moisture. Similarly, one could use the change in wind direction (Simmonds et al. 2012) or relative vorticity (Parfitt et al. 2017).
Second, diagnostics might differ based on the criteria applied by the user for a given grid cell to be considered a front. For example, some users require a fixed contiguous length or a certain number of connected frontal grid points for those grid points to be considered frontal (e.g., Berry et al. 2011; Schemm et al. 2015; Smirnov et al. 2015). Other studies do not require any length criterion or number of connected frontal grid points at all (de La Torre et al. 2008; Simmonds et al. 2012; Thomas and Schultz 2019a). At present, however, there is no commonly accepted length criterion in the literature. Another example is the potential smoothing of the input atmospheric fields. Jenkner et al. (2010) performed multiple rounds of smoothing when using data on a 7 km × 7 km grid, whereas Catto et al. (2012) regridded ERA-Interim data to a 2.5° × 2.5° grid for use with a separate satellite precipitation dataset to examine frontal precipitation. Other studies, such as Parfitt et al. (2017) and Thomas and Schultz (2019b), do not perform any smoothing. Once again, there is no agreed-upon approach regarding the smoothing or preprocessing of input data.
A number of studies have noted that significant impacts can result from these differences in objective frontal diagnostics. For example, Parfitt et al. (2017) found an absolute difference in frontal frequency of around 5% in the North Pacific Ocean between two diagnostics applied with the same dataset on the same atmospheric level. Meanwhile, Thomas and Schultz (2019b) found an absolute difference in frontal frequency closer to 10% in the same region when comparing climatologies using three different detection methods and two different thermal quantities. For the same diagnostic with two different quantities specifically, these absolute differences ranged between 2% and 5%. In the North Atlantic Ocean during boreal summer, Schemm et al. (2015) found frontal frequencies between 4% and 4.5% with a variation of the TFP but only between 2.5% and 3% with a solely dynamic diagnostic.
There are also suggestions that the choice of reanalysis dataset affects the identification of atmospheric fronts significantly. For example, using the same diagnostic and the same atmospheric level, an average frontal frequency absolute difference of up to 10% was found in the North Atlantic between Berry et al. (2011) who used the ERA-40 reanalysis and Catto et al. (2012) who used the ERA-Interim reanalysis, with both studies using the same 2.5° × 2.5° grid. Despite the clear variability among frontal diagnostics and their implementation in reanalysis datasets, many studies of atmospheric fronts nevertheless rely on the use of either a single diagnostic, a single dataset, or indeed both (e.g., Berry et al. 2011; Catto et al. 2012; Simmonds et al. 2012; Hope et al. 2014; Catto et al. 2015b; Naud et al. 2015; Crespo et al. 2017; Parfitt and Seo 2018; Kern et al. 2019; Raveh-Rubin and Catto 2019; Lagerquist et al. 2020; Parfitt and Kwon 2020).
To the authors’ knowledge, there has been no comprehensive examination of the sensitivity of objective frontal identification to the choice of reanalysis dataset. In this study, two different objective frontal identification diagnostics are used in combination with eight different reanalysis products to examine how sensitive the identification of atmospheric fronts and their associated precipitation is to the choice of diagnostic and dataset. The methodology, including the choice of diagnostics and details about reanalysis products, is presented in section 2. Case studies, climatologies, geographical comparisons, frontal precipitation, frontal precipitation proportion, and the impact of grid size are discussed in section 3. A summary of the conclusions from this work and a discussion of the results are presented in section 4.
2. Data and methods
a. Reanalysis datasets
Table 1 lists all eight reanalysis products by their abbreviation and includes the native atmospheric model grids, the output grids used in this study, and the center(s) that produced the reanalysis product. The following datasets were used: The National Centers for Environmental Prediction Climate Forecast System Reanalysis (CFSR; Saha et al. 2010); the European Centre for Medium-Range Weather Forecasts (ECMWF) Twentieth Century Reanalysis (ERA-20C; Poli et al. 2016); the ECMWF ERA-40 reanalysis (Uppala et al. 2005); the ECMWF ERA5 reanalysis (Hersbach et al. 2020); the ECMWF ERA-Interim reanalysis (ERA-INT; Dee et al. 2011); the Japanese Meteorological Agency (JMA) Japanese 55-year Reanalysis (JRA-55; Kobayashi et al. 2015); the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2; Gelaro et al. 2017; GMAO 2015); and the National Oceanic and Atmospheric Administration/Cooperative Institute for Research in Environmental Sciences (NOAA/CIRES) Twentieth Century Reanalysis Project, version 2c (NOAA20C; Compo et al. 2011). The choice of these datasets represents as broad a range of reanalyses as possible, incorporating widely used products across several different generations from multiple different modeling centers around the world. In this study, the period of comparison is 1980–2001, which is the shared time period across all eight datasets.
List of reanalysis datasets with the native atmospheric model spatial resolution at the equator, the output grid size upon which fronts were identified, and the center that produced the reanalysis.


Surface fronts in this study are identified at 900 hPa, as recommended in Hewson (1998). As such, temperature, zonal and meridional winds, relative vorticity, and surface pressure were each obtained at 6-hourly intervals at 900 hPa from January 1980 through December 2001, the shared temporal range across all eight reanalyses. It is noted that the following fields for ERA-40 were vertically interpolated to 900 hPa using a weighted mean from the 850- and 925-hPa levels because of the lack of the isobaric-level availability from ECMWF: temperature, zonal wind, meridional wind, and relative vorticity. Surface precipitation from each respective reanalysis was accumulated at 6-hourly time steps following the method described in Hénin et al. (2019). No smoothing was applied to any of the reanalysis fields in order to remain as objective as possible, as different amounts of smoothing would be required depending on the grid size of each dataset, and, as mentioned before, there is no consensus in the literature on the extent to which input fields should be smoothed. Indeed, the aim of this research is to apply objective frontal diagnostics to reanalysis datasets “as is” in order to determine the potential differences that might arise based on the choice of any individual dataset and diagnostic.
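The vertical interpolation of the ERA-40 fields to 900 hPa can be illustrated as follows. The specific weights are an assumption here (the text specifies only a "weighted mean"); this sketch assumes weights linear in pressure, so the 925-hPa level, being closer to 900 hPa, receives the larger weight:

```python
def interp_to_900hpa(field_850, field_925):
    """Weighted mean of the 850- and 925-hPa levels at 900 hPa.

    Assumes weights linear in pressure (an illustrative choice):
      w_850 = (925 - 900) / (925 - 850) = 1/3
      w_925 = (900 - 850) / (925 - 850) = 2/3
    Works on scalars or NumPy arrays alike.
    """
    w_850 = (925.0 - 900.0) / (925.0 - 850.0)
    w_925 = (900.0 - 850.0) / (925.0 - 850.0)
    return w_850 * field_850 + w_925 * field_925
```

For example, temperatures of 300 K at 850 hPa and 285 K at 925 hPa would give 290 K at 900 hPa under these weights.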
b. The choice of frontal diagnostics
While objective diagnostics can be applied at various atmospheric isobaric levels, there is no agreed-upon level for the identification of surface fronts (Hewson 1998; Catto et al. 2012; Parfitt et al. 2017; Thomas and Schultz 2019a). Ultimately, 900 hPa was chosen as in Hewson (1998) to balance the potential of the atmospheric boundary layer’s inclusion at the given isobaric level with the need to remain close to the surface for the classification of surface precipitation as frontal or nonfrontal. For both diagnostics, any grid point with a surface pressure of less than 900 hPa is automatically masked out. Last, a synoptic-scale length criterion is applied similar to that used in Schemm et al. (2015) and Schemm and Sprenger (2015). For a grid point to be considered frontal, it must be connected contiguously to a chain of other atmospheric frontal grid points spanning a length of at least 500 km.
c. Sorting frontal grid points into front types
d. Frontal precipitation
Classifying precipitation as frontal or nonfrontal consists of two main steps. First, precipitation from each reanalysis was grouped into 6-h accumulations temporally centered at 0000, 0600, 1200, and 1800 UTC following the method of Hénin et al. (2019). Then, the methodology of Catto et al. (2012) for allocating precipitation to atmospheric fronts was applied to all eight reanalyses. For each grid cell containing precipitation, a 5° × 5° box surrounding the cell was searched for fronts. If fronts were located within the search box, then the precipitation was deemed frontal. As in Catto et al. (2012), the proportion of each front type within the search box was used to assign precipitation to each front type. For example, if a grid cell contained 10 mm of precipitation, and the search box contained five warm-frontal grid points, four cold-frontal grid points, and one quasi-stationary-frontal grid point, then 5 mm of precipitation would be attributed to the warm-frontal grid points, 4 mm of precipitation would be attributed to the cold-frontal grid points, and 1 mm of precipitation would be attributed to the quasi-stationary-frontal grid point.
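The proportional allocation rule of Catto et al. (2012) described above can be sketched for a single grid cell as follows (the function name and interface are illustrative; counting front types within the 5° × 5° search box is assumed to have been done already):

```python
def allocate_precipitation(precip_mm, warm_pts, cold_pts, quasi_pts):
    """Allocate a grid cell's 6-h precipitation among front types.

    The counts of each front type found in the 5-deg x 5-deg search
    box determine the fraction of precipitation attributed to that
    type. If the search box contains no frontal grid points, the
    precipitation is nonfrontal and all shares are zero.
    """
    total = warm_pts + cold_pts + quasi_pts
    if total == 0:
        return {"warm": 0.0, "cold": 0.0, "quasi_stationary": 0.0}
    return {
        "warm": precip_mm * warm_pts / total,
        "cold": precip_mm * cold_pts / total,
        "quasi_stationary": precip_mm * quasi_pts / total,
    }
```

Applied to the worked example in the text (10 mm with five warm, four cold, and one quasi-stationary frontal grid point), this returns 5, 4, and 1 mm, respectively.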
3. Results
a. Case study
Figure 1 shows frontal grid points identified by the F diagnostic sorted into cold, warm, and quasi-stationary fronts from each reanalysis for 0600 UTC 6 January 2000. For each reanalysis dataset in Fig. 1, most frontal grid points are associated with the frontal system, but some notable differences do exist. In higher-resolution and finer grid size reanalyses, such as ERA5 (Fig. 1a), CFSR (Fig. 1b), MERRA-2 (Fig. 1c), and ERA-INT (Fig. 1d), frontal grid points are also identified near coastlines near 50°N, 55°W. This increase in the number of surface frontal grid points adjacent to coastlines and topography has been previously noted (Parfitt et al. 2017).

A case study of a frontal system in the North Atlantic Ocean with frontal grid points identified using the F diagnostic for 0600 UTC 6 Jan 2000 on the 900-hPa isobaric level for (a) ERA5, (b) CFSR, (c) MERRA-2, (d) ERA-INT, (e) ERA-20C, (f) JRA-55, (g) NOAA20C, and (h) ERA-40. Blue, red, and magenta dots represent frontal grid points, and the dot size is proportional to grid size. The 900-hPa temperature field is shaded, and the 900-hPa wind vectors are plotted. Note that the temperature fields and wind vectors are not plotted over and adjacent to Greenland because these grid points contain surface pressures less than 900 hPa and are therefore nonphysical at the 900-hPa isobaric level.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1

Figure 2 is the same as Fig. 1 but with frontal grid points identified with the T diagnostic. As in Fig. 1, good agreement in the main frontal features exists, but differences between individual datasets are more pronounced. The T diagnostic exhibits a larger difference than the F diagnostic in the number of frontal grid points between finer and coarser gridded reanalyses. For example, for ERA5 (Fig. 2a), many more frontal grid points are identified near coastlines, such as the coastline of southern Greenland in the north-central section of the domain. In addition, frontal grid points are also identified in the southern section of the panel around 40°W. These frontal grid points are also identified with the T diagnostic in CFSR (Fig. 2b) and MERRA-2 (Fig. 2c), but no frontal grid points are detected in this region at all in NOAA20C (Fig. 2g) and ERA-40 (Fig. 2h). Furthermore, none of these grid points are identified in any reanalysis dataset with the F diagnostic.

As in Fig. 1, but for the T diagnostic.
Overall, this case study illustrates the differences that can arise when using different diagnostics and datasets. These differences between individual datasets and diagnostics have many subsequent implications, for example for the classification of frontal precipitation. Figure 3 displays frontal grid points for the F diagnostic along with 6-h precipitation accumulation (both frontal and nonfrontal) for 0600 UTC 6 January 2000. Generally, the most intense precipitation occurs near 55°N and 30°W, which corresponds with warm-frontal grid points for each dataset. In ERA5 (Fig. 3a), CFSR (Fig. 3b), and MERRA-2 (Fig. 3c), regions of precipitation away from the maxima in precipitation are also collocated with frontal grid points (i.e., near 65°N, 30°W). In coarser gridded datasets, precipitation in these areas is less intense and not collocated with frontal grid points.

A case study of a frontal system in the North Atlantic with frontal grid points identified using the F diagnostic for 0600 UTC 6 Jan 2000 for (a) ERA5, (b) CFSR, (c) MERRA-2, (d) ERA-INT, (e) ERA-20C, (f) JRA-55, (g) NOAA20C, and (h) ERA-40. Frontal grid points are differentiated by symbol and opacity, and the symbol size is proportional to grid size. Precipitation (both frontal and nonfrontal) is shaded. Grid points with anomalous precipitation values from the interpolation of the precipitation field from the native grid to the standard grid are masked out with gray-dotted shading.
Figure 4 is similar to Fig. 3 but for fronts identified with the T diagnostic. For finer grid sized reanalyses, there are now a substantial number of frontal grid points (e.g., near 40°N and 40°W) that are not associated with precipitation accumulations greater than 0.75 mm 6 h−1. Frontal grid points adjacent to land near 50°N and 55°W are also not collocated with precipitation accumulations greater than 0.75 mm 6 h−1. In addition, few frontal grid points are identified adjacent to the area of heaviest precipitation in NOAA20C (Fig. 4g) and ERA-40 (Fig. 4h) near 55°N and 30°W, in contrast with the F diagnostic.

As in Fig. 3, but for the T diagnostic.
b. Global frontal frequency climatologies
Figures 5 and 6 display the total mean frontal frequency for the F and T diagnostic, respectively, for all eight reanalyses on their respective grids, as well as the absolute multireanalysis mean and range from January 1980 through December 2001. For a given grid point, the total mean frontal frequency is defined as the number of time steps in which a front is identified divided by the total number of time steps. As both diagnostics are calculated on regular latitude–longitude grids, regions poleward of 65° latitude are not included in figures due to issues related to the convergence of meridians, small values of frontal frequency over the Arctic, and the effect of terrain from Antarctica in the Southern Hemisphere (Berry et al. 2011; Catto et al. 2012; Parfitt et al. 2017).
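The total mean frontal frequency defined above amounts to a temporal average of a boolean front mask at each grid point. A minimal sketch (array layout is assumed, not taken from the source):

```python
import numpy as np

def total_mean_frontal_frequency(front_masks):
    """Total mean frontal frequency per grid point, in percent.

    front_masks: boolean array of shape (n_timesteps, ny, nx), True
    where a front is identified at that time step. The frequency at
    each grid point is the number of time steps with a front divided
    by the total number of time steps.
    """
    return 100.0 * np.asarray(front_masks).mean(axis=0)
```

For instance, a grid point flagged as frontal in 1 of 4 time steps has a total mean frontal frequency of 25%.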

The total mean frontal frequency for the F diagnostic from January 1980 through December 2001 for (a) ERA5, (b) CFSR, (c) MERRA-2, (d) ERA-INT, (e) ERA-20C, (f) JRA-55, (g) NOAA20C, and (h) ERA-40. (i) The mean, and (j) the absolute range across all eight reanalyses in the total mean frontal frequency. Grid points with a mean surface pressure less than 900 hPa or adjacent to grid points with a mean surface pressure less than 900 hPa are masked out by gray-dotted shading. Because the F diagnostic relies on the Coriolis parameter, the tropics region between 12.5°S and 12.5°N is masked out by a light transparent gray box.

As in Fig. 5, but for the T diagnostic.
In Fig. 5, maxima in frontal frequency typically occur in the storm-track regions as expected (Berry et al. 2011; Catto et al. 2012; Parfitt et al. 2017; Thomas and Schultz 2019a), and these are also regions where datasets display the most agreement in frontal frequency. Differences between datasets are generally greatest outside of the storm-track regions. For example, for ERA5 (Fig. 5a) and CFSR (Fig. 5b), frontal frequency values exceeding 10% are observed near 20°N, 120°W. In this same region, NOAA20C (Fig. 5g) and ERA-40 (Fig. 5h) show frontal frequency values less than 2.5%, indicating an absolute difference of at least 7.5% between these datasets. This is reflected in the absolute multireanalysis range (the absolute range in the means of all eight reanalyses), as given in Fig. 5j. High range values are also observed adjacent to coastlines and elevated topography, such as the coastal region west of the Andes Mountains.
Differences between datasets are generally more pronounced for the T diagnostic as shown in Fig. 6. As in Fig. 5, maxima in frontal frequency are typically observed in the storm-track regions for each dataset, but differences between datasets are large. For example, in the North Pacific near 40°N, 180°E, the T diagnostic frontal frequency for ERA-40 in Fig. 6h is mostly lower than 2.5%. For this same region for CFSR in Fig. 6b, it is approximately 15%. Therefore, an absolute range of approximately 12.5%–15% is observed in this region in Fig. 6j. The corresponding absolute range for the F diagnostic is roughly 2.5%–5%. It is noted that frequencies for higher-resolution datasets on finer grids appear to be more consistent between the F and T diagnostic than those calculated in lower-resolution reanalyses on coarser grids. For example, in the North Pacific near 40°N, 180°E, T diagnostic frontal frequency for ERA-40 in Fig. 6h is mostly lower than 2.5%, whereas for the F diagnostic in Fig. 5h, values of frontal frequency are close to 10%. The corresponding values in ERA5 are approximately 15% for both the F and T diagnostic. Climatologies for cold- and warm-frontal types specifically are presented in Figs. S1–S4 in the online supplemental material for both the F and T diagnostic. These figures clearly demonstrate that the notable differences in the total mean frontal frequency identified in Figs. 5 and 6 between datasets and diagnostics are not restricted to one particular front type.
For both diagnostics, the smallest range values are typically found equatorward of the subtropics as expected, as synoptic-scale atmospheric fronts are rarely observed in the tropics. One area of interest, however, is in the subtropical eastern Pacific west of Mexico, where both diagnostics show an absolute range of at least 10%. Other climatologies have similarly identified notable frontal frequency in this area (Berry et al. 2011; Catto et al. 2012; Thomas and Schultz 2019a). This area is further explored in Fig. 7, which shows the area-averaged mean F and T diagnostic frontal frequency for each reanalysis for two different regions as well as globally (Figs. 7a and 7b, respectively). For clarity, globally here refers to all geographical grid points at 900 hPa excluding areas with a mean surface pressure less than 900 hPa, on which atmospheric fronts are objectively defined for each diagnostic. The standard error of the total mean frontal frequency is given by the error bars on the graphs. Reanalyses are plotted on the x axis based on their grid size, with the spacing between reanalyses on the x axis being proportional to the difference in grid size. For example, MERRA-2 has a grid size that is only slightly coarser than CFSR, and thus, the two reanalyses are horizontally very close together.

A comparison of the (a),(b) total mean frontal frequency; (c),(d) mean frontal precipitation; and (e),(f) total mean frontal precipitation proportion for all eight reanalyses for the (left) F and (right) T diagnostics across two select regions and globally. (g) The North Atlantic region (pink) was taken to be from 290° to 320°E and from 35° to 50°N, and the eastern Pacific region (yellow) was taken to be from 200° to 260°E and from 12.5° to 20°N. The standard error for the total mean frontal frequency is also shown by the vertical error bars. Reanalyses are given on the x axis and are scaled by grid size.
Together with Figs. 5 and 6, Fig. 7 clearly demonstrates that in general, higher-resolution reanalyses on finer grids identify more frontal grid points than lower-resolution reanalyses on coarser grids. For the T diagnostic in Fig. 7b, no reanalysis in the North Atlantic with a fine grid size has a smaller value of total mean frontal frequency compared to a reanalysis with a coarser grid size. It is noted that while this is generally true for the F diagnostic (Fig. 7a), there are exceptions, with CFSR showing a higher total mean frontal frequency than ERA5 in the North Atlantic by approximately 1%, and JRA-55 displaying a higher total mean frontal frequency than ERA-20C in the North Atlantic also by approximately 1%. Across both diagnostics however, JRA-55 exhibits a higher total mean frontal frequency than ERA-20C globally. One possibility for this discrepancy might be the native resolution of the atmospheric model. In Table 1, JRA-55 has a native atmospheric model resolution of approximately 55 km near the equator, while ERA-20C has a native atmospheric model resolution closer to 125 km near the equator.
While Fig. 7 shows that the grid size of the reanalysis is important, a linear trend line cannot be accurately drawn through any of the lines in Figs. 7a and 7b, indicative that inherent differences between reanalyses (other than grid size) are also important. While reanalysis datasets are complex, there are two aspects that set reanalyses apart from one another. The first aspect is the observations that are assimilated into the reanalysis, and the different methods for assimilating them. The second aspect is the specific model used by the reanalysis, where each one employs different dynamical cores and physical parameterizations. While attribution of differences in frontal frequency between reanalyses to any one of these inherent reanalysis properties is nontrivial and left as a topic for future research, estimation of the relative importance of reanalysis grid size as opposed to these inherent reanalysis differences is provided later in section 3e.
c. Frontal precipitation
Figures 8 and 9 show the mean frontal precipitation (mm day−1) for each dataset, as well as the absolute multireanalysis mean and range, for the F and T diagnostic, respectively. As before, the region between 12.5°N and 12.5°S is masked for the F diagnostic, as are grid points with a surface pressure less than 900 hPa. For reference, Fig. S5 in the online supplemental material shows the mean precipitation (mm day−1) for each reanalysis as well as the absolute multireanalysis mean and range. Figures S6–S9 in the online supplemental material show the mean frontal precipitation (mm day−1) from warm and cold fronts for both diagnostics.

The mean frontal precipitation from all front types for the F diagnostic from January 1980 through December 2001 for (a) ERA5, (b) CFSR, (c) MERRA-2, (d) ERA-INT, (e) ERA-20C, (f) JRA-55, (g) NOAA20C, and (h) ERA-40. (i) The mean, and (j) the absolute range across all eight reanalyses in the mean frontal precipitation. Grid points with a mean surface pressure less than 900 hPa or within 2.5° of grid points with a mean surface pressure less than 900 hPa are masked out by gray-dotted shading. Because the F diagnostic relies on the Coriolis parameter, the tropics region between 12.5°S and 12.5°N is masked out by a light transparent gray box.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1


As in Fig. 8, but for the T diagnostic for all front types.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1

For each dataset in Fig. 8, frontal precipitation exhibits maxima in the storm-track regions, collocated with the maxima in frontal frequency observed previously. Both ERA5 (Fig. 8a) and CFSR (Fig. 8b) show mean frontal precipitation exceeding 6 mm day−1 in both the North Pacific and North Atlantic, in bands closely following the oceanic western boundary currents. Meanwhile, ERA-20C (Fig. 8e) and ERA-40 (Fig. 8h) display mean frontal precipitation maxima in the storm-track regions of only approximately 3.6 mm day−1, the smallest maxima across all eight reanalyses. In the eastern Pacific off the western Mexican coast, where both ERA5 and CFSR displayed frontal frequencies of about 10%, ERA5 (Fig. 8a), CFSR (Fig. 8b), and MERRA-2 (Fig. 8c) show up to 4.5 mm day−1 of mean frontal precipitation, whereas ERA-20C (Fig. 8e), NOAA20C (Fig. 8g), and ERA-40 (Fig. 8h) each display no more than 1.5 mm day−1. High absolute multireanalysis range values are also observed near Southeast Asia, a region where frontal frequency is strongly influenced by the effects of topography and coastlines. Indeed, the differences in frontal frequency identification between datasets illustrated in Figs. 5j and 6j appear to carry over strongly to the subsequent identification of frontal precipitation.
Frontal precipitation identified with the T diagnostic in Fig. 9 exhibits similar differences between datasets to those identified with the F diagnostic in Fig. 8. Again, both ERA5 (Fig. 9a) and CFSR (Fig. 9b) display mean frontal precipitation values exceeding 6 mm day−1 in the North Pacific and North Atlantic storm-track regions, while NOAA20C (Fig. 9g) and ERA-40 (Fig. 9h) display the smallest values of mean frontal precipitation there. NOAA20C and ERA-40, along with ERA-20C (Fig. 9e), also differ from all the other datasets in that more frontal precipitation is detected in the North Atlantic than in the North Pacific. It is noted that this was not the case for frontal precipitation identified with the F diagnostic. The absolute multireanalysis range shown in Fig. 9j generally exhibits the largest values in both Northern Hemisphere storm-track regions, with the absolute range in the North Pacific storm track exceeding that in the North Atlantic storm track by approximately 1 mm day−1. While each diagnostic generally identifies more frontal precipitation in finely gridded datasets and less frontal precipitation in coarsely gridded datasets, as with the frontal frequency, the T diagnostic exhibits larger differences between datasets. As with frontal frequency, the greatest climatological difference in frontal precipitation between diagnostics occurs with coarsely gridded datasets. This is demonstrated in Figs. 7c and 7d, which show the mean frontal precipitation for each reanalysis and illustrate that the general trends in frontal precipitation between datasets are similar to those for frontal frequency. Figures S6–S9 in the online supplemental material demonstrate that, as with the frontal frequency, the differences between diagnostics and datasets for frontal precipitation are present for both warm- and cold-frontal precipitation.
d. Frontal precipitation proportion
To better understand the differences in frontal precipitation between datasets, the total accumulation of frontal precipitation is normalized by the total accumulation of all precipitation in Figs. 10 and 11 for the F and T diagnostics, respectively. For a given grid point, the total mean frontal precipitation proportion is the total accumulation of frontal precipitation divided by the total accumulation of all precipitation for the period from January 1980 through December 2001.
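In code, this normalization can be sketched as below, with precipitation at frontal grid points treated as frontal precipitation (a simplification of the study's allocation procedure; the names are illustrative):

```python
import numpy as np

def frontal_precip_proportion(precip, front_mask):
    """Total mean frontal precipitation proportion at each grid point.

    precip     : array (time, lat, lon), precipitation per analysis period (mm)
    front_mask : bool array (time, lat, lon), True where precipitation is
                 allocated to a front (the allocation itself is simplified here)
    Returns the proportion (0-1) of total precipitation that is frontal.
    """
    frontal_total = np.where(front_mask, precip, 0.0).sum(axis=0)
    total = precip.sum(axis=0)
    # Guard against division by zero at completely dry grid points
    return np.divide(frontal_total, total,
                     out=np.zeros_like(total), where=total > 0)

# Toy example: uniform rain, with every other timestep frontal
precip = np.ones((100, 4, 4))
front_mask = np.zeros((100, 4, 4), dtype=bool)
front_mask[::2] = True
proportion = frontal_precip_proportion(precip, front_mask)  # 0.5 everywhere
```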

As in Fig. 8, but for the total mean frontal precipitation proportion from all front types. For a given grid box, the total mean frontal precipitation proportion is the total frontal precipitation divided by the total precipitation.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1


As in Fig. 9, but for the total mean frontal precipitation proportion for all front types.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1

For the F diagnostic in Fig. 10, most datasets show that at least 50% of precipitation is frontal in the North Atlantic and North Pacific storm tracks, the exception being NOAA20C (Fig. 10g), which shows between 40% and 45% in the North Pacific. ERA5 (Fig. 10a), CFSR (Fig. 10b), and MERRA-2 (Fig. 10c) display at least 80% of precipitation as frontal poleward of 30° latitude, and in certain areas of the subtropics more than 50% of precipitation is deemed frontal. The absolute multireanalysis range across all eight datasets in Fig. 10j exhibits its lowest values globally in the storm-track regions, with the North Atlantic and North Pacific having an absolute range of 20%–25%, although this still represents a large inconsistency between datasets. Despite NOAA20C (Fig. 8g) showing slightly greater values of mean frontal precipitation than ERA-40 (Fig. 8h), NOAA20C in Fig. 10g exhibits smaller values of frontal precipitation proportion almost globally than ERA-40 in Fig. 10h. This is most notable in the Southern Hemisphere near 40°S, 120°W, where the two datasets have an absolute difference of 30%–35%.
For both diagnostics, the largest values in the absolute multireanalysis range across all eight datasets are generally observed in the subtropics. Large differences in the mean frontal precipitation proportion between datasets are also identified for the T diagnostic in Fig. 11. ERA5 (Fig. 11a), CFSR (Fig. 11b), and MERRA-2 (Fig. 11c) display mean frontal precipitation proportions greater than 80% poleward of 30° for much of the globe, and more than 90% across much of the North Atlantic and North Pacific storm tracks. For ERA-40 in Fig. 11h, less than 30% of precipitation is frontal in the North Pacific storm track, while approximately 65% of precipitation is frontal in the North Atlantic. In general, the lowest values in the absolute range between datasets outside of the tropics are found in the storm-track regions, as is evident from Figs. 10j and 11j. Exceptionally high absolute range values (close to 100%) are observed over Australia, in the eastern Pacific, and across much of North Africa.
As with frontal precipitation and frontal frequency, the absolute multireanalysis range is generally much higher (except in certain regions of the subtropics) for the T diagnostic (Fig. 11j) than for the F diagnostic (Fig. 10j). As before, when comparing diagnostics, the frontal precipitation proportion differs most between coarsely gridded datasets. It is noted that results for the total mean frontal precipitation proportion are consistent with the frontal precipitation proportion assigned to individual front types. Figures 7e and 7f show the mean frontal precipitation proportion for each reanalysis, illustrating that frontal precipitation proportion follows the same general trends with regard to reanalysis grid size as frontal frequency and frontal precipitation. For example, it is clear in Figs. 7e and 7f that higher-resolution reanalyses on finer grids generally have larger frontal precipitation proportions than lower-resolution reanalyses on coarser grids (e.g., comparing ERA5 with NOAA20C). Figures S10–S13 in the online supplemental material demonstrate that, as with the frontal frequency and the frontal precipitation, the differences between diagnostics and datasets for frontal precipitation proportion are present for both warm- and cold-frontal precipitation proportion.
e. Grid size comparison
As previously discussed, higher-resolution reanalyses on finer grids generally exhibit higher frontal frequencies and more frontal precipitation. One plausible explanation that can be explored is the grid size of the dataset itself. For example, does ERA5 generally contain more fronts than ERA-20C simply because the data are on a finer grid? The first step in investigating this hypothesis is to interpolate the ERA5 fields from their original 0.25° × 0.25° grid to both a 0.75° × 0.75° grid (the same standard grid as the ERA-INT data used in this research) and a 2.5° × 2.5° grid (the coarsest grid among all eight reanalyses). Both frontal diagnostics are then recomputed on the new grids, and the total mean frontal frequency, the mean frontal precipitation, and the total mean frontal precipitation proportion are recalculated on each. These results are displayed in Fig. 12. Figures 12a–f indicate that frontal frequency, frontal precipitation, and frontal precipitation proportion all decrease with coarsening grid size, a finding consistent with earlier results indicating a dependence of frontal frequency and frontal precipitation on the grid size of a reanalysis. In addition, the dependence on grid size appears to be much stronger for the T diagnostic than for the F diagnostic, both globally and regionally, which is also consistent with earlier results.
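The coarsening step can be illustrated with simple block averaging, one way to take a 0.25° field to 2.5° before recomputing a diagnostic (the study's actual interpolation scheme may differ; this is a sketch on a synthetic temperature field):

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2D (lat, lon) field by an integer factor,
    e.g. factor=10 takes a 0.25-degree grid to a 2.5-degree grid."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

# Synthetic 0.25-degree temperature field (720 x 1440), warmest at the equator
lats = np.linspace(-89.875, 89.875, 720)
T = 300.0 - 30.0 * np.abs(np.sin(np.deg2rad(lats)))[:, None] * np.ones((720, 1440))
T_coarse = coarsen(T, 10)   # -> 72 x 144, i.e., ~2.5 degrees
```

Block averaging preserves the field's mean but smooths gradients, which is why gradient-based frontal diagnostics recomputed on the coarser grid identify fewer frontal grid points.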

A comparison of the (a),(b) total mean frontal frequency; (c),(d) mean frontal precipitation; and (e),(f) total mean frontal precipitation proportion for the ERA5 reanalysis interpolated to different resolutions for the (left) F and (right) T diagnostics across two select regions and globally. The North Atlantic and eastern Pacific regions are defined as in Fig. 7g. The standard error for the total mean frontal frequency is also shown by the vertical error bars. The three different grid sizes are indicated on the x axis, which is proportionally scaled as in Fig. 7.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1

Globally, for the T diagnostic in Fig. 12b, the mean frontal frequency decreases from approximately 22% at 0.25° × 0.25° to just 6% at 2.5° × 2.5°, whereas for the F diagnostic in Fig. 12a it only decreases from around 20% to 15%. Similar patterns are observed for both the frontal precipitation (Figs. 12c and 12d) and the frontal precipitation proportion (Figs. 12e and 12f). Furthermore, while the majority of precipitation is considered frontal at all three grid sizes for the F diagnostic, the majority of precipitation is nonfrontal for the T diagnostic on the 2.5° × 2.5° grid. This implies that, even when using the same frontal diagnostic and dataset, the choice of grid size can yield significantly different results. Indeed, modeling centers often provide reanalysis data on a variety of grid sizes in addition to their standard grid.
To better understand the effect of grid size on frontal identification and frontal precipitation, all reanalyses are now regridded to the same 2.5° × 2.5° grid (the coarsest grid across all eight reanalyses). For each reanalysis, both frontal diagnostics are recomputed, and both the total mean frontal frequency and the total mean frontal precipitation proportion are recalculated for each diagnostic. Following Fig. 7, these results are displayed in Fig. 13. It is immediately clear that regridding the reanalyses to the same coarsened grid removes a large portion of the absolute multireanalysis range in both the frontal frequency and the frontal precipitation proportion, as evidenced by the flatter lines in Figs. 13a–f compared with Figs. 7a–f. To quantify this reduction, the percentage difference between the absolute multireanalysis ranges for reanalyses on their standard grids and on the coarsened grid is calculated for both the total mean frontal frequency and the total mean frontal precipitation proportion, summarized in Tables 2 and 3, respectively.
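The percentage reduction reported in Tables 2 and 3 follows directly from the two absolute ranges. A minimal sketch (the numbers below are illustrative, not the study's values):

```python
def range_reduction(values_native, values_regridded):
    """Percentage reduction in the absolute multireanalysis range
    (max minus min across reanalyses) after regridding."""
    range_native = max(values_native) - min(values_native)
    range_coarse = max(values_regridded) - min(values_regridded)
    return 100.0 * (range_native - range_coarse) / range_native

# Illustrative total mean frontal frequencies (%) for eight reanalyses,
# on their native grids and after regridding to 2.5 x 2.5 degrees
native = [20.1, 18.5, 17.0, 15.2, 13.0, 12.5, 11.3, 10.0]
coarse = [15.0, 14.2, 13.5, 12.0, 10.5, 9.8, 9.5, 8.9]
reduction = range_reduction(native, coarse)   # ~40% in this example
```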

As in Fig. 7, but now each reanalysis is interpolated to a 2.5° × 2.5° grid and the frontal diagnostics and frontal precipitation algorithms were rerun at this coarser grid size.
Citation: Journal of Climate 35, 14; 10.1175/JCLI-D-21-0596.1

The percentage reduction in the absolute multireanalysis range of total mean frontal frequency resulting from regridding all datasets from their original grids to the same coarser 2.5° × 2.5° grid.


The percentage reduction in the absolute multireanalysis range of total mean frontal precipitation proportion resulting from regridding all datasets from their original grids to the same coarser 2.5° × 2.5° grid.


Globally, for the F diagnostic, the absolute multireanalysis range in the total mean frontal frequency decreases from just over 10% to just over 6%, a percentage reduction of 37%. Likewise, the absolute multireanalysis range in the total mean frontal precipitation proportion decreases from just over 59% to just under 32%, a percentage reduction of 47%. Greater percentage reductions occur for the T diagnostic: upon regridding the datasets to the same grid, the absolute multireanalysis range in the total mean frontal frequency decreases from just under 12% to just under 3% (a reduction of approximately 77%), and the absolute multireanalysis range in the total mean frontal precipitation proportion decreases from just over 69% to just under 13% (a reduction of approximately 81%).
The effect of grid size varies regionally. For example, in the North Atlantic for the F diagnostic, the absolute multireanalysis range in the total mean frontal frequency decreases from just over 8% for reanalyses on their standard grids to less than 4% on the coarsened grid (a reduction of approximately 53%), and the absolute multireanalysis range in the total mean frontal precipitation proportion decreases from just under 27% to just over 14% (a reduction of approximately 48%). Similarly, for the T diagnostic, the absolute multireanalysis range in the total mean frontal frequency decreases from just over 18% to just over 2% (a reduction of approximately 87%), and the absolute multireanalysis range in the total mean frontal precipitation proportion decreases from just over 58% to just over 11% (a reduction of approximately 81%). These results suggest that, alongside the choices of frontal diagnostic and reanalysis dataset, the choice of grid size must also be taken into consideration for frontal identification and/or frontal precipitation allocation, because all three choices can yield differing results.
4. Summary and discussion
a. Summary
An examination of a case study revealed the significant discrepancies in frontal identification that can arise as a result of the specific choice of objective diagnostic and reanalysis dataset. In general, higher-resolution reanalyses on finer grids identified many more frontal grid points, particularly near coastlines and over land. This was extensively highlighted in comparisons of the frontal frequency climatologies calculated in multiple reanalysis datasets. Further differences were also found when comparing the absolute multireanalysis ranges in frontal frequency as calculated by the F and T diagnostics. Globally, an absolute difference of almost 10% was found between ERA5 and NOAA20C for the F diagnostic. For the T diagnostic, ERA-40 exhibited a lower frontal frequency than NOAA20C, resulting in an absolute difference of over 12% between ERA5 and ERA-40. Regionally, in the North Atlantic, an absolute difference of almost 8% was found between ERA-40 and CFSR for the F diagnostic, while for the T diagnostic this difference was almost 18%. Similarly, NOAA20C displayed a mean frontal frequency of approximately 14% for the F diagnostic in the North Atlantic, but only 8% for the T diagnostic.
These inconsistencies in frontal identification were shown to subsequently affect the allocation of precipitation to fronts. For example, for NOAA20C, approximately 30% of precipitation was allocated to fronts globally for the F diagnostic, while only 14% was allocated to fronts for the T diagnostic. Discrepancies also existed for higher-resolution reanalyses on finer grids, with CFSR having almost 84% of its precipitation allocated to fronts for the F diagnostic, but less than 73% for the T diagnostic. In the North Atlantic, ERA-40 had nearly 75% of its precipitation allocated to fronts for the F diagnostic, but only approximately 40% for the T diagnostic.
The question of the impact of grid size on frontal identification and frontal precipitation was explored by regridding each reanalysis to the coarsest grid among all eight reanalyses. Globally, for the F diagnostic, a percentage reduction of 37% and 47% in the absolute multireanalysis range of frontal frequency and frontal precipitation proportion, respectively, occurs when regridding all datasets to the same, coarser grid. For the T diagnostic globally, a percentage reduction of 77% and 81% in the absolute multireanalysis range of the frontal frequency and frontal precipitation proportion, respectively, occurs. These percentage numbers vary geographically. For example, in the North Atlantic, a percentage reduction of 53% and 48% in the absolute multireanalysis range of the frontal frequency and frontal precipitation proportion, respectively, occurs for the F diagnostic upon regridding, whereas a percentage reduction of 87% and 81% in the absolute multireanalysis range of the frontal frequency and frontal precipitation proportion, respectively, occurs for the T diagnostic.
The largest differences between both diagnostics and datasets were generally found over land, where surface fronts at 900 hPa were more likely to be impacted by the boundary layer, and in the tropics. The inconsistencies in frontal identification based on either identification method and/or dataset strongly affect the frontal precipitation proportion, with the absolute range for both diagnostics across all eight datasets exceeding 50% across much of the world. Therefore, research regarding frontal precipitation, and precipitation associated with different types of atmospheric phenomena, will be strongly impacted by the choice of diagnostic and/or reanalysis dataset. Last, the aforementioned differences in frontal frequency, frontal precipitation, and frontal precipitation proportion are consistently observed across individual front types, as can be seen in Figs. S1–S4, S6–S9, and S10–S13, respectively, in the online supplemental material.
b. Discussion
It is important to note that the goal of this research was to apply the diagnostics as given in the literature to a wide array of reanalysis products. This is because there is no commonly accepted methodology regarding the preprocessing of data, and, as previously discussed, many studies therefore use one specific diagnostic and one specific dataset when studying atmospheric fronts. The results in this paper illustrate the significant differences that can arise from these individual choices of diagnostic and dataset. This will necessarily impact many related areas of research, such as studies on the spatiotemporal variability of precipitation extremes, for which reanalysis datasets are regularly employed. Indeed, it was shown in section 3 that the amount of precipitation assigned to atmospheric fronts is highly dependent on the diagnostic and dataset chosen. This is particularly relevant in the midlatitudes, where atmospheric fronts account for most of the precipitation (Catto et al. 2012; Papritz et al. 2014). As such, researchers seeking to study atmospheric fronts objectively in any dataset (reanalysis or otherwise) should always ensure that their findings are not overly sensitive to their choice of diagnostic and dataset.
This potentially presents a challenge for researchers wanting to study atmospheric fronts objectively, and it is difficult to pinpoint a solution. Even if one were to define a standard set of guidelines, studies such as Jenkner et al. (2010) demonstrate how any preprocessing of the input fields to a diagnostic necessarily results in a loss of information. For example, smoothing a temperature field to a standard grid size would necessarily weaken the temperature gradients, and thus the identified frontal boundaries. Of course, interpolation to a higher-resolution grid will not recover higher-resolution dynamics. Further issues arise from the subjectivity of additional criteria, such as a minimum frontal length requirement, which also has no agreed-upon definition in the literature.
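The information loss from smoothing can be demonstrated with a synthetic one-dimensional frontal zone: block-averaging the temperature profile to a coarser spacing visibly weakens the maximum horizontal gradient (an illustrative sketch; the profile shape and widths are assumptions, not values from the paper):

```python
import numpy as np

# Synthetic 1D frontal zone: a 10-K transition roughly 10 km wide
x = np.linspace(0.0, 1000.0, 401)                   # distance (km), 2.5-km spacing
T = 280.0 + 10.0 / (1.0 + np.exp(-(x - 500.0) / 5.0))

def max_gradient(temp, dx):
    """Maximum absolute horizontal temperature gradient (K per km)."""
    return np.abs(np.gradient(temp, dx)).max()

grad_fine = max_gradient(T, 2.5)

# "Regrid" to 25-km spacing by block-averaging groups of 10 points
T_coarse = T[:400].reshape(40, 10).mean(axis=1)
grad_coarse = max_gradient(T_coarse, 25.0)
# grad_coarse falls well below half of grad_fine: the frontal gradient is weakened
```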
With a grid size nearing 30 km for ERA5 (and finer for operational models), it is becoming easier to resolve frontal zones with more than one grid point in reanalysis datasets. This allows the dynamics and thermodynamics within the frontal zone to be studied on the cross-frontal scale, which can be mesoscale or finer. It is noted, however, that 30-km resolution is still not sufficient to properly resolve all aspects of fronts (e.g., submesoscale variations in the wind field). For ERA5 in Figs. 1a and 2a, both diagnostics identified the entire width of frontal zones with on the order of 10 grid points; in comparison, ERA-40 on a 2.5° × 2.5° grid can represent the cross-frontal scale with only a single grid point. The ability to resolve such frontal dynamics will only improve as reanalysis datasets improve, as will the importance of consistency in objective studies of atmospheric fronts.
Although this research considered only two diagnostics, as mentioned in the introduction, many more exist in the literature, and there are also many more questions to be answered that are beyond the scope of this study. For example, to what extent do fronts at different atmospheric levels exhibit similar dependencies on diagnostic and dataset? Further investigation into the nature of the differences between datasets and diagnostics for each specific front type would also be useful. With the popularity of intercomparison projects in the literature, perhaps this is an opportunity for an intercomparison project of many frontal diagnostics across many datasets, similar to the IMILAST project from Neu et al. (2013) for extratropical cyclones. Regardless, it is clear that more research is required to properly investigate how a definition of an atmospheric front can potentially impact studies using different datasets. However, it is the clear recommendation of this study that any researcher applying objective atmospheric frontal diagnostics to a dataset repeat their analyses with multiple diagnostics and datasets to ensure that their results are robust to those choices.
Although reanalysis products with high native resolution are generally provided on finer grid sizes, this is not always the case. For example, for JRA-55, the native model resolution is approximately 0.5° but the standard data are output on a 1.25° × 1.25° grid. Therefore, we use “higher resolution” to refer to the native model’s spatial resolution and “grid size” to refer to the standard output grid on which the reanalysis data are provided.
Acknowledgments.
The authors first thank the following centers for providing access to their datasets: ECMWF for ERA-20C, ERA-40, ERA-INT, and ERA5; NCEP for CFSR; JMA for JRA-55; NASA for MERRA-2; and NOAA for NOAA20C. Support for the Twentieth Century Reanalysis Project, version 2c, dataset is provided by the U.S. Department of Energy Office of Science Biological and Environmental Research (BER) and by the National Oceanic and Atmospheric Administration Climate Program Office. Author Parfitt gratefully acknowledges support from NSF-OCE Award 2023585. The authors thank Evan Jones for his assistance with obtaining the CFSR, JRA-55, and NOAA20C datasets. The authors also thank Bob Hart for useful discussions and acknowledge the helpful comments of Dr. Isla Simpson and three anonymous reviewers.
Data availability statement.
All CFSR data as cited in Saha et al. (2010) were freely obtained from the Research Data Archive at the National Center for Atmospheric Research (https://doi.org/10.5065/D69K487J and https://doi.org/10.5065/D6513W89). All JRA-55 data as cited in Kobayashi et al. (2015) were freely obtained from the Research Data Archive at the National Center for Atmospheric Research (https://doi.org/10.5065/D6HH6H41). All NOAA20C, version 2c, data as cited in Compo et al. (2011) were freely obtained from the Research Data Archive at the National Center for Atmospheric Research (https://doi.org/10.5065/D6N877TW). All ERA-20C data as cited in Poli et al. (2016) used in this research were freely obtained from the European Centre for Medium-Range Weather Forecasts (https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era-20c). All ERA-40 data as cited in Uppala et al. (2005) used in this research were freely obtained from the European Centre for Medium-Range Weather Forecasts (https://apps.ecmwf.int/datasets/data/era40-daily/levtype=pl/ and https://apps.ecmwf.int/datasets/data/era40-daily/levtype=sfc/). All ERA5 data as cited in Hersbach et al. (2020) used in this research were freely obtained from the Climate Data Store (https://doi.org/10.24381/cds.adbb2d47 and https://doi.org/10.24381/cds.bd0915c6). All ERA-Interim data as cited in Dee et al. (2011) used in this research were freely obtained from the European Centre for Medium-Range Weather Forecasts (https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era-interim). All MERRA-2 data as cited in Gelaro et al. (2017) used in this research were freely obtained from the NASA MDISC (https://doi.org/10.5067/A7S6XP56VZWS, https://doi.org/10.5067/7MCPBJ41Y0K6, and https://doi.org/10.5067/QBZ6MG944HW0).
REFERENCES
Berry, G., M. J. Reeder, and C. Jakob, 2011: A global climatology of atmospheric fronts. Geophys. Res. Lett., 38, L04809, https://doi.org/10.1029/2010GL046451.
Bitsa, E., H. Flocas, J. Kouroutzoglou, M. Hatzaki, I. Rudeva, and I. Simmonds, 2019: Development of a front identification scheme for compiling a cold front climatology of the Mediterranean. Climate, 7, 130, https://doi.org/10.3390/cli7110130.
Bitsa, E., H. Flocas, J. Kouroutzoglou, G. Galanis, M. Hatzaki, G. Latsas, I. Rudeva, and I. Simmonds, 2021: A Mediterranean cold front identification scheme combining wind and thermal criteria. Int. J. Climatol., 41, 6497–6510, https://doi.org/10.1002/joc.7208.
Bjerknes, J., and H. Solberg, 1922: Life cycle of cyclones and the polar front theory of atmospheric circulation. Geofys. Publ., 3, 1–18.
Catto, J. L., and S. Pfahl, 2013: The importance of fronts for extreme precipitation. J. Geophys. Res. Atmos., 118, 10 791–10 801, https://doi.org/10.1002/jgrd.50852.
Catto, J. L., C. Jakob, G. Berry, and N. Nicholls, 2012: Relating global precipitation to atmospheric fronts. Geophys. Res. Lett., 39, L10805, https://doi.org/10.1029/2012GL051736.
Catto, J. L., N. Nicholls, C. Jakob, and K. L. Shelton, 2014: Atmospheric fronts in present and future climates. Geophys. Res. Lett., 41, 7642–7650, https://doi.org/10.1002/2014GL061943.
Catto, J. L., C. Jakob, and N. Nicholls, 2015a: Can the CMIP5 models represent winter frontal precipitation? Geophys. Res. Lett., 42, 8596–8604, https://doi.org/10.1002/2015GL066015.
Catto, J. L., E. Madonna, H. Joos, I. Rudeva, and I. Simmonds, 2015b: Global relationship between fronts and warm conveyor belts and the impact on extreme precipitation. J. Climate, 28, 8411–8429, https://doi.org/10.1175/JCLI-D-15-0171.1.
Compo, G. P., and Coauthors, 2011: The Twentieth Century Reanalysis project. Quart. J. Roy. Meteor. Soc., 137, 1–28, https://doi.org/10.1002/qj.776.
Crespo, J. A., D. J. Posselt, C. M. Naud, and C. Bussy-Virat, 2017: Assessing CYGNSS’s potential to observe extratropical fronts and cyclones. J. Appl. Meteor. Climatol., 56, 2027–2034, https://doi.org/10.1175/JAMC-D-17-0050.1.
Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828.
de La Torre, L., R. Nieto, M. Noguerol, J. A. Añel, and L. Gimeno, 2008: A climatology based on reanalysis of baroclinic development regions in the extratropical Northern Hemisphere. Ann. N. Y. Acad. Sci., 1146, 235–255, https://doi.org/10.1196/annals.1446.017.
Dowdy, A., and J. Catto, 2017: Extreme weather caused by concurrent cyclone, front and thunderstorm occurrences. Sci. Rep., 7, 40359, https://doi.org/10.1038/srep40359.
Gelaro, R., and Coauthors, 2017: The Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2). J. Climate, 30, 5419–5454, https://doi.org/10.1175/JCLI-D-16-0758.1.
GMAO, 2015: MERRA-2 inst6_3d_ana_Np: 3d, 6-hourly, instantaneous, pressure-level, analysis, analyzed meteorological fields V5.12.4. Goddard Earth Sciences Data and Information Services Center, accessed 29 April 2019, https://doi.org/10.5067/A7S6XP56VZWS.
Hénin, R., A. M. Ramos, S. Schemm, C. M. Gouveia, and M. L. R. Liberato, 2019: Assigning precipitation to mid-latitudes fronts on sub-daily scales in the North Atlantic and European sector: Climatology and trends. Int. J. Climatol., 39, 317–330, https://doi.org/10.1002/joc.5808.
Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803.
Hewson, T. D., 1997: Objective identification of frontal wave cyclones. Meteor. Appl., 4, 311–315, https://doi.org/10.1017/S135048279700073X.
Hewson, T. D., 1998: Objective fronts. Meteor. Appl., 5, 37–65, https://doi.org/10.1017/S1350482798000553.
Hope, P., and Coauthors, 2014: A comparison of automated methods of front recognition for climate studies: A case study in southwest Western Australia. Mon. Wea. Rev., 142, 343–363, https://doi.org/10.1175/MWR-D-12-00252.1.
Huber-Pock, F., and C. Kress, 1989: An operational model of objective frontal analysis based on ECMWF products. Meteor. Atmos. Phys., 40, 170–180, https://doi.org/10.1007/BF01032457.
Jenkner, J., M. Sprenger, I. Schwenk, C. Schwierz, S. Dierer, and D. Leuenberger, 2010: Detection and climatology of fronts in a high-resolution model reanalysis over the Alps. Meteor. Appl., 17, 1–18, https://doi.org/10.1002/met.142.
Kern, M., T. Hewson, F. Sadlo, R. Westermann, and M. Rautenhaus, 2019: Interactive 3D visual analysis of atmospheric fronts. IEEE Trans. Vis. Comput. Graph., 25, 1080–1090, https://doi.org/10.1109/TVCG.2018.2864806.
Kobayashi, S., and Coauthors, 2015: The JRA-55 reanalysis: General specifications and characteristics. J. Meteor. Soc. Japan, 93, 5–48, https://doi.org/10.2151/jmsj.2015-001.
Lagerquist, R., J. T. Allen, and A. McGovern, 2020: Climatology and variability of warm and cold fronts over North America from 1979 to 2018. J. Climate, 33, 6531–6554, https://doi.org/10.1175/JCLI-D-19-0680.1.
McCann, D. W., and J. P. Whistler, 2001: Problems and solutions for drawing fronts objectively. Meteor. Appl., 8, 195–203, https://doi.org/10.1017/S1350482701002079.
Mills, G. A., 2005: A re-examination of the synoptic and mesoscale meteorology of Ash Wednesday 1983. Aust. Meteor. Mag., 54, 35–55.
Naud, C. M., D. J. Posselt, and S. C. van den Heever, 2015: A CloudSat–CALIPSO view of cloud and precipitation properties across cold fronts over the global oceans. J. Climate, 28, 6743–6762, https://doi.org/10.1175/JCLI-D-15-0052.1.
Neu, U., and Coauthors, 2013: IMILAST: A community effort to intercompare extratropical cyclone detection and tracking algorithms. Bull. Amer. Meteor. Soc., 94, 529–547, https://doi.org/10.1175/BAMS-D-11-00154.1.
Papritz, L., S. Pfahl, I. Rudeva, I. Simmonds, H. Sodemann, and H. Wernli, 2014: The role of extratropical cyclones and fronts for Southern Ocean freshwater fluxes. J. Climate, 27, 6205–6224, https://doi.org/10.1175/JCLI-D-13-00409.1.
Parfitt, R., and H. Seo, 2018: A new framework for near-surface wind convergence over the Kuroshio Extension and Gulf Stream in wintertime: The role of atmospheric fronts. Geophys. Res. Lett., 45, 9909–9918, https://doi.org/10.1029/2018GL080135.
Parfitt, R., and Y. Kwon, 2020: The modulation of Gulf Stream influence on the troposphere by the eddy-driven jet. J. Climate, 33, 4109–4120, https://doi.org/10.1175/JCLI-D-19-0294.1.
Parfitt, R., A. Czaja, and H. Seo, 2017: A simple diagnostic for the detection of atmospheric fronts. Geophys. Res. Lett., 44, 4351–4358, https://doi.org/10.1002/2017GL073662.
Pepler, A. S., and Coauthors, 2020: The contributions of fronts, lows and thunderstorms to southern Australian rainfall. Climate Dyn., 55, 1489–1505, https://doi.org/10.1007/s00382-020-05338-8.
Poli, P., and Coauthors, 2016: ERA-20C: An atmospheric reanalysis of the twentieth century. J. Climate, 29, 4083–4097, https://doi.org/10.1175/JCLI-D-15-0556.1.
Raveh-Rubin, S., and J. L. Catto, 2019: Climatology and dynamics of the link between dry intrusions and cold fronts during winter, Part II: Front-centred perspective. Climate Dyn., 53, 1893–1909, https://doi.org/10.1007/s00382-019-04793-2.
Renard, R. J., and L. C. Clarke, 1965: Experiments in numerical objective frontal analysis. Mon. Wea. Rev., 93, 547–556, https://doi.org/10.1175/1520-0493(1965)093<0547:EINOFA>2.3.CO;2.
Rudeva, I., and I. Simmonds, 2015: Variability and trends of global atmospheric frontal activity and links with large-scale modes of variability. J. Climate, 28, 3311–3330, https://doi.org/10.1175/JCLI-D-14-00458.1.
Saha, S., and Coauthors, 2010: The NCEP Climate Forecast System Reanalysis. Bull. Amer. Meteor. Soc., 91, 1015–1058, https://doi.org/10.1175/2010BAMS3001.1.
Schemm, S., and M. Sprenger, 2015: Frontal-wave cyclogenesis in the North Atlantic—A climatological characterization. Quart. J. Roy. Meteor. Soc., 141, 2989–3005, https://doi.org/10.1002/qj.2584.
Schemm, S., I. Rudeva, and I. Simmonds, 2015: Extratropical fronts in the lower troposphere—Global perspectives obtained from two automated methods. Quart. J. Roy. Meteor. Soc., 141, 1686–1698, https://doi.org/10.1002/qj.2471.
Shapiro, M. A., and D. Keyser, 1990: Fronts, jet streams and the tropopause. Extratropical Cyclones, The Erik Palmén Memorial Volume, C. W. Newton and E. O. Holopainen, Eds., Amer. Meteor. Soc., 167–191.
Simmonds, I., and M. Li, 2021: Trends and variability in polar sea ice, global atmospheric circulations, and baroclinicity. Ann. N. Y. Acad. Sci., 1504, 167–186, https://doi.org/10.1111/nyas.14673.
Simmonds, I., K. Keay, and J. A. T. Bye, 2012: Identification and climatology of Southern Hemisphere mobile fronts in a modern reanalysis. J. Climate, 25, 1945–1962, https://doi.org/10.1175/JCLI-D-11-00100.1.
Smirnov, D., M. Newman, M. A. Alexander, Y. Kwon, and C. Frankignoul, 2015: Investigating the local atmospheric response to a realistic shift in the Oyashio sea surface temperature front. J. Climate, 28, 1126–1147, https://doi.org/10.1175/JCLI-D-14-00285.1.
Solman, S. A., and I. Orlanski, 2014: Poleward shift and change of frontal activity in the Southern Hemisphere over the last 40 years. J. Atmos. Sci., 71, 536–552, https://doi.org/10.1175/JAS-D-13-0105.1.
Thomas, C. M., and D. M. Schultz, 2019a: Global climatologies of fronts, airmass boundaries, and airstream boundaries: Why the definition of “front” matters. Mon. Wea. Rev., 147, 691–717, https://doi.org/10.1175/MWR-D-18-0289.1.
Thomas, C. M., and D. M. Schultz, 2019b: What are the best thermodynamic quantity and function to define a front in gridded model output? Bull. Amer. Meteor. Soc., 100, 873–895, https://doi.org/10.1175/BAMS-D-18-0137.1.
Uccellini, L. W., S. F. Corfidi, N. F. Junker, P. L. Kocin, and D. A. Olson, 1992: Report on the surface analysis workshop held at the National Meteorological Center 25–28 March 1991. Bull. Amer. Meteor. Soc., 73, 459–472, https://doi.org/10.1175/1520-0477-73.4.459.
Uppala, S. M., and Coauthors, 2005: The ERA-40 Re-Analysis. Quart. J. Roy. Meteor. Soc., 131, 2961–3012, https://doi.org/10.1256/qj.04.176.