Simulating the Chesapeake Bay Breeze: Sensitivities to Water Surface Temperature

Patrick Hawbecker and Jason C. Knievel

National Center for Atmospheric Research, Boulder, Colorado

Abstract

Simulations of Chesapeake Bay breezes are performed with varying water surface temperature (WST) datasets and formulations for the diurnal cycle of WST to determine whether more accurate depictions of water surface temperature improve prediction of bay breezes. The accuracy of the simulations is measured against observed WST and inland wind speed and temperature, and by the simulations’ ability to reproduce bay breezes identified with a detection algorithm developed for numerical model output. Missing WST data are found to be problematic within the Weather Research and Forecasting (WRF) Model framework, especially when activating the prognostic equation for skin temperature, sst_skin. This is alleviated by filling all missing WST values with skin temperature values within the initial and boundary conditions. Performance of bay-breeze prediction is shown to be somewhat associated with the resolution of the WST dataset. Further, model performance in simulating WST, as well as in simulating the Chesapeake Bay breeze, is improved when diurnal fluctuations of WST are considered via the sst_skin option. Prior to running simulations, model performance in simulating the bay breeze can be accurately predicted through the use of a simple formulation.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Patrick Hawbecker, hawbecke@ucar.edu


1. Introduction

Water body (WB) breezes—including the widely known sea breeze and lesser known lake and bay breezes—are the onshore components of local, thermally direct circulations driven by differential heating between a warmer land surface and cooler body of water (Segal and Pielke 1985; Miller et al. 2003; Sikora et al. 2010; Crosman and Horel 2010). Although these circulations are complex, Biggs and Graves (1962) found that prediction of WB breezes (specifically, lake breezes) might boil down to two measurable parameters: the near-surface wind speed and the temperature difference between the land and water body, ΔT, which is directly responsible for the low-level water-to-land gradient in air temperature fundamental to WB breezes. Biggs and Graves (1962) postulated that ΔT must be strong enough to overcome the background pressure gradient for a lake breeze to penetrate inland. The circulation can be altered or interrupted by several factors, including the shape, size, and dimensions of the water and land bodies, the static stability over land and sea, and surface roughness (Miller et al. 2003; Crosman and Horel 2010). It is due to these factors that differences arise between sea breezes and lake or bay breezes: lakes and bays are generally shallower and smaller than oceans; thus, the temperature difference and associated pressure gradient are typically smaller, making it more difficult for the developed pressure gradient to overcome the background winds. For example, it has been shown that sea breezes can form with offshore background winds of 6–10 m s−1 (Biggs and Graves 1962; Arritt 1993; Porson et al. 2007; Crosman and Horel 2010), while studies to date suggest that small to medium-size lakes typically do not generate a lake breeze if background winds exceed 3–5 m s−1 (Segal et al. 1997; Shen 1998; Crosman and Horel 2010). The pressure gradients associated with lake and bay breezes are generally weaker and more localized than those associated with sea breezes, so the former are more easily overwhelmed by opposing background pressure gradients.

The aforementioned shallower depths of lakes and bays allow for larger and more rapid changes in water surface temperature (Porson et al. 2007), potentially augmenting ΔT. Rapid changes in water surface temperature (WST) due to factors such as currents and diurnal heating have been shown in numerical studies to influence lake- and other shallow-water body breezes (Segal and Pielke 1985; Arritt 1987; Porson et al. 2007; Crosman and Horel 2010). However, most modeling studies of water-to-land breezes assume constant WST (Crosman and Horel 2010). For sea breezes, this assumption often is reasonable. For lake and bay breezes, it often is not. Thus, in this study, we investigate the impacts of WST on bay-breeze formation. Our focus is the Chesapeake Bay in the Washington, D.C., Maryland, and Virginia metropolitan areas, where bay breezes are common in spring and summer (Sikora et al. 2010; Stauffer and Thompson 2015; Stauffer et al. 2015). Several studies have shown higher pollution levels near the ground in urban corridors on days associated with the Chesapeake Bay breeze (Segal et al. 1982; Loughner et al. 2011, 2014; Stauffer et al. 2015). The Chesapeake Bay breeze can increase cloud development (Loughner et al. 2011), and, although not specific to the Chesapeake, WB breezes can trigger convective initiation (Kingsmill 1995). Additionally, the Chesapeake Bay is shallow, and the diurnal cycle of its surface temperature will be shown to be several degrees Celsius in areas. It is for these reasons that we select the Chesapeake Bay as our area of focus. We hypothesize that by capturing the diurnal cycle of WST more accurately, or, more specifically, by capturing ΔT more accurately, the numerical model will benefit in its simulation of the Chesapeake Bay breeze.

For the purposes of this study, we approximate the temperature of the land surface with the 2-m air temperature above the land. For the temperature of the water in the calculation of ΔT, we use the surface temperature of the water, as described below in more detail. Detection of the numerically simulated Chesapeake Bay breeze will be done through the model-based detection algorithm developed by Hawbecker and Knievel (2022, hereinafter HK22).

We use the term water surface temperature to refer generally to temperatures of the “surfaces” of water bodies such as seas, bays, and lakes. When referring to variables or fields that formally are named sea surface temperature (SST), or some variation thereof, we retain that specific name. For example, as we later describe, the name of the variable in some of the datasets we used is SST, so we use that abbreviation, not WST, when specifically referring to that variable.

It is important to note, as Donlon et al. (2002) explained, that there is a range of possible meanings of “surface” water temperature, and measurements from in situ platforms such as buoys and ships are not at the same water depths (roughly the upper 10 m) as the layer for which temperatures from satellite radiometers are valid (the top layer of the water surface). Similar to satellite retrievals, numerical weather models calculate the water skin temperature at the interface between the water body and the atmosphere, as opposed to the “surface” temperature measured by buoys. While temperatures based on in situ sensors and satellite sensors might differ, the differences are found to be, on average, within 0.5 K, although they can be much larger depending on conditions such as wind speed (Donlon et al. 2002; Schluessel et al. 1990). Thus, while imperfect, for the analysis in this study we directly compare buoy water surface temperature observations with the satellite-derived products and numerical skin temperature output.

2. Method

A brief overview of the period of interest and model setup is provided here; however, more detailed documentation of the case study and observational data can be found in HK22.

a. Period of interest

July of 2019 had the highest number of bay-breeze days in the Chesapeake region (HK22). The 2-week stretch between 16 July and 1 August 2019 produced 12 bay-breeze days observed by surface stations in Maryland and Virginia. This is a large sample of observed bay-breeze days interspersed with days on which bay breezes were not observed, which allows us to evaluate where and when the model correctly or incorrectly simulates bay breezes.

b. Surface observations

Following HK22, the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center (NCDC) FTP server (Rutledge et al. 2006) is used to retrieve data from seven stations for both Automated Weather Observing Systems (AWOS; typically, 20-min output) and Automated Surface Observing Systems (ASOS; 5-min output) datasets in Maryland and Virginia. We consider six nearshore stations: Martin State Airport (MTN), Baltimore–Washington International Airport (BWI), U.S. Naval Academy (NAK), Patuxent River Naval Air Station (NHK), Webster Naval Outlying Field (NUI), and Aberdeen Proving Ground (APG), along with one inland station that is assumed to be out of the range of even the strongest bay breezes, Washington Dulles International Airport (IAD). Because of missing data from APG, observations from Phillips Army Airfield at APG, provided by the U.S. Army Test and Evaluation Command, are used in place of the APG ASOS data. Locations of these stations are shown in Fig. 1a. Temperature observations are missing from MTN, so that station is excluded from certain analyses.

Fig. 1. (a) Locations of the AWOS and ASOS stations used in this study. Coral-colored circles are the locations of “near shore” stations, and the black circle shows the location of the inland station KIAD. Cones represent the “onshore” wind direction from the MBDA. (b) Buoy locations, denoted by blue triangles, with the extent of (a) denoted by the dash-outlined box.

Oceanic and atmospheric data from 12 buoys (see Fig. 1b) are collected from the Chesapeake Bay Interpretive Buoy System (CBIBS; hourly output) and the National Data Buoy Center (NDBC; output time varies per station). The data are then resampled into hourly intervals. Measurements of interest from the buoys include WST, wind speed and direction, temperature, dewpoint, and pressure. Of the 12 buoys, two are selected from offshore locations within the Atlantic Ocean: Virginia Beach (VAB) and Delaware Beach (DEB). These locations are useful for evaluating model performance between deeper and shallower waters in the area of interest. The remaining 10 buoys are within the Chesapeake Bay and are referred to as shallow-water buoys in this paper.
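For readers reproducing this preprocessing, the sketch below illustrates one way to resample irregular buoy records onto an hourly grid with pandas. It is a minimal example under assumed column names, not the code used for this study, and it ignores the circular averaging that wind direction would require in practice.

```python
# Minimal sketch, not the code used in this study: average irregular buoy records into
# hourly bins. Assumes a pandas DataFrame indexed by observation time with (hypothetical)
# columns such as "wst", "wind_speed", "temperature", "dewpoint", and "pressure".
import pandas as pd

def to_hourly(buoy_df: pd.DataFrame) -> pd.DataFrame:
    """Hourly means of all numeric columns; hours with no observations become NaN.

    Note: wind direction is circular and would need vector averaging, omitted here.
    """
    return buoy_df.sort_index().resample("1H").mean()
```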

c. Water surface temperature datasets

Eight daily WST datasets are downloaded to be incorporated into the initial and boundary conditions of the WRF Model (Fig. 2). Seven of these datasets are downloaded from the Group for High Resolution Sea Surface Temperature Level-4 (GHRSST-L4) database including: the Global 1-km SST (G1SST) analysis from JPL OurOcean Group (JPL OurOcean 2010), the Multiscale Ultrahigh Resolution (MUR) dataset (NASA Jet Propulsion Laboratory 2015), the Office of Satellite and Product Operations (OSPO) analysis (OSPO 2015), the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) analysis (UKMO 2005), the Naval Oceanographic Office (NAVO) dataset (NASA Jet Propulsion Laboratory 2018), the Canadian Meteorological Center (CMC) analysis product (Canada Meteorological Center 2016), and the NOAA National Centers for Environmental Information (NCEI) analysis (NCEI 2016) (see Table 1). The Moderate Resolution Imaging Spectroradiometers (MODIS) composite WST dataset as described by Knievel et al. (2010) is also used. The MODIS-based product is on a grid with 4.625-km spacing in both the latitudinal and longitudinal directions. Previous studies have shown the ability of this dataset to be incorporated into numerical weather prediction models for simulating sea breezes (Knievel et al. 2010) and weather influenced by shallow lakes (Grim et al. 2013), improving on results from simulations based on other WST datasets.

Fig. 2. Water surface temperature and buoy locations for each case. Buoys are denoted by orange open circles where WST data are available and by red crosses where data are missing. The number of buoys with WST values within the domain is noted near the top of each panel.

Table 1. Spatial granularity and types of sensors used in each WST dataset.

Several of the datasets used within this study include assimilated surface observations from ships (OSPO and NCEI) and both floating and moored buoys (G1SST, MUR, OSPO, OSTIA, CMC, and NCEI) in the satellite-derived product to bias-correct and reduce error (see Table 1). Two datasets, MODIS and NAVO, do not include assimilated surface observations. However, NAVO relies on climatological data to fill gaps of missing data, and the MODIS composite utilizes a 12-day running average to fill gaps.

The highest-resolution datasets, G1SST and MUR (Figs. 2a and 2b, respectively), are comparable in coverage and have valid data at all buoy locations (coral-colored circles), but have some noticeable differences in WST, especially within the Chesapeake Bay and its tributaries. Both OSPO and OSTIA (Figs. 2d and 2e, respectively) also cover the vast majority of the Chesapeake Bay, although the OSPO data are much smoother and lack the fine-scale features that are captured in OSTIA and the higher-resolution datasets. NAVO (Fig. 2f) and CMC (Fig. 2g), although roughly half the resolution of OSPO, appear to capture as much detail in the WST pattern around the Chesapeake Bay. In NCEI (Fig. 2h) and the MODIS composite (Fig. 2c), there are many more areas of missing values as coverage in the narrower water bodies worsens. These datasets provide no WST values at the LWT buoy location but do at the rest of the buoy locations. The ERA-Interim native WST dataset (ERA-I; Fig. 2i) is missing data at all shallow-water buoy locations and will be discussed further in section 5.

d. Bay-breeze detection algorithm

Many observation-based detection algorithms (OBDAs) have been developed in order to identify bay breezes from surface observations (Laird et al. 2001; Sikora et al. 2010; Azorin-Molina et al. 2011; Stauffer and Thompson 2015; Stauffer et al. 2015; Hughes and Veron 2018; Mazzuca et al. 2019, for example). These detection algorithms, however, perform poorly when applied to model output, for which they were not intended (HK22). Therefore, we use the model-based detection algorithm (MBDA) from HK22 to analyze how well the simulations conducted herein produce bay breezes.

The “truth” we use for assessing model performance comes from applying OBDAs to the surface station data discussed in section 2b. In particular, we apply the algorithms from Stauffer and Thompson (2015), Stauffer et al. (2015), and Sikora et al. (2010), as implemented by HK22, with the definition of onshore winds coming from the MBDA (Fig. 1a). For comparison with the MBDA, we combine the results from all OBDAs to develop a list of days on which a bay breeze was observed at each station. We note this to explicitly recognize that the OBDAs and observational datasets are not perfect; thus, the truth that we are using to evaluate model results is also imperfect. With that said, we are confident that the quality of the collected observations and the implementation of the OBDAs allows for useful comparison and discussion.
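The sketch below shows one plausible way to merge the per-algorithm results into a single observed bay-breeze-day list; the logical-OR combination rule and the data structures are assumptions for illustration, not necessarily the exact procedure of HK22.

```python
# Minimal sketch, not the authors' procedure: combine boolean OBDA results (one Series per
# algorithm, indexed by station and date) into a single observed bay-breeze-day record.
# The OR rule ("any algorithm flags the day") is an assumption for illustration.
import pandas as pd

def combine_obdas(obda_results: list[pd.Series]) -> pd.Series:
    combined = pd.concat(obda_results, axis=1).fillna(False)
    return combined.any(axis=1)  # True where at least one OBDA detects a bay breeze
```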

3. Model setup

The simulations conducted for this study use the same model setup described by HK22. The simulations consist of three one-way nested domains with Δx,y = 27, 9, and 3 km on domains 1, 2, and 3, respectively. The 2-week simulations comprise eight 60-h runs that overlap each other by 12 h, the model spinup time. By designing the simulations in this way, we limit model drift from the initial and boundary conditions. The result is a continuous set of simulated data from 0600 UTC 16 July to 0600 UTC 1 August 2019. Each simulation uses the Yonsei University (YSU) boundary layer scheme (Hong and Lim 2006), revised Monin–Obukhov surface layer scheme (Jiménez et al. 2012), the unified Noah land surface model (Tewari et al. 2004), the RRTMG longwave and shortwave radiation schemes (Iacono et al. 2008), and on d01 and d02, the Kain–Fritsch cumulus parameterization (Kain 2004). By default, WST is not updated in the WRF Model during a simulation, so we turn on the flag sst_update to ensure that WST changes with time. All results in this article are from the innermost domain 3 (Δx,y = 3 km).
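Stitching the eight overlapping runs into one continuous record amounts to discarding the first 12 h of each run and concatenating the rest. The sketch below is a hypothetical post-processing step, assuming the output has already been converted to datasets with a datetime “Time” coordinate; file handling and variable layout are assumptions.

```python
# Minimal sketch, not the authors' workflow: drop the 12-h spinup from each 60-h run and
# concatenate the remainders into a continuous time series. Assumes each run has been
# post-processed into an xarray Dataset with a datetime "Time" coordinate.
import pandas as pd
import xarray as xr

SPINUP = pd.Timedelta(hours=12)

def stitch_runs(run_files: list[str]) -> xr.Dataset:
    pieces = []
    for path in sorted(run_files):
        ds = xr.open_dataset(path)
        start = pd.to_datetime(ds["Time"].values[0]) + SPINUP
        pieces.append(ds.sel(Time=slice(start, None)))  # keep only post-spinup output
    return xr.concat(pieces, dim="Time")
```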

To test the impact of varying WST on bay-breeze simulations, we select a single initial condition and boundary condition (ICBC) from the simulations from HK22 consisting of ERA-I (Dee et al. 2011), the ERA5 reanalysis (Hersbach et al. 2020), and the Final Operational Global Analysis product (FNL). To do this, we compare the bias and root-mean-square error (RMSE) of WST, 2-m temperature, and 10-m wind speed from the aforementioned ICBC products at the AWOS and ASOS locations over the 2-week period (Fig. 3). ERA-I produces the lowest bias and RMSE in both 2-m temperature and 10-m wind speed, while performing poorly in WST. It is for this reason that we select ERA-I to be the base ICBC product for this study: Overland performance seems promising, while overwater performance could be improved. Thus, we generate new sets of ICBCs with ERA-I as the base and overwrite SST with the auxiliary WST datasets. This is done by iterating over all grid points used for the boundary conditions of each domain (the met_em files, specifically) and replacing each SST value with the spatially closest value from the WST datasets.
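The sketch below illustrates the nearest-neighbor replacement described above for a single met_em file. It is a simplified example, not the exact code used here: the WST dataset’s coordinate names are assumptions, distances are computed in degrees of latitude and longitude, and the met_em file is assumed to contain the WPS fields SST, XLAT_M, and XLONG_M.

```python
# Minimal sketch, not the exact code used in this study: overwrite SST in a met_em file
# with the spatially closest valid value from an auxiliary WST dataset (KD-tree lookup).
# WPS field names (SST, XLAT_M, XLONG_M) are standard; the WST coordinate names ("lat",
# "lon") and 2D layout are assumptions.
import numpy as np
import xarray as xr
from scipy.spatial import cKDTree

def overwrite_sst(met_em_path: str, wst: xr.DataArray) -> None:
    met = xr.open_dataset(met_em_path).load()
    # Build a KD-tree from all WST points that have valid (non-missing) data.
    wlat, wlon = np.meshgrid(wst["lat"].values, wst["lon"].values, indexing="ij")
    valid = np.isfinite(wst.values)
    tree = cKDTree(np.column_stack([wlat[valid], wlon[valid]]))
    # Find the nearest valid WST value for every met_em grid point (distances in degrees).
    mlat = met["XLAT_M"].values[0].ravel()
    mlon = met["XLONG_M"].values[0].ravel()
    _, idx = tree.query(np.column_stack([mlat, mlon]))
    met["SST"].values[0] = wst.values[valid][idx].reshape(met["SST"].shape[1:])
    met.to_netcdf(met_em_path.replace(".nc", "_newSST.nc"))
```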

Fig. 3. (a) WST averaged over each buoy and (c) 2-m temperature, and (e) 10-m wind speed averaged over each nearshore station, along with (b),(d),(f) the respective plots of bias vs RMSE. Observations are denoted by a dotted black line in (a), (c), and (e).

a. Simulation matrix

Some of the auxiliary WST datasets have missing values within the Chesapeake Bay (see Fig. 2). Currently in WRF, any missing SST values are filled with the skin temperature values from the beginning of the simulation, and these values are not updated for the duration of the simulation. The assumption that water surface temperature and water skin temperature are interchangeable is not entirely accurate, though it is most likely the best solution when data are missing. More problematic is that WRF does not update these filled values for the duration of the simulation. Thus, an additional set of ICBCs is generated for each auxiliary WST product in which the values of skin temperature from the ERA-I ICBCs are used to fill any missing SST values prior to the simulation. This effectively defines a complete SST dataset for WRF, and every water cell has SST updated along with the boundary conditions. From this we are able to assess the impact of missing WST data on simulating and detecting the Chesapeake Bay breeze.
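A hypothetical sketch of this fill step is shown below: missing SST values over water cells are replaced with the SKINTEMP field already present in the met_em files. The missing-value flag is an assumption; the actual files may use a different sentinel or a _FillValue attribute.

```python
# Minimal sketch, not the exact code used in this study: fill missing SST over water cells
# with the ERA-I skin temperature (SKINTEMP) so that every water cell carries a valid,
# time-varying SST. The missing-value convention (zeros or NaNs) is an assumption.
import numpy as np
import xarray as xr

def fill_missing_sst(met_em_path: str, missing_flag: float = 0.0) -> None:
    met = xr.open_dataset(met_em_path).load()
    sst = met["SST"].values
    skin = met["SKINTEMP"].values
    water = met["LANDMASK"].values == 0
    missing = water & (np.isclose(sst, missing_flag) | ~np.isfinite(sst))
    sst[missing] = skin[missing]  # backfill missing water-cell SST with skin temperature
    met.to_netcdf(met_em_path.replace(".nc", "_filledSST.nc"))
```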

The auxiliary WST datasets are daily, so they cannot capture the diurnal variation of WST. It has been shown that capturing the diurnal cycle of WST can be important for the accuracy of numerical simulations (Zeng and Beljaars 2005; Kawai and Wada 2007; Salisbury et al. 2018) and can influence the formation of simulated lake and bay breezes (Porson et al. 2007; Crosman and Horel 2010). This can be done through coupling an atmospheric model with a lake or ocean model (Zhang et al. 2019; Salisbury et al. 2018), but at a high computational cost. Alternatively, a popular option is to use prognostic equations to calculate the diurnal cycle of WST in the model (Fairall et al. 1996; Webster et al. 1996; Stuart-Menteth et al. 2003; Zeng and Beljaars 2005; Takaya et al. 2010; Filipiak et al. 2012; Salisbury et al. 2018). Within WRF, an option exists, sst_skin, that is able to accurately reproduce the diurnal cycle in WST over the open ocean (Zeng and Beljaars 2005). However, WST over the open ocean does not vary as much, diurnally, as in shallower waters (Porson et al. 2007). It has already been shown that shallow-water diurnal variations can strongly affect water-to-land breezes; thus, capturing the diurnal cycle of WST might strongly affect the formation of simulated bay breezes. For each combination of ICBC and updated WST, we run additional simulations with sst_skin turned on.

In total, this generates a 9 × 2 × 2 matrix of 36 simulations: 9 WST datasets (referred to as cases in this study), each run with missing WST values either left unfilled or filled by skin temperature, and with sst_skin either off or on.

4. Preliminary assessment

Considering strictly the ERA-I boundary conditions with each WST dataset—prior to any simulations—we are able to speculate on how each will perform in simulating the Chesapeake Bay breeze. The 2-m temperature at the inland station, IAD, from the ERA-I dataset agrees well with observations (Fig. 4a). Each WST dataset varies in its depiction of WST averaged over all shallow-water buoy locations (Fig. 4b); however, datasets such as OSTIA, MUR, and OSPO provide the most accurate depiction when considering RMSE and bias (Fig. 4c). MODIS and NAVO both clearly overestimate WST during the second half of the period of interest (Fig. 4b). As mentioned previously, these two datasets do not include assimilated surface observations from buoys and ships. It is possible that this contributed to the inaccuracies of these datasets. Subtracting the average WST over all shallow-water buoys from the 2-m temperature at IAD, we generate a depiction of ΔT in the Chesapeake Bay region for each WST dataset (Fig. 4d). Most WST products perform similarly considering RMSE of 〈ΔT〉 (Fig. 4e); however, the native ERA-I WST data and MODIS composite are the worst performers. In particular, the native ERA-I ΔT is strongly warm biased and has the highest RMSE.

Fig. 4. The (a) 2-m temperature from IAD for ERA-I ICBCs, (b) WST for each WST dataset averaged over each buoy location, and (d) ΔT calculated as the difference between 2-m temperature at IAD and the average WST over all nearshore stations. Observations are denoted by a dotted black line in (a), (b), and (d). Also shown are bias vs RMSE in (c) WST and (e) ΔT for each case.

Consider the lake breeze index (LBI; Biggs and Graves 1962), defined as U²/(CpΔT), where U is the inland near-surface wind speed, Cp is the heat capacity of air, and ΔT is the temperature difference between land and the water body. In this study, the inland wind speed and temperature are taken from IAD, which is roughly 85 km from the Chesapeake Bay (see Fig. 1). This distance from shore is significantly larger than that used by Biggs and Graves (1962); however, we aim simply to test the applicability of the LBI in this scenario, for which it was not strictly designed. According to the LBI, values of 3.0 or less indicate a lake-breeze day, a threshold calibrated from experiments in the Lake Erie region (Biggs and Graves 1962).
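As a point of reference, the LBI calculation and day classification can be expressed in a few lines, as in the hypothetical sketch below. The numeric value used for Cp must follow the unit convention of Biggs and Graves (1962) for the 3.0 threshold to be meaningful; the value used here is an assumption for illustration.

```python
# Minimal sketch, not the authors' code: the lake breeze index LBI = U^2 / (cp * dT) of
# Biggs and Graves (1962) and a day classification against a chosen critical value.
# The numeric value of cp (here 1.005, i.e., J g-1 K-1) is an assumption; it must match
# the unit convention of the original index for the 3.0 threshold to apply.
import numpy as np

def lake_breeze_index(u_inland, dT, cp=1.005):
    """u_inland: inland near-surface wind speed (m s-1); dT: land minus water temperature (K)."""
    return np.asarray(u_inland, float) ** 2 / (cp * np.asarray(dT, float))

def predicted_breeze_days(u_inland, dT, critical=3.0):
    """True where the LBI predicts a WB-breeze day (positive dT and LBI <= critical)."""
    lbi = lake_breeze_index(u_inland, dT)
    return (np.asarray(dT) > 0) & (lbi <= critical)
```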

The LBI values based on ERA-I will be smaller than those based on the other WST datasets because of a larger ΔT in the denominator (Fig. 4e). Accordingly, we expect simulations with the complete ERA-I ICBCs to generate more WB breezes than the other datasets. Conversely, if we consider the MODIS composite, in which ΔT is biased low (negative), the opposite is true, and we expect fewer WB breezes to develop in these simulations (Fig. 4e). Inherent in the LBI is the physical balance between background wind speeds and the developed pressure gradient between land and water. If the pressure gradient associated with the differential heating between land and sea (approximated in some sense by ΔT) can sufficiently overcome the background pressure gradient (approximated by the near-surface wind speed U) to produce an LBI value of 3.0 or less, then a WB breeze is predicted to form.

The critical value of 3.0 that discriminates between WB-breeze days and non-WB-breeze days might be specific to the Lake Erie region for which it was designed. Thus, calculating the LBI for each WST dataset with different critical values and comparing with observations yields insight into how the different WST datasets may affect bay-breeze formation (Table 2). The predictions of the number of breezes generated by the ERA-I and MODIS datasets, based solely on ΔT, are confirmed by the LBI analysis: ERA-I is predicted to have the highest number of bay-breeze days when the critical LBI value is 3.0 and remains among the highest when the critical LBI value is increased. Meanwhile, the LBI for the MODIS composite predicts the fewest bay-breeze days across all critical LBI values. Outside of these two cases, we might expect the simulations based on the rest of the WST datasets to produce a similar number of WB breezes, according to the LBI.

Table 2. WB breeze days predicted from each WST dataset according to the LBI of Biggs and Graves (1962) for four definitions of the critical LBI value.

5. Results

We first validate simulations by the WRF Model against offshore observations in order to determine which configuration of the model produces the most accurate WST field (section 5a). This performance is cross checked against how well each configuration simulates inland conditions, including the temperature difference between land and water (section 5b). Next, the model-based detection algorithm (MBDA) developed by HK22 is applied to the model output to detect the simulated bay breezes and determine any effects of WST accuracy on the simulation of the Chesapeake Bay breeze (section 5c). Last, we revisit the LBI-based predictions of WB-breeze occurrence and compare them with the bay breezes detected in the simulations by the MBDA (section 5d).

a. WST results

WST from the native ERA-I dataset and from each of the auxiliary base WST datasets shows varying agreement with observations at different buoy locations (Fig. 5). At the shallow-water buoys (Figs. 5a–c), the observations have a strong diurnal cycle of temperature that is lacking in the numerical model. This diurnal cycle is still apparent, but generally weaker, at the deeper buoys (Figs. 5d,e), where the simulations agree more with observations. At each of these buoys, the MODIS dataset appears not to capture fully a net cooling that starts around 25 July 2019, and the dataset overpredicts WST for the remainder of the period of interest (POI) at each buoy location. The NAVO dataset better approximates the net cooling of WST, but the cooling is delayed at most buoy locations by several days. Recall that the native ERA-I WST dataset is missing values at each of the shallow-water buoy locations (Fig. 2i). This results in a pattern in which WST remains constant for two days and then adjusts to a new value for the next two days, repeating throughout the POI. Within the WRF code, it appears that when the model encounters missing WST information, it fills the cell with the value of skin temperature at the beginning of the simulation and then does not update the value for the remainder of the simulation. Given that these simulations are eight individual simulations stitched together, this behavior is evident: WST at the shallow-water buoys is updated only when the individual simulations restart. At the deep-water buoys, the WST values update daily, which is the refresh rate of the native ERA-I WST dataset.

Fig. 5. Time series of WST at shallow-water buoys (a) ANN, (b) BIS, (c) FLN and deep-water buoys (d) VAB and (e) DEB for each case. Observations are denoted by black lines.

To compare these simulations in a more objective way, WST results are plotted on Taylor diagrams (Fig. 6; Taylor 2001). As recommended by Taylor (2001), the root-mean-square difference and standard deviation of WST at each buoy location are normalized through division by the standard deviation of the observations at that location. By doing this, we are able to average the WST results at each shallow-water buoy location to yield an overall performance of each WST dataset. As expected because of the lack of a diurnal cycle in these datasets, the correlation and normalized standard deviation are fairly low. MODIS, NAVO, and ERA-I produce the lowest correlations and highest RMSE, as a result of the aforementioned issues in capturing the cooling of WST in the middle of the POI. The rest of the WST datasets lead to simulations that perform somewhat similarly, with the lone notable exception that the normalized standard deviation of the G1SST dataset is slightly larger than that of the observations. This appears to be because of overestimation of WST early in the simulation followed by underestimation toward the end of the simulation, resulting in an overall increased standard deviation.
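The normalized statistics behind each point on these diagrams can be computed as in the sketch below (a generic implementation of Taylor 2001, not the authors' plotting code); per-buoy values are then averaged as described above.

```python
# Minimal sketch, not the authors' code: normalized Taylor-diagram statistics for one buoy:
# correlation, standard deviation, and centered RMS difference, the latter two normalized
# by the standard deviation of the observations (Taylor 2001).
import numpy as np

def taylor_stats(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    mask = np.isfinite(sim) & np.isfinite(obs)
    sim, obs = sim[mask], obs[mask]
    sigma_obs = obs.std()
    corr = np.corrcoef(sim, obs)[0, 1]
    norm_std = sim.std() / sigma_obs
    # Centered (bias-removed) RMS difference, normalized by the observed standard deviation.
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2)) / sigma_obs
    return corr, norm_std, crmsd
```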

Fig. 6. Normalized Taylor diagram for WST for each base case averaged over all shallow-water buoy locations.

From this, we suspect that the inclusion of the prognostic equation for diurnal variations in WST might help to improve WST correlation, standard deviation, and RMSE. Activating sst_skin results in smoother WST fields that do include diurnal oscillations (Fig. 7). At several buoys, however, the amplitude of these diurnal oscillations is nearly 20 K in multiple simulations (Figs. 7a,c). These extreme oscillations occur strictly at buoys where the WST datasets are missing data: ERA-I, NCEI, and the MODIS composite at LWT (Fig. 7a), and ERA-I at TBL (Fig. 7c). When the missing values are filled with skin temperature values from the ICBCs, WRF updates WST as expected. From this, the erroneous performance of sst_skin is resolved, and WST at each of the problematic locations is now more realistic (Figs. 7b,d).

Fig. 7. Time series of WST at (a),(b) LWT and (c),(d) TBL (left) without filling missing WST values and (right) with filling the missing WST values for each case with sst_skin activated.

For the cases in which there are no missing WST data at buoy locations, correlation and standard deviation are higher, and RMSE is lower, when activating sst_skin (Figs. 8a,b,d–g). The results pair up: in each of these cases with complete WST fields, the default and filled setups perform identically, as do the sst_skin and fill + sst_skin setups. This is because filling the datasets in which there are no missing values at the buoy locations does nothing to the WST at these locations. Thus, WST performance for these setups, filled or unfilled, is exactly the same.

Fig. 8. Normalized Taylor diagrams of WST averaged over all shallow-water buoy locations for each case. Arrows point from simulations with sst_skin turned off to the respective simulations with sst_skin turned on.

For the cases in which there are missing WST values at a single buoy location (MODIS and NCEI), simply filling the WST values does not appear to greatly change the simulated WST performance (Figs. 8c,h). The native ERA-I WST data are missing values at every shallow-water buoy location, and filling in the missing values improves performance according to each Taylor diagram metric (Fig. 8i). The major differences can be seen when sst_skin is enabled, which allows extreme oscillations of WST at the missing buoy locations to strongly affect the results even when only one buoy is missing data. Use of sst_skin improves correlation in all cases, and by filling the missing WST values before running the WRF simulations, correlation is further increased, RMSE is lowered, and the standard deviation is protected from the vast overpredictions of the WST diurnal cycle. This is exemplified most clearly with the ERA-I case, in which the use of sst_skin with a WST field that is missing at all shallow-water buoy locations produces large error and standard deviation (Fig. 8i).

In the ERA-I case, when sst_skin is turned on, the range of simulated WST is roughly 2 times that of observed WST (Fig. 9b). How WRF handles missing WST is potentially problematic in that the spread and correlation of simulated values are small due to WST being assigned once at the beginning of the simulation and then not updated (Fig. 9a). When filling the WST values (Fig. 9e), we see improved spread and correlation of WST in comparison with observations, further enhanced by the use of sst_skin (Fig. 9f).

Fig. 9. Two-dimensional histograms of observed (x axis) vs simulated (y axis) WST from the four ERA-I configurations [(a) default; (b) sst_skin; (e) filled; (f) filled with sst_skin] and the four OSTIA configurations [(c) default; (d) sst_skin; (g) filled; (h) filled with sst_skin].

For cases in which there are no missing WST values at the buoy locations, such as the OSTIA runs, we see that the distributions and correlations are roughly equal between the nonfilled and filled simulations (Figs. 9c,g). This holds true for the simulations with sst_skin (Figs. 9d,h) as well. When using sst_skin with the OSTIA dataset, the distributions are close but not exactly equivalent between the filled and unfilled simulations. This arises from the fact that while the OSTIA dataset does cover every buoy location that we consider herein, it does not cover every water cell in the simulation domain. Thus, because sst_skin updates WST based on a function of radiative variables at the surface, wind speed, and temperature (to name a few), filling any missing WST values in the domain will change the model solution for WST, which changes the model solution for radiation, wind speed, etc., which in turn affects the solution for WST elsewhere in the domain.
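The two-dimensional histograms discussed above (Fig. 9) can be reproduced generically as in the sketch below, pooling observed and simulated WST over all shallow-water buoys and times; this is an illustrative plotting recipe, not the authors' figure code.

```python
# Minimal sketch, not the authors' figure code: a 2D histogram of observed vs simulated
# WST with a 1:1 reference line, pooled over buoys and times.
import numpy as np
import matplotlib.pyplot as plt

def plot_wst_histogram(obs, sim, nbins=40):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mask = np.isfinite(obs) & np.isfinite(sim)
    counts, xedges, yedges = np.histogram2d(obs[mask], sim[mask], bins=nbins)
    fig, ax = plt.subplots()
    ax.pcolormesh(xedges, yedges, counts.T)  # counts.T so x = observed, y = simulated
    ax.plot(xedges, xedges, "k--", lw=1)     # 1:1 line for reference
    ax.set_xlabel("Observed WST (K)")
    ax.set_ylabel("Simulated WST (K)")
    return fig, ax
```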

b. ΔT results

The driving factor in WB-breeze formation is the temperature difference between land and sea, ΔT. To calculate ΔT, we take the 2-m temperature (as a proxy for temperature of the land surface) at the inland station, IAD, and subtract the mean WST of the shallow-water buoys. For each of the cases in which there are no missing WSTs (Figs. 10a,b,d–g), there is little difference in RMSE, correlation, and standard deviation between the filled and nonfilled setups. Of these cases, NAVO shows slightly worse performance according to all Taylor diagram metrics, overall. In the cases with missing WSTs at buoy locations, we see changes in the sst_skin simulations (Figs. 10c,h,i), most notably in simulations based on the native ERA-I dataset, in which correlation and RMSE drastically worsen. The default ERA-I case is improved upon when missing WSTs are filled with skin temperatures before running the simulations.

Fig. 10. Normalized Taylor diagrams of ΔT averaged over all nearshore stations for each case.

Among cases with no missing WSTs at buoy locations, we do not observe much difference in performance when filling missing WSTs at locations away from the buoys, or when using sst_skin. This is because the amplitude of inland 2-m temperature is much larger than that of WST, thus dominating the calculation of ΔT (Fig. 11a). However, in cases in which WST is missing at buoy locations, WST when sst_skin is activated has amplitudes that are on par with, or larger than, that of 2-m temperature (Fig. 11b). When these buoy locations are filled with skin temperature, performance resembles that of the simulations without missing WST.

Fig. 11. Time series output of 2-m temperature (shades of red) and WST (shades of blue) for each setup of the (a) OSTIA and (b) ERA-I cases.

c. Detection of the Chesapeake Bay breeze

Using hourly output of wind speed, wind direction, temperature, cloud water, and precipitation, we apply the MBDA to each simulation to detect the simulated Chesapeake Bay breeze. Results are compared with observed bay breezes identified by the three OBDAs described in section 2. For the comparison, we use a confusion matrix, also known as a 2 × 2 contingency table (Wilks 2011; Jolliffe and Stephenson 2012). In the confusion matrix, predicting a bay breeze when one is observed results in a hit (or true positive), whereas predicting one when none is observed results in a false alarm (FA; or false positive). Conversely, not predicting a bay breeze when one is observed results in a miss (or false negative), and not predicting one when none is observed results in a correct rejection (CR; or true negative).

Aside from the ERA-I and MODIS composite cases, the WST datasets produce very similar numbers of hits (or true positives) for bay-breeze prediction (Table 3). ERA-I produces, on average, more hits than the other cases, whereas the MODIS composite produces fewer. The MODIS composite also produces the second fewest FAs of all cases (G1SST produces the fewest), whereas ERA-I produces the most. The ERA-I simulation with sst_skin—the simulation with the problematic WST results—produces the fewest WB breezes of the ERA-I case simulations (hits + FAs). Overall, it is difficult to draw additional hard conclusions from simply inspecting Table 3. Thus, we summarize the results of each confusion matrix by computing the F1 score, defined as
$$\mathrm{F1} = 2\times\frac{\dfrac{\text{hits}}{\text{hits}+\text{FAs}}\times\dfrac{\text{hits}}{\text{hits}+\text{misses}}}{\dfrac{\text{hits}}{\text{hits}+\text{FAs}}+\dfrac{\text{hits}}{\text{hits}+\text{misses}}}.$$
The F1 score is the harmonic mean of precision and recall, metrics that quantify, respectively, the ratio of correctly predicted bay breezes (hits) to the total number of predicted bay breezes (correct or incorrect; hits + FAs) and the ratio of correctly predicted bay breezes to the total number of observed bay breezes (hits + misses). A perfect model’s F1 score would be 1, meaning no FAs or misses, and the worst possible score is 0.
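The confusion-matrix tallies and the F1 score follow directly from boolean arrays of observed and predicted bay-breeze days, as in the sketch below (a generic implementation of the definitions above, not the authors' code).

```python
# Minimal sketch, not the authors' code: tally the 2 x 2 confusion matrix from boolean
# observed/predicted bay-breeze-day arrays and compute the F1 score defined in the text.
import numpy as np

def confusion_counts(observed, predicted):
    observed, predicted = np.asarray(observed, bool), np.asarray(predicted, bool)
    hits = int(np.sum(observed & predicted))                  # true positives
    misses = int(np.sum(observed & ~predicted))               # false negatives
    false_alarms = int(np.sum(~observed & predicted))         # false positives
    correct_rejections = int(np.sum(~observed & ~predicted))  # true negatives
    return hits, misses, false_alarms, correct_rejections

def f1_score(hits, misses, false_alarms):
    precision = hits / (hits + false_alarms)
    recall = hits / (hits + misses)
    return 2 * precision * recall / (precision + recall)
```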
Table 3. Confusion matrix results for each WST case and setup. An X denotes that this option was active in the simulation.

The F1 scores for nearly every WST dataset are between 0.5 and 0.6, which essentially means the models predicted the outcome correctly just over one-half of the time (Fig. 12). The MUR simulations produce, on average, the highest F1 scores (Fig. 12b), and MODIS composite simulations produce the lowest (Fig. 12c). Among the cases without missing WSTs (Figs. 12a,b,d–g), the configuration with the highest F1 score is typically one of the simulations with sst_skin (diamond or spade). Among the cases with missing WSTs (Figs. 12c,h,i), the unfilled sst_skin configuration (diamond) is the worst performer, and filling the missing WSTs appears to generally improve performance (club and spade).

Fig. 12. The F1 scores for each simulation.

To further compare performances among these cases, we calculate the critical success index (CSI; also known as the threat score), accuracy, Matthews correlation coefficient (MCC), markedness (MK), and informedness (IF), defined as follows:
$$\text{accuracy}=\frac{\text{hits}+\text{CRs}}{\text{hits}+\text{CRs}+\text{FAs}+\text{misses}},$$
$$\mathrm{MCC}=\frac{(\text{hits}\times\text{CRs})-(\text{FAs}\times\text{misses})}{\sqrt{(\text{hits}+\text{FAs})(\text{hits}+\text{misses})(\text{CRs}+\text{FAs})(\text{CRs}+\text{misses})}},$$
$$\mathrm{MK}=\frac{\text{hits}}{\text{hits}+\text{FAs}}+\frac{\text{CRs}}{\text{CRs}+\text{misses}}-1,\quad\text{and}$$
$$\mathrm{IF}=\frac{\text{hits}}{\text{hits}+\text{misses}}+\frac{\text{CRs}}{\text{CRs}+\text{FAs}}-1.$$
Similar to the F1 score, accuracy varies between 0 and 1 (perfect). MCC varies between −1 and 1 and is evaluated like other correlation coefficients. Markedness and informedness are related to precision and recall, respectively, but also incorporate the inverse precision and inverse recall in order to account for the negative predictions (CRs and misses) (Powers 2020). Markedness quantifies how consistent the model is in predicting the right (MK = 1) or wrong (MK = −1) values, or whether we simply cannot trust the model (MK = 0). Informedness quantifies how often the model correctly predicts positives as positives and negatives as negatives. From IF = 1, we would conclude that the model correctly predicts both bay-breeze days and non-bay-breeze days. From IF = −1, we would conclude that the model predicts the exact opposite of what is observed—is unfailingly wrong, in other words.
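These additional metrics follow directly from the same confusion-matrix counts; a generic implementation of the definitions above (not the authors' code) is sketched below.

```python
# Minimal sketch, not the authors' code: accuracy, MCC, markedness, and informedness
# computed from the 2 x 2 confusion-matrix counts defined above.
import math

def verification_metrics(hits, misses, false_alarms, correct_rejections):
    h, m, fa, cr = hits, misses, false_alarms, correct_rejections
    accuracy = (h + cr) / (h + cr + fa + m)
    mcc = (h * cr - fa * m) / math.sqrt((h + fa) * (h + m) * (cr + fa) * (cr + m))
    markedness = h / (h + fa) + cr / (cr + m) - 1
    informedness = h / (h + m) + cr / (cr + fa) - 1
    return {"accuracy": accuracy, "MCC": mcc, "MK": markedness, "IF": informedness}
```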

The accuracy of each case shows a subtle relationship with the resolution of the WST dataset, in that the higher-resolution datasets generally produce higher accuracy (Fig. 13a). The accuracy scores in general are very high, although this is mostly because the number of CRs is roughly equal to all of the other confusion matrix outcomes combined (see Table 3). Similar to the F1 scores, the highest-accuracy configuration for each WST case is generally one of the setups with sst_skin (diamond or spade). According to these metrics, however, even the ERA-I simulation with sst_skin performs better than the other ERA-I setups because the many CRs make up for the deficiency in hits. The values of MCC (Fig. 13b) are not very impressive on their own, though they are all positive, implying a positive correlation between the model predictions and observed outcomes. We continue to see the trend that the setups with sst_skin activated produce the best results. Informedness and markedness (Figs. 13c,d) show similar trends with, again, decent but not great overall performance. These metrics are also heavily weighted by the relatively large number of CRs, although the corresponding recall and precision, which ignore the influence of CRs, remain positive and above 0.5 (see Fig. 12). Values of IF and MK in this range suggest that we can trust the simulations to give us the correct result far more often than not, but there is overall room for improvement among the simulations, the MBDA procedure or parameters, and/or the observation of bay breezes. Surprisingly, the worst performer according to these metrics is the CMC dataset because of the nearly equal numbers of hits, misses, and FAs (Table 3). The number of hits is not lower than the hits from other setups, nor are the numbers of misses and FAs extraordinary. Rather, the CMC simulations fail to produce many more positive outcomes than negative outcomes.

Fig. 13. (a) Accuracy, (b) MCC, (c) IF, and (d) MK (all defined in section 5c) for each simulation.

d. LBI comparison

Because of their cooler simulated water surface (equating to a high bias of ΔT), the simulations based on the native ERA-I WST dataset do, in fact, produce the most bay breezes (Fig. 14a) and bay-breeze days (Fig. 14b), on average. Conversely, the relatively lower ΔTs of the MODIS composite (warm-biased water surface) end up producing the fewest bay breezes and bay-breeze days. Surface sensible heat fluxes over land and water (not shown) are consistent with this finding. Heat fluxes during the day over water in the simulations are directly related to the different WST datasets, while heat fluxes over land differ very little among simulations because of how we designed the investigation. Cases in which WST is lower (e.g., ERA-I) produce lower values of surface sensible heat flux over water, resulting in higher ΔT values. Considering again the LBI predictions from the ICBCs (section 4), we see that the index’s predictions of bay-breeze days are good when the critical value is adjusted to 4.0 (Fig. 14). The LBI is a fairly simple algorithm for the prediction of WB breezes (lake breezes, specifically), yet for this study it captures the necessary physical ingredients for WB-breeze formation and is sensitive to the subtle differences that promote, or do not promote, the formation of bay breezes.

Fig. 14. (a) Number of WB breezes and (b) number of days in which a WB breeze is recorded at any nearshore station for each simulation. Open crosses denote the values predicted by the LBI with a threshold of 4.0.

6. Summary and discussion

We perform simulations of Chesapeake Bay breezes with varying WST datasets and formulations for the diurnal cycle of WST in order to determine whether more accurate depictions of water surface temperature improve bay-breeze prediction. Our emphasis on the role of the temperature of the water surface distinguishes this study from many other studies of water-body breezes, in which the focus is on the temperature of the land surface and its diurnal fluctuation. Our simulations are based on eight WST datasets that vary in their resolution and coverage of the waters in and around the Chesapeake Bay. Additionally, we allow for diurnal variations in WST to be calculated within the WRF Model by activating the sst_skin option. Several WST datasets have missing data at the observation locations, so we also run a suite of simulations in which the missing WST values are filled with the skin temperature from the ICBCs. The simulations, 36 in total, are carried out for two weeks from 16 July to 1 August 2019, during which 12 days had bay breezes observed at one or more nearshore stations. Simulated bay breezes are detected via an MBDA (from HK22) applied to model output from each simulation.

Many of the eight WST datasets compare well to buoy observations within the Chesapeake Bay, but no diurnal fluctuations are captured because all eight are daily datasets. Two of the datasets, the MODIS composite and NAVO, do not represent an observed net-cooling of the Chesapeake in the middle of the POI and, thus, result in simulations that perform worse than the other datasets in both WST and ΔT. Additionally, the datasets that include missing WST at buoy locations (MODIS, NCEI, and the native ERA-I WST dataset) prove to be problematic mostly due to how WRF handles missing WST values on water cells. Within WRF, if a water cell is missing a WST value, the value for WST is filled with the initial skin temperature and is not updated for the duration of the simulation. If these simulations were individual 2-week runs (with observational nudging, for example, to keep the simulations from drifting), then WST would not be updated for two weeks. In the case of our study, because each simulation comprises eight 2-day-long runs, the WST values are updated every two days. This is problematic on its own, yet made worse when sst_skin is activated: Diurnal fluctuations produced by sst_skin at locations with missing WST values lead to outlandish calculations of WST and ΔT. The native ERA-I WST dataset is missing data at all shallow-water buoy locations in this region and particularly suffers from these issues. Filling the missing values in the boundary condition files prior to running the simulations alleviates the problematic sst_skin results and generally improves WST and ΔT performance. With that said, for simulations without any missing WST data, ΔT performance is not highly influenced by sst_skin because the amplitude of the diurnal oscillations of inland 2-m temperature is much larger and dominates the ΔT solution.

Skill at simulating bay breezes, as measured by applying the MBDA, depends on the resolution of the WST datasets: Higher resolution generally leads to better performance. Overall, for this period and region of interest, the MUR dataset produces the best results, followed by G1SST, according to many metrics. Additionally, we see the impact of sst_skin on bay-breeze simulations: for many of the WST datasets, the best performer according to F1 score, accuracy, MCC, informedness, and markedness is the configuration in which sst_skin is activated.

While there is a subtle relationship between the resolution of the WST datasets and performance, it is also very likely that performance will vary with the period of interest. In this study, the MODIS and NAVO datasets yielded some of the worst simulations despite their relatively high resolution. We think this is because of their erroneously high WST values during the POI within the Chesapeake Bay, which could be due to the lack of assimilated surface observations such as those from ships and buoys. A different region or period of interest might lead to different performances from each WST dataset.

We acknowledge the limitations of the surface observational datasets used in this study. Water body breezes are three-dimensional in space, but surface observations merely characterize the near-water and near-ground conditions. Thus, high-resolution vertical profiles of wind speed and temperature could provide more insight into how well the water body breezes are captured by the model and how sensitive model results are to the different WST datasets in, for example, capturing the inland propagation speed and distance and the bay-breeze depth.

With that said, this admittedly limited assessment of the WST datasets and the ICBCs allows us to analyze how well the input data capture the temperature gradient from water to land, and to predict (prior to running any simulations) how well each case might perform. By applying a simple algorithm developed by Biggs and Graves (1962)—the LBI—we predicted that the ERA-I case with native WST would produce the most bay breezes and that the MODIS dataset would produce the fewest. This result is verified by running WRF and detecting the bay breezes with the MBDA, suggesting that simple preliminary assessment of WST data and ICBCs might sometimes be enough to infer how well a WST dataset will perform in full-physics simulations for a particular region and POI. In addition, this provides evidence of the LBI’s value and the utility of its application within a numerical model framework, and for WB breezes beyond the lake breeze.

Acknowledgments.

This research was funded by the U.S. Army Test and Evaluation Command through an interagency agreement with the National Science Foundation, which sponsors the National Center for Atmospheric Research (NCAR).

Data availability statement.

ASOS and AWOS data were obtained from the NCDC FTP server (see instructions for download at https://www.ncdc.noaa.gov/nomads/documentation/user-guide/retrieve-plot-data). Initial and boundary conditions for the WRF Model were obtained from the NCAR/UCAR Research Data Archive (https://rda.ucar.edu/). HRRR forecast and analysis were obtained via the University of Utah HRRR Archive (https://home.chpc.utah.edu/∼u0553130/Brian_Blaylock/cgi-bin/hrrr_download.cgi). Analysis and plotting codes were written in Python. Mapping plots utilize Cartopy (https://scitools.org.uk/cartopy/docs/latest/). A subset of data along with namelists is available through Zenodo (https://zenodo.org/record/7038414). Analysis code is hosted on GitHub (https://github.com/phawbeck/publications/tree/master/Chesapeake/SST).

REFERENCES

• Arritt, R. W., 1987: The effect of water surface temperature on lake breezes and thermal internal boundary layers. Bound.-Layer Meteor., 40, 101–125, https://doi.org/10.1007/BF00140071.

• Arritt, R. W., 1993: Effects of the large-scale flow on characteristic features of the sea breeze. J. Appl. Meteor. Climatol., 32, 116–125, https://doi.org/10.1175/1520-0450(1993)032<0116:EOTLSF>2.0.CO;2.

• Azorin-Molina, C., S. Tijm, and D. Chen, 2011: Development of selection algorithms and databases for sea breeze studies. Theor. Appl. Climatol., 106, 531–546, https://doi.org/10.1007/s00704-011-0454-4.

• Biggs, W. G., and M. E. Graves, 1962: A lake breeze index. J. Appl. Meteor., 1, 474–480, https://doi.org/10.1175/1520-0450(1962)001<0474:ALBI>2.0.CO;2.

• Canada Meteorological Center, 2016: GHRSST Level 4 CMC0.1deg global foundation sea surface temperature analysis (GDS version 2). NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHCMC-4FM03.

• Crosman, E. T., and J. D. Horel, 2010: Sea and lake breezes: A review of numerical studies. Bound.-Layer Meteor., 137, 1–29, https://doi.org/10.1007/s10546-010-9517-9.

• Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828.

• Donlon, C. J., P. J. Minnett, C. Gentemann, T. Nightingale, I. J. Barton, B. Ward, and M. J. Murray, 2002: Toward improved validation of satellite sea surface skin temperature measurements for climate research. J. Climate, 15, 353–369, https://doi.org/10.1175/1520-0442(2002)015<0353:TIVOSS>2.0.CO;2.

• Fairall, C. W., E. F. Bradley, J. S. Godfrey, G. A. Wick, J. B. Edson, and G. S. Young, 1996: Cool-skin and warm-layer effects on sea surface temperature. J. Geophys. Res., 101, 1295–1308, https://doi.org/10.1029/95JC03190.

• Filipiak, M. J., C. J. Merchant, H. Kettle, and P. Le Borgne, 2012: An empirical model for the statistics of sea surface diurnal warming. Ocean Sci., 8, 197–209, https://doi.org/10.5194/os-8-197-2012.

• Grim, J. A., J. C. Knievel, and E. T. Crosman, 2013: Techniques for using MODIS data to remotely sense lake water surface temperatures. J. Atmos. Oceanic Technol., 30, 2434–2451, https://doi.org/10.1175/JTECH-D-13-00003.1.

• Hawbecker, P., and J. C. Knievel, 2022: An algorithm for detecting the Chesapeake Bay breeze from mesoscale NWP model output. J. Appl. Meteor. Climatol., 61, 61–75, https://doi.org/10.1175/JAMC-D-21-0097.1.

• Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803.

• Hong, S.-Y., and J.-O. J. Lim, 2006: The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc., 42, 129–151.

• Hughes, C. P., and D. E. Veron, 2018: A characterization of the Delaware sea breeze using observations and modeling. J. Appl. Meteor. Climatol., 57, 1405–1421, https://doi.org/10.1175/JAMC-D-17-0186.1.

• Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.

• Jiménez, P. A., J. Dudhia, J. F. González-Rouco, J. Navarro, J. P. Montávez, and E. García-Bustamante, 2012: A revised scheme for the WRF surface layer formulation. Mon. Wea. Rev., 140, 898–918, https://doi.org/10.1175/MWR-D-11-00056.1.

• Jolliffe, I. T., and D. B. Stephenson, 2012: Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley and Sons, 296 pp.

• JPL OurOcean, 2010: GHRSST Level 4 G1SST global foundation sea surface temperature analysis, version 1. NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHG1S-4FP01.

• Kain, J. S., 2004: The Kain–Fritsch convective parameterization: An update. J. Appl. Meteor. Climatol., 43, 170–181, https://doi.org/10.1175/1520-0450(2004)043<0170:TKCPAU>2.0.CO;2.

• Kawai, Y., and A. Wada, 2007: Diurnal sea surface temperature variation and its impact on the atmosphere and ocean: A review. J. Oceanogr., 63, 721–744, https://doi.org/10.1007/s10872-007-0063-0.

• Kingsmill, D. E., 1995: Convection initiation associated with a sea-breeze front, a gust front, and their collision. Mon. Wea. Rev., 123, 2913–2933, https://doi.org/10.1175/1520-0493(1995)123<2913:CIAWAS>2.0.CO;2.

• Knievel, J. C., D. L. Rife, J. A. Grim, A. N. Hahmann, J. P. Hacker, M. Ge, and H. H. Fisher, 2010: A simple technique for creating regional composites of sea surface temperature from MODIS for use in operational mesoscale NWP. J. Appl. Meteor. Climatol., 49, 2267–2284, https://doi.org/10.1175/2010JAMC2430.1.

• Laird, N. F., D. A. R. Kristovich, X.-Z. Liang, R. W. Arritt, and K. Labas, 2001: Lake Michigan lake breezes: Climatology, local forcing, and synoptic environment. J. Appl. Meteor. Climatol., 40, 409–424, https://doi.org/10.1175/1520-0450(2001)040<0409:LMLBCL>2.0.CO;2.

• Loughner, C. P., D. J. Allen, K. E. Pickering, D.-L. Zhang, Y.-X. Shou, and R. R. Dickerson, 2011: Impact of fair-weather cumulus clouds and the Chesapeake Bay breeze on pollutant transport and transformation. Atmos. Environ., 45, 4060–4072, https://doi.org/10.1016/j.atmosenv.2011.04.003.

• Loughner, C. P., and Coauthors, 2014: Impact of bay-breeze circulations on surface air quality and boundary layer export. J. Appl. Meteor. Climatol., 53, 1697–1713, https://doi.org/10.1175/JAMC-D-13-0323.1.

• Mazzuca, G. M., K. E. Pickering, D. A. New, J. Dreessen, and R. R. Dickerson, 2019: Impact of bay breeze and thunderstorm circulations on surface ozone at a site along the Chesapeake Bay 2011–2016. Atmos. Environ., 198, 351–365, https://doi.org/10.1016/j.atmosenv.2018.10.068.

• Miller, S. T. K., B. D. Keim, R. W. Talbot, and H. Mao, 2003: Sea breeze: Structure, forecasting, and impacts. Rev. Geophys., 41, 1101, https://doi.org/10.1029/2003RG000124.
  • NASA Jet Propulsion Laboratory, 2015: GHRSST Level 4 MUR global foundation sea surface temperature analysis, version 4.1. NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHGMR-4FJ04.

    • Search Google Scholar
    • Export Citation
  • NASA Jet Propulsion Laboratory, 2018: GHRSST Level 4 K10_SST Global 10 km analyzed sea surface temperature from Naval Oceanographic Office (NAVO) in GDS2.0, version 1.0. NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHK10-L4N01.

    • Search Google Scholar
    • Export Citation
  • NCEI, 2016: GHRSST Level 4 AVHRR_OI global blended sea surface temperature analysis (GDS version 2) from NCEI. NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHAAO-4BC02.

    • Search Google Scholar
    • Export Citation
  • OSPO, 2015: GHRSST Level 4 OSPO global foundation sea surface temperature analysis (GDS version 2). NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHGPB-4FO02.

    • Search Google Scholar
    • Export Citation
  • Porson, A., D. G. Steyn, and G. Schayes, 2007: Formulation of an index for sea breezes in opposing winds. J. Appl. Meteor. Climatol., 46, 12571263, https://doi.org/10.1175/JAM2525.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Powers, D. M. W., 2020: Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv, 2010.16061v1, https://doi.org/10.48550/arXiv.2010.16061.

    • Search Google Scholar
    • Export Citation
  • Rutledge, G. K., J. Alpert, and W. Ebisuzaki, 2006: NOMADS: A climate and weather model archive at the national oceanic and atmospheric administration. Bull. Amer. Meteor. Soc., 87, 327342, https://doi.org/10.1175/BAMS-87-3-327.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Salisbury, D., K. Mogensen, and G. Balsamo, 2018: Use of in situ observations to verify the diurnal cycle of sea surface temperature in ECMWF coupled model forecasts. ECMWF Tech. Memo. 826, 19 pp., https://www.ecmwf.int/node/18745.

    • Search Google Scholar
    • Export Citation
  • Schluessel, P., W. J. Emery, H. Grassl, and T. Mammen, 1990: On the bulk-skin temperature difference and its impact on satellite remote sensing of sea surface temperature. J. Geophys. Res., 95, 13 34113 356, https://doi.org/10.1029/JC095iC08p13341.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Segal, M., and R. A. Pielke, 1985: The effect of water temperature and synoptic winds on the development of surface flows over narrow, elongated water bodies. J. Geophys. Res., 90, 49074910, https://doi.org/10.1029/JC090iC03p04907.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Segal, M., R. T. McNider, R. A. Pielke, and D. S. McDougal, 1982: A numerical model simulation of the regional air pollution meteorology of the greater Chesapeake Bay area—Summer day case study. Atmos. Environ., 16, 13811397, https://doi.org/10.1016/0004-6981(82)90059-2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Segal, M., M. Leuthold, R. W. Arritt, C. Anderson, and J. Shen, 1997: Small lake daytime breezes: Some observational and conceptual evaluations. Bull. Amer. Meteor. Soc., 78, 11351148, https://doi.org/10.1175/1520-0477(1997)078<1135:SLDBSO>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Shen, J., 1998: Numerical modelling of the effects of vegetation and environmental conditions on the lake breeze. Bound.-Layer Meteor., 87, 481498, https://doi.org/10.1023/A:1000906300218.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Sikora, T. D., G. S. Young, and M. J. Bettwy, 2010: Analysis of the western shore Chesapeake Bay bay-breeze. Natl. Wea. Dig., 34, 5665.

    • Search Google Scholar
    • Export Citation
  • Stauffer, R. M., and A. M. Thompson, 2015: Bay breeze climatology at two sites along the Chesapeake Bay from 1986–2010: Implications for surface ozone. J. Atmos. Chem., 72, 355372. https://doi.org/10.1007/s10874-013-9260-y.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Stauffer, R. M., and Coauthors, 2015: Bay breeze influence on surface ozone at Edgewood, MD during July 2011. J. Atmos. Chem., 72, 335353, https://doi.org/10.1007/s10874-012-9241-6.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Stuart-Menteth, A. C., I. S. Robinson, and P. G. Challenor, 2003: A global study of diurnal warming using satellite-derived sea surface temperature. J. Geophys. Res., 108, 3155, https://doi.org/10.1029/2002JC001534.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Takaya, Y., J.-R. Bidlot, A. C. M. Beljaars, and P. A. E. M. Janssen, 2010: Refinements to a prognostic scheme of skin sea surface temperature. J. Geophys. Res., 115, C06009, https://doi.org/10.1029/2009JC005985.

    • Search Google Scholar
    • Export Citation
  • Taylor, K. E., 2001: Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res., 106, 71837192, https://doi.org/10.1029/2000JD900719.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Tewari, M., and Coauthors, 2004: Implementation and verification of the unified NOAH land surface model in the WRF model. 20th Conf. on Weather Analysis and Forecasting/16th Conf. on Numerical Weather Prediction, Seattle, WA, Amer. Meteor. Soc., 14.2a, https://ams.confex.com/ams/pdfpapers/69061.pdf.

    • Search Google Scholar
    • Export Citation
  • UKMO, 2005: GHRSST Level 4 OSTIA global foundation sea surface temperature analysis. NASA Physical Oceanography DAAC, accessed 16 March 2021, https://doi.org/10.5067/GHOST-4FK01.

    • Search Google Scholar
    • Export Citation
  • Webster, P. J., C. A. Clayson, and J. A. Curry, 1996: Clouds, radiation, and the diurnal cycle of sea surface temperature in the tropical western Pacific. J. Climate, 9, 17121730, https://doi.org/10.1175/1520-0442(1996)009<1712:CRATDC>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. Vol. 100, Elsevier Science, 704 pp.

  • Zeng, X., and A. Beljaars, 2005: A prognostic scheme of sea surface skin temperature for modeling and data assimilation. Geophys. Res. Lett., 32, L14605, https://doi.org/10.1029/2005GL023030.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Zhang, X., and Coauthors, 2019: Improving lake-breeze simulation with WRF nested LES and lake model over a large shallow lake. J. Appl. Meteor. Climatol., 58, 16891708, https://doi.org/10.1175/JAMC-D-18-0282.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Fig. 1.

    (a) Locations of the AWOS and ASOS stations used in this study. Coral-colored circles are the locations of “near shore” stations, and the black circle shows the location of the inland station KIAD. Cones represent the “onshore” wind direction from the MBDA. (b) Buoy locations, denoted by blue triangles, with the extent of (a) denoted by the dash-outlined box.

  • Fig. 2.

    Water surface temperature and buoy locations for each case. Buoys are denoted by orange open circles where WST data are available and by red crosses where data are missing. The number of buoys with WST values within the domain is noted near the top of each panel.

  • Fig. 3.

    (a) WST averaged over each buoy, (c) 2-m temperature and (e) 10-m wind speed averaged over each nearshore station, and (b),(d),(f) the respective plots of bias vs RMSE. Observations are denoted by a dotted black line in (a), (c), and (e).
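
    The bias and RMSE in (b), (d), and (f) are presumably the conventional mean error and root-mean-square error (e.g., Wilks 2011). A minimal sketch of those definitions, with hypothetical variable names and not the authors' verification code, is:

```python
import numpy as np

def bias_and_rmse(model, obs):
    """Conventional bias (mean error) and RMSE for paired model/observed series.

    Illustrative sketch only; assumes the two arrays are already paired in time.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    error = model - obs
    bias = np.nanmean(error)                 # mean error
    rmse = np.sqrt(np.nanmean(error ** 2))   # root-mean-square error
    return bias, rmse
```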

  • Fig. 4.

    The (a) 2-m temperature from IAD for ERA-I ICBCs, (b) WST for each WST dataset averaged over each buoy location, and (d) ΔT calculated as the difference between 2-m temperature at IAD and the average WST over all nearshore stations. Observations are denoted by a dotted black line in (a), (b), and (d). Also shown are bias vs RMSE in (c) WST and (e) ΔT for each case.

  • Fig. 5.

    Time series of WST at shallow-water buoys (a) ANN, (b) BIS, (c) FLN and deep-water buoys (d) VAB and (e) DEB for each case. Observations are denoted by black lines.

  • Fig. 6.

    Normalized Taylor diagram for WST for each base case averaged over all shallow-water buoy locations.
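
    A normalized Taylor diagram (Taylor 2001) displays, for each simulation, the correlation with the observations, the simulated standard deviation normalized by the observed standard deviation, and the centered RMS difference. The sketch below computes those three quantities under the assumption of paired, gap-free series; it is illustrative only and omits the plotting itself.

```python
import numpy as np

def taylor_stats(model, obs):
    """Quantities summarized on a normalized Taylor diagram (Taylor 2001).

    Illustrative sketch: assumes paired, gap-free series; not the authors' code.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    corr = np.corrcoef(model, obs)[0, 1]   # correlation coefficient R
    sigma_norm = np.std(model) / np.std(obs)  # normalized standard deviation
    # Centered RMS difference, normalized by the observed standard deviation;
    # equivalently sqrt(1 + sigma_norm**2 - 2*sigma_norm*corr).
    crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2)) / np.std(obs)
    return sigma_norm, corr, crmsd
```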

  • Fig. 7.

    Time series of WST at (a),(b) LWT and (c),(d) TBL (left) without filling missing WST values and (right) with filling the missing WST values for each case with sst_skin activated.

  • Fig. 8.

    Normalized Taylor diagrams of WST averaged over all shallow-water buoy locations for each case. Arrows point from simulations with sst_skin turned off to the respective simulations with sst_skin turned on.

  • Fig. 9.

    Two-dimensional histograms of observed (x axis) vs simulated (y axis) WST from the four ERA-I configurations [(a) default; (b) sst_skin; (e) filled; (f) filled with sst_skin] and the four OSTIA configurations [(c) default; (d) sst_skin; (g) filled; (h) filled with sst_skin].

  • Fig. 10.

    Normalized Taylor diagrams of ΔT averaged over all nearshore stations for each case.

  • Fig. 11.

    Time series output of 2-m temperature (shades of red) and WST (shades of blue) for each setup of the (a) OSTIA and (b) ERA-I cases.

  • Fig. 12.

    The F1 scores for each simulation.
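
    The F1 score is the harmonic mean of precision and recall computed from the hits, false alarms, and misses of the bay-breeze detection. A generic sketch of that standard definition (Powers 2020; Jolliffe and Stephenson 2012), not the paper's verification code, is:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall from contingency-table counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```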

  • Fig. 13.

    (a) Accuracy, (b) MCC, (c) IF, and (d) MK (all defined in section 5c) for each simulation.
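
    Accuracy, MCC, informedness, and markedness are all functions of the same 2 x 2 contingency table of detected versus observed bay breezes. Assuming IF and MK denote informedness and markedness in the sense of Powers (2020), a generic sketch of the standard definitions (not the authors' implementation) is:

```python
import math

def contingency_metrics(tp, fp, fn, tn):
    """Accuracy, MCC, informedness, and markedness from a 2x2 contingency table.

    Standard definitions (Powers 2020); a generic sketch, not the paper's code.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    tpr = tp / (tp + fn) if (tp + fn) else 0.0   # hit rate (sensitivity)
    tnr = tn / (tn + fp) if (tn + fp) else 0.0   # specificity
    ppv = tp / (tp + fp) if (tp + fp) else 0.0   # precision
    npv = tn / (tn + fn) if (tn + fn) else 0.0   # negative predictive value
    informedness = tpr + tnr - 1.0               # IF (Peirce skill score)
    markedness = ppv + npv - 1.0                 # MK
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, mcc, informedness, markedness
```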

  • Fig. 14.

    (a) Number of WB breezes and (b) number of days in which a WB breeze is recorded at any nearshore station for each simulation. Open crosses denote the values predicted by the LBI with a threshold of 4.0.
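
    The LBI is the lake-breeze index of Biggs and Graves (1962), which weighs the square of the background wind speed against the land-water temperature contrast; a breeze is expected when the index falls below a critical value (4.0 here). The sketch below assumes the commonly cited form of the index, epsilon = U^2/(c_p * dT), with U in m s-1, dT in deg C, and c_p of about 1.004 J g-1 deg C-1 so that the index is of order 1-10; the exact formulation and inputs used in this study are described in the main text.

```python
def lake_breeze_index(wind_speed, delta_t, cp=1.004):
    """Lake-breeze index in its commonly cited Biggs-Graves (1962) form.

    epsilon = U**2 / (cp * delta_t), with U in m/s, delta_t (land minus water
    temperature) in deg C, and cp ~ 1.004 J g^-1 C^-1. A sketch of the
    conventional formulation; the paper's inputs and thresholds may differ.
    """
    if delta_t <= 0.0:
        return float("inf")  # no land-water contrast favoring a breeze
    return wind_speed ** 2 / (cp * delta_t)

# Example: with a 4 m/s background wind and a 6 deg C land-water contrast,
# lake_breeze_index(4.0, 6.0) ~ 2.7, below a threshold of 4.0, so a bay breeze
# would be expected under this simple criterion.
```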