## 1. Introduction

Quantifying the impact of climate variability on application sectors such as hydrology, agriculture, forestry, and livestock farming has been studied for several decades (cf. Risbey and Entekhabi 1996; Jones et al. 2000; Adams et al. 2013; Cobon et al. 2020). In many related papers, the process-based approach, which utilizes a physical process model for evaluating the impact, has been widely adopted. This approach is probably appropriate for predicting climate impact in a future period, since climate conditions that have never occurred before can emerge (cf. Williams and Jackson 2007). The physical process models usually require reliable weather data over a small region at high spatiotemporal resolution as input (cf. Ban et al. 2017; Lim et al. 2015; Lee et al. 2017). On the other hand, most climate research on predictability focuses mainly on large-scale circulations, and the results are described in terms of circulation variables such as geopotential height and sea level pressure at coarse spatial and temporal resolution (cf. Qiao et al. 2020; Zhao et al. 2020; Tian and Fan 2020). Thus, although large-scale circulations definitely affect local-scale weather and climate, their prediction results cannot be directly utilized in climate impact studies because of these gaps. Nevertheless, local-scale climate prediction must be derived from that of large-scale circulations, because local climate results from the interaction of large-scale circulations with the local topography. Therefore, to apply seasonal climate forecasts to regional climate impact studies, we need a downscaling process that converts a coarse-resolution seasonal climate forecast into reliable high-resolution climate data.

As the first subject of our project, this study concentrated on developing a downscaling method to be applied in future tasks. Specifically, taking the case of a river basin in the Korean Peninsula during boreal winter, we proposed and examined a weather generator–based downscaling method. It is well established that the East Asian winter monsoon (EAWM) influences winter in the Korean Peninsula (cf. Kim et al. 2017). The EAWM system is characterized by the Siberian high over the northern part of Eurasia and the Aleutian low over the North Pacific Ocean (Fig. 1). This east–west pressure system is associated with land–ocean thermal contrasts. The monsoonal winds governed by the system can bring cold and dry winters or warm and wet winters to the Korean Peninsula. Moreover, many indices representing the strength of the EAWM have been proposed, and their predictability has been examined using operational forecasts in previous studies (cf. Li and Yang 2010; Wang and Chen 2014; Kim et al. 2017). Based on this background, we proposed a downscaling method utilizing the response of the local basin climate to the EAWM strength. The response was represented by statistical models that constitute a stochastic weather generator. The downscaling was performed by running the weather generator based on EAWM strength prediction to produce diverse weather scenarios.

It is noteworthy that, at low computational cost, the proposed method can rapidly produce the number of weather scenarios required to obtain the final applicational output, such as crop yield prediction in the agricultural sector, with a proper representation of uncertainty. Furthermore, the scenarios produced by the proposed method reveal no or insignificant bias in weather/climate characteristics, whereas those produced by dynamical models are typically biased, so an additional bias-correction process is required for climate impact studies (cf. Kim et al. 2019). This is a typical advantage of statistical methods over their dynamical counterparts. It is also notable that the weather generator method is better suited to temporal disaggregation than other statistical methods such as regression and weather-pattern-based approaches (cf. Wilby and Wigley 1997).

Considering the prediction uncertainty and the feasibility of the downscaling scheme, the EAWM strength prediction was implemented as a tercile forecast that yields a probability distribution over three categories: “weak,” “normal,” and “strong.” Meanwhile, a new weather generator model was proposed for downscaling the EAWM strength prediction. It consists of precipitation and temperature models simulating daily cumulative precipitation amounts (unit: mm) and daily maximum and minimum temperature values (unit: °C) at multiple stations in the basin. Regarding the downscaling, the key feature of the precipitation model is that every characteristic of the simulated precipitation is determined by the monsoon strength category selected using the tercile forecast. Similarly, the key feature of the temperature model is the introduction of a low-frequency oscillation component into the daily variation, whose behavior is also determined by the EAWM strength category. Through these associations, the climate variability of the basin according to EAWM strength can be expressed by the weather generator model.

To evaluate the performance of the proposed downscaling scheme, we first examined whether the weather generator model reproduces the observed daily and monthly climate characteristics under a given EAWM strength. Conceptually speaking, it is equivalent to testing whether or not the climate variability of the basin according to EAWM strength is reproduced by the weather generator model, independently of the EAWM strength prediction skill of the operational forecast. Meanwhile, we also evaluated the performance of the hindcast downscaling data by measuring how informative its monthly characteristics were for predicting actual values. It can be viewed as a preliminary test regarding how effective the operational seasonal climate forecast is for a climate impact study, and thus it produces the key results of this paper.

The rest of this paper is organized as follows: Section 2 briefly describes the study area and data used in this paper. Section 3 proposes a prediction model of EAWM strength, a weather generator model, and the related downscaling scheme. Section 4 shows the test results of the prediction skill of the operational hindcast for EAWM strength, the reproducibility of the weather generator, and the informativeness of the hindcast downscaling data. Finally, section 5 presents the summary and conclusion of this study. The appendix deals with the technical details of the weather generator model used in section 3.

## 2. The study area and data

This study focuses on a basin in the Korean Peninsula whose main stream is the Nakdong River. It is one of the longest rivers in the country (around 510 km in length), and two major cities holding around 25% of the total national population are situated along it. The river plays a critical role in the water supply for agriculture in the province; thus, water quality control and drought management are important issues (cf. Korea Water Resources Corporation 2006; Jung et al. 2016). This study considered 14 stations around the river, spread widely in the meridional direction (Fig. 1). A detailed description of the basin is found in Eum et al. (2010).

In this study, we used three kinds of datasets: observation data, reanalysis data, and hindcast data. As observation data, we employed Automatic Synoptic Observation System data (available online at https://data.kma.go.kr) for a 26-yr period (1 January 1988–31 December 2013) including 2347 data points during the boreal winter. We constructed a system of statistical models based on the following variables of the data: daily cumulative precipitation (mm), daily maximum temperature (TMAX; °C), and daily minimum temperature (TMIN; °C). Meanwhile, as reanalysis data, we adopted the monthly National Centers for Environmental Prediction–U.S. Department of Energy Atmospheric Model Intercomparison Project II (NCEP–DOE AMIP-II) Reanalysis dataset for the period from January 1979 to December 2013 (Kanamitsu et al. 2002). In particular, we used the fields of December–February (DJF) averaged sea level pressure (SLP) and 200-hPa zonal wind (U200), based upon which the EAWM indices considered in this paper are defined. The reanalysis data were mainly utilized to calculate the real value of the EAWM indices.

We used the Asia-Pacific Economic Cooperation (APEC) Climate Center (APCC) seasonal climate forecast system to obtain hindcast data. [The data are officially provided online (http://adss.apcc21.org), and a general description of the operational forecast is found in Min et al. (2017).] In this study, seasonal hindcasts of several individual forecast models and their multimodel ensemble (MME) were available, each with a period of more than 20 years (Table 1). In this paper, MME indicates the simple composite of the ensemble means of the individual models listed in Table 1. The hindcast data were used to evaluate the prediction skill of the operational forecast with respect to the EAWM strength, and furthermore, the resulting downscaling data were tested in terms of informativeness.

Information on the individual forecast models and the MME used in this study.

## 3. The proposed downscaling method

In this section, we propose a statistical method for downscaling an EAWM strength prediction into the local basin climate at the daily time scale. For the downscaling scheme, we first identified a suitable EAWM index that is both relevant to the basin climate and predictable. Next, we constructed a weather generator associated with the selected EAWM index. From these, we proposed the main downscaling scheme.

### a. Tercile probabilistic prediction of EAWM strength

Among the previously proposed indices representing the EAWM strength, we considered the indices presented in Table 2. Among these, LiYang10 is based on the U200 field, whereas the others are based on the SLP field. Note that both circulation fields are handled by the APCC seasonal climate forecast. All of the indices considered were calculated using the nonstandardized variables, though some of them are originally defined on the basis of standardized ones. These indices exhibit relevance to the local climate, as described in Table 3, which presents the correlations between the EAWM indices and the seasonal characteristics of the basin in terms of the DJF mean of basin-averaged daily temperature and the DJF total of basin-averaged daily precipitation. Most of the indices are highly correlated with the temperature, and some indices, such as Shi96, are associated even with precipitation.
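As an illustration of how such a table entry could be computed and flagged, the following uses `scipy.stats.pearsonr` on synthetic series; `eawm_index` and `djf_temp` are hypothetical stand-ins for an index series and the DJF-mean basin temperature, not the actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# hypothetical 35-yr series: an EAWM index and the DJF-mean basin temperature
eawm_index = rng.normal(size=35)
djf_temp = 0.8 * eawm_index + rng.normal(scale=0.5, size=35)

r, p_value = stats.pearsonr(eawm_index, djf_temp)
# flag as in Table 3: ** for the 1% level, * for the 5% level
flag = "**" if p_value < 0.01 else ("*" if p_value < 0.05 else "")
```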

EAWM indices. All of the indices are based on the nonstandardized variable, although some of them are originally calculated using the standardized variable.

The correlation coefficients between the EAWM indices and the local climate. The single and double asterisks denote significance at the 5% and 1% levels, respectively.

We categorized each index into three intervals obtained by partitioning at its terciles, as in operational seasonal climate forecasts. Table 4 presents the terciles of the indices calculated using the whole period of the reanalysis data. For convenience, the intervals were named “below normal” (BN), “near normal” (NN), and “above normal” (AN) in ascending order of index value. Then, for all of the indices except Kim17, BN, NN, and AN correspond to weak, normal, and strong EAWM intensity, respectively, whereas for Kim17 the correspondence holds in the reverse direction because of its definition. From this categorization, a probabilistic prediction of EAWM strength was taken into account. We adopted a multinomial logistic regression model that assumes

$$p_c = \frac{\exp(\beta_{0,c} + \beta_{1,c} X)}{\sum_{c' \in \{\mathrm{BN},\,\mathrm{NN},\,\mathrm{AN}\}} \exp(\beta_{0,c'} + \beta_{1,c'} X)}, \qquad c \in \{\mathrm{BN},\,\mathrm{NN},\,\mathrm{AN}\}, \tag{3.1}$$
where (*p*_{BN}, *p*_{NN}, *p*_{AN}) represents the probability distribution of the EAWM strength category and *X* indicates the predicted value of the EAWM index. The value of *X* is typically obtained from a single forecast model or the MME forecast. The *β* coefficients in Eq. (3.1) are estimated by the maximum likelihood method based on a dataset combining historical observed values of the EAWM strength category and hindcast values of an EAWM index during the hindcast period. For evaluating the performance of the prediction model in Eq. (3.1), we utilized the ranked probability skill score (RPSS), which can be viewed as the multicategory version of the Brier skill score (cf. Weigel et al. 2007); values closer to 1 indicate better performance.
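As a sketch of how such a multinomial logistic model can be fitted by maximum likelihood, the following implements the softmax form with plain gradient ascent in NumPy; the toy data and all names are invented for illustration and are not the authors' estimation code:

```python
import numpy as np

def fit_multinomial_logit(x, y, n_cat=3, lr=0.05, n_iter=5000):
    """Fit p_c(x) = softmax(b0_c + b1_c * x) by maximum likelihood,
    using plain gradient ascent; y holds category labels 0..n_cat-1."""
    b0 = np.zeros(n_cat)
    b1 = np.zeros(n_cat)
    onehot = np.eye(n_cat)[y]                    # (n, n_cat) indicators
    for _ in range(n_iter):
        logits = b0 + np.outer(x, b1)
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        resid = onehot - p                       # gradient of log-likelihood
        b0 += lr * resid.mean(axis=0)
        b1 += lr * (x[:, None] * resid).mean(axis=0)
    return b0, b1

def predict_terciles(x, b0, b1):
    logits = b0 + np.outer(np.atleast_1d(x), b1)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)      # rows: (p_BN, p_NN, p_AN)

# toy data: hindcast index values x and observed tercile categories y
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.digitize(x + 0.5 * rng.normal(size=200), [-0.43, 0.43])  # 0/1/2
b0, b1 = fit_multinomial_logit(x, y)
p = predict_terciles([-1.5, 0.0, 1.5], b0, b1)
```

A large predicted index value shifts probability mass toward the AN category, and a small value toward BN, as the fitted slopes imply.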

The terciles of the EAWM indices used for the categorization. The values were calculated using the reanalysis data for the period 1979–2013. All of the results are based on nonstandardized variables.

### b. Weather generator method

A stochastic weather generator plays a central role in downscaling the EAWM strength. For valid downscaling, it has to describe the boreal winter weather/climate of the target basin, which exhibits interannual variability with respect to EAWM strength. The proposed weather generator deals with daily cumulative precipitation and daily maximum/minimum temperature at multiple sites; namely, it is a multisite and multivariate weather generator. It is thus composed mainly of precipitation and temperature models. Below, we concentrate on explaining the key features of the models conceptually; their technical details are dealt with in the appendix.

#### 1) Precipitation model

The proposed weather generator simulates the precipitation over the basin using a wet/dry-spell alternation scheme: a wet spell is followed by a dry spell, which is in turn followed by another wet spell (cf. Racsko et al. 1991). The scheme assumes that each spell is generated independently. Thus, the precipitation model is divided into dry-spell and wet-spell generation models. In this paper, a dry day is defined as a day on which it does not rain at any station in the target basin; the opposite case is a wet day. Consecutive dry/wet days constitute a dry/wet spell, respectively. Dry-spell generation is performed simply by simulating the length of the spell; for this simulation, a negative binomial distribution was adopted in this study. Meanwhile, the wet-spell generation model is relatively complicated, since precipitation events are simulated on each day of the wet spell. A first-order Markov model was employed for simulating the wet spell, each day of which is either “mildly wet” or “very wet” (cf. Apipattanavis et al. 2007). Since the target region is dry during boreal winter, the wet spells are shorter than the dry spells on average; the Markov model is suitable for simulating the short wet spells, whereas the negative binomial distribution is appropriate for the relatively long dry spells. Moreover, for simulating precipitation events on wet days, a Gaussian-copula-based model with gamma marginals was adopted to simulate the spatial correlation of precipitation over the basin as well as the amount at each station for a given wetness. (See section a in the appendix for the technical details of the model.)

Note that both generation models were designed to be fully parametric. This implies that every characteristic of the simulated precipitation is completely determined by a finite number of parameters. Moreover, this design yields a feasible downscaling scheme for precipitation: the values of the parameters are determined according to EAWM strength. The downscaling scheme based on this association was implemented by algorithm 3, with algorithm 1 (generating precipitation amounts on a wet day) and algorithm 2 (generating a wet spell) as subroutines (all found in section a of the appendix), and produces simulated precipitation whose characteristics vary according to the EAWM strength prediction.
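The three building blocks described above (negative binomial dry-spell lengths, a first-order Markov wet spell, and Gaussian-copula amounts with gamma marginals) can be sketched as follows; all parameter values are illustrative stand-ins, not the fitted, category-dependent ones from the appendix algorithms:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def draw_dry_spell_length(n_param, p_param):
    # dry-spell length: negative binomial, shifted so a spell has >= 1 day
    return 1 + rng.negative_binomial(n_param, p_param)

def draw_wet_spell(p_start_very, trans):
    # first-order Markov chain over {0: mildly wet, 1: very wet};
    # state 2 ("dry") is absorbing and terminates the spell
    states = []
    s = int(rng.random() < p_start_very)
    while s != 2:
        states.append(s)
        s = int(rng.choice(3, p=trans[s]))
    return states

def draw_wet_day_amounts(corr, shapes, scales):
    # Gaussian copula with gamma marginals: correlated station amounts
    z = rng.multivariate_normal(np.zeros(len(shapes)), corr)
    u = stats.norm.cdf(z)
    return stats.gamma.ppf(u, a=shapes, scale=scales)

# illustrative parameters for 3 stations under one EAWM strength category
trans = np.array([[0.4, 0.2, 0.4],    # transitions from "mildly wet"
                  [0.3, 0.4, 0.3]])   # transitions from "very wet"
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.7],
                 [0.6, 0.7, 1.0]])
dry_len = draw_dry_spell_length(2, 0.3)
wet_days = draw_wet_spell(0.3, trans)
amounts = draw_wet_day_amounts(corr, np.full(3, 0.7), np.full(3, 8.0))
```

The copula step preserves the gamma marginal at each station while imposing the inter-station correlation through the latent Gaussian vector.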

#### 2) Temperature model

Now, we explain the temperature model and the related downscaling scheme. Let $T_{\max}(s, t)$ and $T_{\min}(s, t)$ denote TMAX and TMIN at station *s* on day *t*, respectively. This notation is also applied to the variables appearing below. Their temporal variations are assumed to be decomposed as

$$T_\diamond(s, t) = \mu_\diamond(s, t) + \Lambda_\diamond(s, t) + \Delta_\diamond(s, t) + \sigma_\diamond(s, t)\,\varepsilon_\diamond(s, t), \qquad \diamond \in \{\max, \min\}, \tag{3.2}$$

where $\mu_\diamond(s, t)$, $\Lambda_\diamond(s, t)$, and $\Delta_\diamond(s, t)$ represent the annually cyclic mean trend, the slowly varying component, and the precipitation effect on the daily mean of $T_\diamond(s, t)$, respectively. The variables $\sigma_\diamond(s, t)$ and $\varepsilon_\diamond(s, t)$ indicate the daily standard deviation and the standardized anomaly, respectively. (The specific models, estimation of the components, and simulations are dealt with in section b of the appendix.)

Note that the slowly varying component $\Lambda_\diamond(s, t)$ was originally introduced to effectively represent the smooth temperature variation on the monthly/seasonal time scale that is predicted by seasonal climate forecasts. Figure 2 shows the estimated components at station 1 during DJF in randomly chosen years (see Fig. 1 for the location of the station and Figs. S.4–S.7 in the online supplemental material of this paper for the other years). As seen therein, slow variations were effectively separated from rapid variations by the estimation method (presented in section b of the appendix). Though the estimation method was developed in the context of time-domain analysis, from the viewpoint of the frequency domain, the former and the latter can be interpreted as low-frequency and high-frequency temporal variations, respectively. Meanwhile, Fig. 3 shows the low-frequency variations for all of the stations in the chosen years, which exhibit strong spatial coherency and vivid interannual variation (see Figs. S.8–S.11 of the online supplemental material for the other years). Furthermore, they respond to the EAWM strength: Fig. 4 shows that the levels of the basin-averaged variations significantly decrease as the LiYang10 index increases, that is, as the monsoon intensity strengthens. Thus, for downscaling the EAWM strength with respect to temperature, we assumed that the behavior of the low-frequency oscillation is determined by the monsoon strength, while the monsoon effect on the other components, such as the high-frequency oscillation represented by the anomaly, was ignored for simplicity. Using algorithm 6, we simulated the low-frequency oscillation according to a given EAWM strength. Moreover, the temperature scenarios are generated by algorithm 7, with algorithms 4 and 5 (generating standardized anomaly series) and algorithm 6 (generating low-frequency oscillations) as subroutines. (All of the algorithms are found in section b of the appendix.)
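A toy simulation of this decomposition for one station and one DJF season, with an EAWM-dependent low-frequency level and an AR(1) standardized anomaly; every component shape and parameter value here is invented for illustration and does not reproduce the fitted appendix models:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 90                                # one DJF season
t = np.arange(n_days)

# invented components of the decomposition for a single station:
mu = -1.0 + 3.0 * np.cos(2 * np.pi * (t - 45) / 365)      # cyclic mean trend
level = {"weak": 1.5, "normal": 0.0, "strong": -1.5}      # EAWM-dependent
lam = level["strong"] + 0.8 * np.sin(2 * np.pi * t / 30)  # low-freq. component
wet = rng.random(n_days) < 0.2
delta = np.where(wet, -1.2, 0.0)           # precipitation effect on the mean
sigma = 2.0                                # daily standard deviation

eps = np.empty(n_days)                     # AR(1) standardized anomaly
eps[0] = rng.normal()
phi = 0.6
for i in range(1, n_days):
    eps[i] = phi * eps[i - 1] + np.sqrt(1 - phi**2) * rng.normal()

tmin = mu + lam + delta + sigma * eps      # sum of the four components
```

Only the level of `lam` changes with the monsoon category here, mirroring the assumption that the EAWM acts through the low-frequency oscillation alone.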

The observed TMIN (gray circles), its mean climatology (black dashed line), its low-frequency variation (red solid line), its precipitation effect on mean (blue circles), and precipitation amount (green circles) at station 1 during DJF in the randomly chosen years. The *x* axis represents the yearday (yearday 0 indicates 1 Jan). Positive yearday values indicate the number of days elapsed from 1 Jan; negative values indicate the number of days remaining until the first day of the following year.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1

The TMIN low-frequency variations at all of the stations during DJF in the randomly chosen years. They exhibit spatial coherency and interannual variability.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1

The response of the basin-averaged low-frequency oscillations of (top) TMAX and (bottom) TMIN to the EAWM for different years (gray lines). The black solid and red dashed lines indicate the average of the gray lines and the mean levels of the averages, respectively. The level decreases as the EAWM becomes stronger.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1

### c. Main downscaling scheme

We proposed the main downscaling algorithm as follows:

#### Main downscaling algorithm

Suppose that the *β* coefficients in Eq. (3.1) have been estimated in advance and that we have a seasonal climate forecast for the coming DJF period (or a hindcast for a past one). We generate multisite, multivariate daily time series of precipitation, TMAX, and TMIN for the period through the following steps:

1. Obtain the predicted value *X* of the EAWM index from the forecast.
2. Compute the tercile probabilities (*p*_{BN}, *p*_{NN}, *p*_{AN}) by Eq. (3.1).
3. Sample an EAWM strength category from the distribution (*p*_{BN}, *p*_{NN}, *p*_{AN}).
4. Generate a daily precipitation scenario for the sampled category by the precipitation model (algorithm 3).
5. Generate daily TMAX/TMIN scenarios for the sampled category, conditioned on the precipitation scenario, by the temperature model (algorithm 7).

We carried out steps 3–5 of the above algorithm repeatedly (1000 times in this study) to generate a large number of scenarios that represent the prediction uncertainty as well as the local climate variability within an EAWM strength category. In the algorithm, a precipitation scenario is generated prior to the temperature simulation in order to calculate the precipitation effect on the daily mean temperature [cf. Eq. (3.2)]. Note that the scenarios are intended for evaluating the EAWM impact on the final output variables in the application fields.
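The repeated sampling described above can be sketched as the following ensemble loop; `generate_precip` and `generate_temp` are hypothetical stand-ins for algorithms 3 and 7, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
p_forecast = np.array([0.2, 0.3, 0.5])    # (p_BN, p_NN, p_AN) from Eq. (3.1)
categories = ["BN", "NN", "AN"]

def generate_precip(cat):
    # stand-in for algorithm 3: category-dependent gamma amounts for 90 days
    scale = {"BN": 6.0, "NN": 5.0, "AN": 4.0}[cat]
    return rng.gamma(0.7, scale, size=90)

def generate_temp(cat, precip):
    # stand-in for algorithm 7: low-frequency level set by the category,
    # plus a precipitation effect on the daily mean
    level = {"BN": 1.5, "NN": 0.0, "AN": -1.5}[cat]
    return level - 1.2 * (precip > 1.0) + 2.0 * rng.normal(size=90)

scenarios = []
for _ in range(1000):                      # steps 3-5, repeated
    cat = str(rng.choice(categories, p=p_forecast))  # sample a category
    pr = generate_precip(cat)              # precipitation first ...
    tmin = generate_temp(cat, pr)          # ... then temperature uses it
    scenarios.append((cat, pr, tmin))

counts = {c: sum(1 for s in scenarios if s[0] == c) for c in categories}
```

Across the 1000 scenarios, the category frequencies follow the forecast probabilities, so the ensemble carries both the tercile uncertainty and the within-category weather variability.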

Below, several remarks are presented that deal with how to generate and utilize the scenario data for the performance test in section 4.

Remark 1: In section 4b, the reproducibility of the weather generator model is tested independently of the tercile prediction by comparing simulation and observation data of each EAWM strength category in terms of daily and monthly statistics. To produce the simulation data, we used the main downscaling algorithm with steps 1–2 replaced by directly setting (*p*_{BN}, *p*_{NN}, *p*_{AN}) = (1, 0, 0), (0, 1, 0), or (0, 0, 1), generating 1000 datasets for each strength category of the EAWM index selected in section 4a.

Remark 2: A bootstrap technique is employed for the significance test of the difference between statistics of the simulation and observation data, through a distributional range representing the uncertainty of the simulated statistic. Suppose that the observational value of the statistic was calculated using a dataset of *k* years (*k* ∈ ℕ). By resampling *k* years from the simulation data and computing the statistic, repeated *N* times to collect *N* simulated values of the statistic (*N* ∈ ℕ), we obtain an empirical distribution of the simulated statistic.

The 95% range of the distribution, the interval between the 2.5% and 97.5% quantiles, is used for the significance test: if the range does not contain the observational value, then the difference is significant at the 5% level. In this paper, we set *N* = 1000.
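A minimal sketch of this bootstrap procedure, with a synthetic set of simulated yearly values and a hypothetical `bootstrap_range` helper:

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_range(sim_years, k, n_boot=1000):
    """95% range of a statistic (here, the mean over k resampled years of
    simulation data), with N = n_boot bootstrap replicates."""
    vals = np.array([rng.choice(sim_years, size=k, replace=True).mean()
                     for _ in range(n_boot)])
    return np.quantile(vals, [0.025, 0.975])

# synthetic "yearly" simulated values of some statistic (e.g., a DJF total)
sim_years = rng.normal(loc=50.0, scale=10.0, size=1000)
lo, hi = bootstrap_range(sim_years, k=8)

obs_value = 52.0
significant = not (lo <= obs_value <= hi)  # difference significant at 5%?
```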

Remark 3: If we test reproducibility with respect to a monthly characteristic by examining the difference between the monthly statistics of observation and simulation, then we encounter a small-sample case: only a few observational values are available for each EAWM strength category because only one value of the monthly statistic can be collected each year. Thus, to circumvent this small-sample case, we conducted the test in the following manner: using the simulation data, we calculated the cumulative distribution function (CDF) *F*_{c} of the monthly statistic for EAWM strength category *c* = BN, NN, AN, and then put *F*_{i} = *F*_{c(i)} for each year *i* in the observational period *I*_{obs}, where *c*(*i*) denotes the EAWM strength category of year *i*. Let *x*_{i} be the observational value of the monthly statistic of year *i* ∈ *I*_{obs} and define *z*_{i} = Φ^{−1}[*F*_{i}(*x*_{i})], where Φ is the CDF of the standard normal distribution. If *F*_{c}, *c* = BN, NN, AN, are continuous and indeed suitably describe the observational monthly characteristic (i.e., *x*_{i} is a sample point of *F*_{i} in a rigorous sense), then {*z*_{i}: *i* ∈ *I*_{obs}} are distributed as standard normal. Therefore, to test whether a monthly characteristic is suitably reproduced, a normality test is carried out for {*z*_{i}: *i* ∈ *I*_{obs}}.
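The transformation and normality check can be sketched as follows; the category-wise CDFs here are simple normal-distribution stand-ins for the CDFs that would be fitted from the simulation data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# stand-ins for the category-wise CDFs F_c fitted from simulation data
cdfs = {"BN": stats.norm(2.0, 1.0).cdf,
        "NN": stats.norm(0.0, 1.0).cdf,
        "AN": stats.norm(-2.0, 1.0).cdf}
means = {"BN": 2.0, "NN": 0.0, "AN": -2.0}

# one observed monthly statistic per year, with each year's EAWM category
years_cat = [str(c) for c in rng.choice(["BN", "NN", "AN"], size=30)]
x = np.array([means[c] + rng.normal() for c in years_cat])

# probability integral transform: z_i = Phi^{-1}[F_{c(i)}(x_i)]
z = stats.norm.ppf([cdfs[c](xi) for c, xi in zip(years_cat, x)])

# if the F_c suitably describe the observations, z is standard normal
stat, p_value = stats.kstest(z, "norm")
```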

Remark 4: In section 4c, the informativeness of the hindcast downscaling data is examined. In section 4a, we identify a suitable EAWM index and skillful hindcasts. Then, by applying the main downscaling algorithm to the index and hindcasts, we generated 1000 daily scenarios for each year of the hindcast periods. In particular, to completely separate the calibration and validation periods, the estimation of the *β* coefficients in Eq. (3.1) and the construction of the weather generator model were performed on datasets from which the part for the target year was removed.

Using the downscaling data, the informativeness test is carried out by measuring how informative its monthly statistics are for predicting the actual values. For hindcast year *i* ∈ *I*_{obs}, let *x*_{i} be the observational value of a monthly statistic and *x̂*_{i} be the corresponding value predicted from the downscaling data. We wish to measure how informative *x̂*_{i} is for predicting *x*_{i}. Here, the informativeness was quantified by the average of mean-square errors (AMSE) between *x̂*_{i} and *x*_{i} over *i* ∈ *I*_{obs}. As a baseline, we set *p*_{BN} = *p*_{NN} = *p*_{AN} = 1/3, that is, no information on EAWM strength, and then consider the ratio of the AMSE of the downscaling data to that of the baseline.

As the ratio becomes smaller than 1 and closer to 0, the informativeness increases. Thus, if the ratio value is not greater than 1, then the AMSE reduction rate, 1 − (AMSE ratio), is more convenient for interpretation; it indicates how much the AMSE is reduced below the baseline.
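A small worked example of the AMSE ratio and the reduction; all values are hypothetical, and the constant baseline predictions stand in for downscaling with equal tercile probabilities:

```python
import numpy as np

def amse(pred, obs):
    """Average of mean-square errors over the hindcast years."""
    return float(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

# hypothetical monthly statistics for five hindcast years
obs      = np.array([48.0, 55.0, 42.0, 60.0, 51.0])   # observed values
forecast = np.array([50.0, 54.0, 45.0, 58.0, 50.0])   # from downscaling data
baseline = np.full(5, 51.0)     # stand-in for p_BN = p_NN = p_AN = 1/3

ratio = amse(forecast, obs) / amse(baseline, obs)   # AMSE ratio
skill = 1.0 - ratio             # reduction of AMSE below the baseline
```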

## 4. Performance test and the result

In this section, we carry out the performance test and present the results. [The related tables and figures (e.g., Table S.1) are presented in online supplemental material.]

### a. Prediction skill test of the operational forecast for the EAWM strength

First, as a preliminary analysis, we calculated the correlation coefficients between the model and actual values of the EAWM indices, obtained from the hindcast and the reanalysis data, respectively, given that predictability is closely related to correlation. For each hindcast listed in Table 1, the calculations were conducted over its hindcast period. (The period of the reanalysis data contains all of the hindcast periods.) Figure S.1 in the online supplemental material presents the results for all of the indices. As is apparent, the LiYang10 index revealed significant nonzero correlations for the MME and POAMA hindcasts. Next, we calculated the RPSS for the prediction model in Eq. (3.1) using the leave-one-year-out cross-validation technique; that is, for each year in the hindcast period, the *β* coefficients in Eq. (3.1) were estimated from the dataset with the row for that year removed, and then (*p*_{BN}, *p*_{NN}, *p*_{AN}) was calculated from the model value *X* of the year using Eq. (3.1). Figure 5 shows the RPSS of the 1–3-month-lead predictions of the EAWM strength category. As expected, the LiYang10 index again showed good prediction skill for the POAMA and MME hindcasts at every lead time, whereas the others revealed poor skill for all hindcasts. Therefore, we concluded that the LiYang10 index was the most predictable and the most suitable for downscaling among the indices considered here, and we focused on it subsequently: it not only is the most predictable but also has a strong relationship to the basin climate, as seen in Table 3.
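The RPSS itself can be computed as follows; the forecast probabilities and observed categories are toy values, and the per-year refitting of the leave-one-year-out loop is omitted here:

```python
import numpy as np

def rps(p, obs_cat):
    """Ranked probability score of one tercile forecast p = (p_BN, p_NN, p_AN)
    against the observed category obs_cat (0=BN, 1=NN, 2=AN)."""
    o = np.zeros(3)
    o[obs_cat] = 1.0
    return float(np.sum((np.cumsum(p) - np.cumsum(o)) ** 2))

def rpss(forecasts, obs_cats):
    """RPSS relative to the climatological forecast (1/3, 1/3, 1/3);
    values closer to 1 indicate better skill."""
    clim = np.full(3, 1.0 / 3.0)
    rps_f = np.mean([rps(p, c) for p, c in zip(forecasts, obs_cats)])
    rps_c = np.mean([rps(clim, c) for c in obs_cats])
    return 1.0 - rps_f / rps_c

# toy forecasts for three held-out years
forecasts = [np.array([0.6, 0.3, 0.1]),
             np.array([0.1, 0.3, 0.6]),
             np.array([0.2, 0.5, 0.3])]
obs_cats = [0, 2, 1]
score = rpss(forecasts, obs_cats)   # about 0.65 for these toy values
```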

The RPSS of the prediction model in Eq. (3.1) of the EAWM indices based on the hindcast data for a 1-, 2-, or 3-month lead.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1

### b. Test for reproducibility of daily and monthly characteristics

In this subsection, we test whether the proposed weather generator model reproduces daily and monthly characteristics of each EAWM strength category of LiYang10 index suitably. As mentioned in section 1, it is equivalent to testing whether or not the basin climate variability according to the EAWM strength represented by LiYang10 index is reproduced by the weather generator model, regardless of the EAWM strength prediction skill. The test is conducted by comparing related statistics of the simulation data with those of the observation data of the same EAWM strength category. In particular, the simulation data were generated in the manner described in remark 1.

#### 1) Test for daily characteristics

We first tested the distribution of daily precipitation amounts. To examine the difference between the distributions of the observation and simulation data, we used the Kolmogorov–Smirnov (K–S) test. Table 5 presents the *p* values of the test; in some cases the difference is significant at the 5% level. Figures 6 and 7 show three kinds of plots comparing the distributions. In particular, as seen in Fig. 7, the difference appeared to arise in the range of small values. Since the difference was not significant in most cases and the significant differences were acceptable, we concluded that the distribution of daily precipitation amounts was well reproduced.
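The two-sample K–S comparison can be sketched with `scipy.stats.ks_2samp` on synthetic wet-day amounts (both samples drawn from the same gamma distribution here, so no difference is expected):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# synthetic wet-day amounts: a small observed sample vs. a large generated one
obs_amounts = rng.gamma(0.7, 8.0, size=300)
sim_amounts = rng.gamma(0.7, 8.0, size=5000)

ks_stat, p_value = stats.ks_2samp(obs_amounts, sim_amounts)
same_distribution = p_value >= 0.05   # not rejected at the 5% level
```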

The *p* values of a K–S test for examining the difference between CDFs of the observational and simulated daily precipitation amounts, given for the months of December, January, and February. The asterisk indicates significance at the 5% level.

Comparison between the distributions of the observational and simulated daily precipitation amounts: (top) the histograms of the observations with the PDF of the simulation data (dashed line), and Q–Q plots in the (middle) original and (bottom) logarithmic scales.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1

Similar to Fig. 6, but for cases in which the difference is significant. The difference arises among small values.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


Next, we tested the spatial correlation of daily precipitation amounts. For all pairs of the 14 stations [thus, the number of pairs is 91], three kinds of coefficients (the Pearson and Spearman coefficients and the log-odds ratio) were calculated for both the observation and simulation data. Table 6 summarizes the test results, and Fig. 8 compares the coefficients in December.

Comparison between the correlation coefficients of the observational and simulated daily precipitation amounts on wet days in December. The gray-colored intervals indicate the 95% range of the distribution of the observational coefficient. If the interval does not contain the dashed line of slope 1, then the difference is significant at the 5% level.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


For the months of December, January, and February, test results for the spatial correlation of the daily precipitation amount on wet days. For all 91 pairs at 14 stations, three kinds of coefficients (Pearson/Spearman coefficients and log-odds ratio) are calculated. For examining the differences between the observational and simulated coefficients, we calculate MAE and the number of the 91 pairs for which the difference is significant at the 5% level.

We applied a K–S test to examine the distributional similarity between the observation and simulation of TMAX and TMIN. Tables S.1 and S.2 in the online supplemental material present the *p* values of the test, and Figs. 9 and 10 show plots comparing the distributions for some cases. As seen therein, the distributions were well reproduced. As in the case of daily precipitation amount, we also tested the spatial correlations of TMAX and TMIN, but using only the Pearson coefficient, since their marginal distributions were almost symmetric and had light tails. Table 7 presents the test results, and Fig. 11 shows plots comparing the observational and simulated coefficients in December. The magnitudes of the differences were small in comparison with the case of daily precipitation amount, but they were significant in a fair number of cases. However, since the differences appeared to be tolerable, we concluded that the spatial correlations of daily temperatures were acceptably reproduced.

Comparison between the distributions of the observational and simulated daily maximum temperatures in December: (top) the histograms of the observations against the PDF of the simulation data (dashed line), and the (middle) corresponding boxplots and (bottom) Q–Q plots.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


As in Fig. 9, but for daily minimum temperatures.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


For the months of December, January, and February, test results for the spatial correlation of daily temperatures. For all 91 pairs of 14 stations, the Pearson coefficient was calculated. For examining the difference between the observed and simulated coefficients, we calculate MAE and the number of the 91 pairs for which the difference is significant at the 5% level.

Comparison between the Pearson correlation coefficients of the observational and simulated daily temperatures in December. The gray-colored intervals indicate the 95% range of the simulated coefficients. If the interval does not contain the dashed line of slope 1, then the difference is significant at the 5% level.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


As the autocorrelation of daily precipitation amount was almost negligible in both the observation and simulation data, we omit its testing. Because of space limits, and considering that temperatures reveal strong spatial coherency, we carried out the test on the basin averages of the weather variables instead of on individual stations. The differences between the observational and simulated coefficients at lags up to ±5 days were summarized into the MAE, defined as the average of the absolute differences over all of the lags. Moreover, applying the significance test described in remark 2 at each lag, we counted the number of lags at which the difference is significant at the 5% level. Table 8 presents the test results, and Figs. 12 and 13 show plots comparing the autocorrelations and cross correlations of the observation and simulation data in December, respectively. As seen therein, the differences were insignificant at most of the lags and the major correlation patterns were well captured. Hence, we concluded that the autocorrelations and cross correlations were acceptably reproduced.
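The lag-correlation comparison above (correlations at lags up to ±5 and their MAE) can be sketched as follows; the AR(1) series are synthetic stand-ins for basin-averaged variables, and the helpers are illustrative, not the paper's code.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag) for integer lag."""
    if lag == 0:
        a, b = x, y
    elif lag > 0:
        a, b = x[:-lag], y[lag:]
    else:
        a, b = x[-lag:], y[:lag]
    return float(np.corrcoef(a, b)[0, 1])

def corr_mae(obs_x, obs_y, sim_x, sim_y, lags):
    """MAE between observed and simulated lagged correlations over `lags`."""
    diffs = [abs(lagged_corr(obs_x, obs_y, k) - lagged_corr(sim_x, sim_y, k))
             for k in lags]
    return float(np.mean(diffs))

# Toy AR(1) series standing in for basin-averaged temperature anomalies
rng = np.random.default_rng(1)
def ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

obs = ar1(900, 0.6, rng)
sim = ar1(900, 0.6, rng)
mae = corr_mae(obs, obs, sim, sim, lags=range(-5, 6))
```

With `obs_x = obs_y` this yields the autocorrelation comparison; with two different variables it yields the cross-correlation comparison.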

For the months of December, January, and February, test results for the auto- and cross correlations of the basin-averaged daily temperatures and daily precipitation (PRCP) amounts. Lag ranged from 1 to 5 for autocorrelation and from −5 to 5 for cross correlation. For examining the difference between the observational and simulated coefficients, we calculate MAE and the number of the lags at which the difference is significant at the 5% level.

Comparison between autocorrelation coefficients up to lag 5 of the observational and simulated basin-averaged daily temperatures. Plots of the simulation and observation data are colored black and red, respectively. The intervals indicate the 95% range of the simulated coefficients. If the interval does not contain the red-colored point, the difference is significant at the 5% level.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


Comparison between cross-correlation coefficients up to lag 5 of the observational and simulated basin-averaged daily temperatures and daily precipitation. Plots of the simulation and observation data are colored black and red, respectively. The intervals indicate the 95% range of the simulated coefficients. If the interval does not contain the red-colored point, the difference is significant at the 5% level.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


#### 2) Test for monthly characteristics

First, we consider monthly/seasonal total amounts of precipitation, monthly/seasonal means of daily average temperature *T*_{avg}, and monthly/seasonal frequencies of dry days as the examined characteristics. Here, *T*_{avg}(*s*, *t*) = [*T*_{max}(*s*, *t*) + *T*_{min}(*s*, *t*)]/2 for site *s* and time *t*. The normality test explained in remark 3 was applied to the statistics. Tables S.3–S.5 in the online supplemental material present the *p* values of the test, which are greater than 0.05 in almost all cases. We therefore concluded that the monthly characteristics were also well reproduced.

Additionally, since the basin is very cold during boreal winter, we considered a monthly statistic related to extreme cold events, specifically, the frequency of cold days. Here, a cold day is defined as a day on which the daily maximum temperature is less than its lower 10% quantile. (The quantile varies according to the period and station location under consideration.) Note that its distribution not only is too discontinuous to be treated as continuous but also needs to be examined from the viewpoint of risk management. Thus, we considered the (*p* × 100)% quantile of the simulated frequency distribution with *p* = 0.75, 0.9, that is, the return levels of 4 and 10 years. To test the validity of the simulated return levels, we applied Pearson's *χ*^{2} test to the number of the observational frequencies of cold days exceeding the simulated return level, where the expected number of exceedances is equal to the number of observation years multiplied by (1 − *p*) (see Fig. S.2 in the online supplemental material for an illustration of the test). Tables S.6 and S.7 of the online supplemental material present the *p* values of the test, which are much larger than 5% in almost all cases. We therefore conclude that the frequency of cold days is suitably reproduced.
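A minimal sketch of the return-level exceedance test, assuming the setup described above (simulated return level as an upper quantile, expected exceedance count *n*(1 − *p*), and Pearson's *χ*^{2} on exceed/not-exceed counts); the Poisson counts are synthetic stand-ins for cold-day frequencies.

```python
import numpy as np
from scipy import stats

def return_level_test(obs_counts, sim_counts, p=0.9):
    """Compare observed exceedances of the simulated (p*100)% return level
    with the expected number n*(1-p) via a Pearson chi-square test."""
    level = np.quantile(sim_counts, p)        # simulated return level
    n = len(obs_counts)
    exc = int(np.sum(obs_counts > level))     # observed exceedances
    f_obs = [exc, n - exc]                    # exceed / not-exceed counts
    f_exp = [n * (1 - p), n * p]              # expected counts under validity
    _, pval = stats.chisquare(f_obs, f_exp)
    return float(level), exc, float(pval)

rng = np.random.default_rng(2)
sim_counts = rng.poisson(5, size=10000)       # simulated monthly cold-day counts
obs_counts = rng.poisson(5, size=30)          # 30 "observed" years
level, exc, pval = return_level_test(obs_counts, sim_counts, p=0.9)
```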

### c. Test for informativeness of the hindcast downscaling data

Last, we test the performance of the hindcast downscaling data by measuring how informative their monthly/seasonal characteristics were for predicting actual values. This can be viewed as a preliminary test of how effective the operational seasonal climate forecast is for climate impact studies and thus produces the key results of this paper. In section 4a, we verified that the LiYang10 index was more suitable than the others and that the POAMA and MME hindcasts were skillful for its prediction. Thus, for this index and these hindcasts, we generated 1000 daily scenarios for each year of their hindcast periods in the manner described in remark 4. We considered the monthly/seasonal total amount of precipitation, monthly/seasonal means of daily average temperature, and monthly/seasonal frequencies of dry and cold days as the examined characteristics. Note that the frequencies of dry and cold days frequently have implications for application sectors such as water resource management and health. The statistics were calculated for each scenario and the observation data in each year; then, for each year of the hindcast periods, we had 1000 simulated values of the monthly statistics, which produced their distributions. Figure S.3 in the online supplemental material shows the distributions (boxplots) of the December mean of daily average temperature based on the 1-month-lead POAMA hindcast, with the actual observational values (red points) for some stations. The informativeness of the distributions was measured by the AMSE reduction presented in Eq. (3.4).
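Equation (3.4) is not reproduced in this excerpt, so the sketch below only assumes a plausible form: the AMSE of the scenario ensemble about the observed value relative to the AMSE of a climatological baseline, with the reduction defined as 1 minus that ratio. All names and data are illustrative.

```python
import numpy as np

def amse_reduction(sim, obs):
    """sim: (n_years, n_scenarios) simulated statistic; obs: (n_years,) observed.

    Assumed form: AMSE of the scenarios about the observed value, relative to
    the AMSE of a climatological (mean) baseline; reduction = 1 - ratio.
    """
    amse_sim = np.mean((sim - obs[:, None]) ** 2)
    amse_clim = np.mean((obs - obs.mean()) ** 2)
    return 1.0 - amse_sim / amse_clim

rng = np.random.default_rng(3)
n_years, n_scen = 20, 1000
truth = rng.normal(0.0, 2.0, size=n_years)   # yearly "observed" statistic
# informative scenarios: centered near the truth with extra spread
sim = truth[:, None] + rng.normal(0.0, 0.8, size=(n_years, n_scen))
reduction = amse_reduction(sim, truth)
```

A positive reduction means the ensemble predicts the yearly statistic better than climatology; zero or negative means no added information.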

Figure 14 shows the spreads of the AMSE ratios of the 14 stations for monthly and seasonal totals of daily precipitation and means of daily average temperature of the POAMA hindcast downscaling data. In terms of the monthly statistics, the hindcast downscaling data are more informative in December than in the other months; specifically, the AMSE reductions of the 1-month-lead December total precipitation and mean temperature average around 10% and 15% over the stations, respectively. Moreover, it is notable that the informativeness does not decrease critically with lead time. Meanwhile, in terms of the seasonal statistics, the temperature data are fairly informative over all of the lead times, as the AMSE reduction is around 10% on average over the stations, but the precipitation data are much less informative or not informative at all. Figure 15 shows the results for monthly and seasonal frequencies of dry and cold days of the same downscaling data. The data are informative in terms of the December frequency of cold days over all of the lead times; meanwhile, the 1-month-lead data are significantly informative in terms of the seasonal frequency of dry days. However, the AMSE reductions are less than 10% on station average, smaller than in the cases of monthly/seasonal total precipitation and mean temperature.

The spreads (boxplots) of AMSE ratios of the 14 stations for monthly and seasonal totals of the (top) daily precipitation amount and (bottom) means of daily average temperature, based on POAMA hindcast.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


As in Fig. 14, but for frequencies of (top) dry and (bottom) cold days.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


Figures 16 and 17 show results based on the MME hindcast. Its 1- and 2-month-lead downscaling data are prominently informative in terms of the December mean of daily average temperature (the AMSE reductions are more than 15% and 10% on station average, respectively) and the frequency of cold days (around 10% and 5% on station average, respectively). In particular, the 1-month-lead data are also informative in terms of the DJF mean temperature and frequency of cold days, with AMSE reductions of around 10% and more than 5% on station average, respectively. Moreover, the downscaling data reveal small but persistent informativeness in terms of December total precipitation over all of the lead times (AMSE reductions around 5% on station average), while they are not informative in terms of the frequency of dry days overall.

As in Fig. 14, but based on MME hindcast.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


As in Fig. 15, but based on MME hindcast.

Citation: Journal of Applied Meteorology and Climatology 59, 10; 10.1175/JAMC-D-19-0282.1


## 5. Conclusions

This paper proposed a weather generator–based scheme for downscaling the EAWM strength prediction to the climate of a basin in the Korean Peninsula during boreal winter. Its performance was investigated in terms of the reproducibility of climate characteristics and informativeness using operational hindcasts. Our findings related to the tercile prediction of the EAWM strength and the weather generator model can be summarized as follows:

- Among the EAWM indices listed in Table 2, the LiYang10 index was selected for the downscaling not only because it was highly related to the basin climate but also because the skills of its tercile predictions based on the POAMA and MME hindcasts were relatively good.

- The proposed weather generator was verified to reproduce well the prominent climate characteristics of each tercile category of EAWM strength quantified by the LiYang10 index.

The informativeness test of the hindcast downscaling data produces the key results of the proposed method since it examines how well the data deliver the operational climate forecast information to application sectors. The key findings are summarized as follows:

- The downscaling data of the POAMA and 1-month-lead MME hindcasts were the most informative in terms of December and DJF mean temperature. In particular, the informativeness of the former was fairly retained up to the 3-month lead. They are also informative in terms of December total precipitation, but not as much as for the monthly mean temperature.

- The downscaling data of the 1-month-lead POAMA hindcast were significantly informative in terms of the DJF frequency of dry days, with a relatively large spatial spread. Meanwhile, those of the 1-month-lead MME hindcast were fairly informative in terms of the December and DJF frequency of cold days.

- Overall, the downscaling data are more informative in terms of monthly and seasonal mean temperature and total precipitation than in terms of the frequencies of dry and cold days.

The test results showed the potential of the proposed method for climate impact studies in terms of some monthly and seasonal characteristics. However, it is still unclear whether the degree of informativeness is sufficient for real climate impact studies, since the physical process models used in application fields frequently produce biased outputs. Moreover, application sectors are also concerned with various subseasonal characteristics such as dry-spell duration and cold surges. In a future study, the informativeness test needs to be applied to such characteristics, and the weather generator model will have to be improved to capture them successfully.

The proposed downscaling scheme was based on the tercile probabilistic prediction of EAWM strength using the original definitions of the EAWM indices. However, operational seasonal climate forecasts of monthly precipitation and temperature themselves are also available, and, recently, a modified EAWM index was proposed for improving prediction skill (Shin and Moon 2018). This strongly encourages us to compare the proposed scheme with schemes based on the other types of seasonal climate forecasts and to utilize the modified index for the EAWM strength prediction. Meanwhile, although this paper was motivated by producing downscaling data for climate impact studies, it did not examine how effective the data were for such studies in the application sectors. In future work, collaborating with experts in application sectors, we will investigate climate impacts on water resource management and agriculture in the study area based on the operational forecasts considered in this paper to explore practical implications.

## Acknowledgments

The authors acknowledge the APCC Multi Model Ensemble (MME) Producing Centers for making their data available for analysis and the APEC Climate Center for collecting and archiving the data to organize the APCC MME prediction. The first author thanks Dr. Jihoon Park of the APEC Climate Center for providing the information and references on the study area.

## APPENDIX

### The Technical Details of the Weather Generator Model

The details of the weather generator model and the algorithms used for downscaling are provided here. In what follows, *s*_{1}, …, *s*_{d} denote the stations, where *d* is the number of stations, and the station set is {*s*_{1}, …, *s*_{d}}; meanwhile, {1, …, *L*} denotes the period of the boreal winter season, where *L* is the length of the period (*L* = 92 and 93 for usual and leap years, respectively). Moreover, for a weather variable *X*, *X*(*s*, *t*) denotes its value at site *s* and day *t*, and *X*_{i}(*s*, *t*) denotes the observed value in year *i*. For a set *A*, |*A*| denotes the number of elements in *A*.

#### a. Technical details of the precipitation model

##### 1) Dry-spell generation

For modeling dry-spell length, we considered a negative binomial distribution: let *L* be the length; then

where *μ*^{L} > 1 and *ν*^{L} > 0. Here, *μ*^{L} = *E*(*L*) denotes the mean of the length and *ν*^{L} is the shape parameter. If *ν*^{L} = 1, the distribution reduces to a geometric distribution, which dry-spell length follows under a first-order Markov model assumption. Thus, we can reduce the lack of fit that may be caused by the Markov modeling [cf. Richardson (1981) and Apipattanavis et al. (2007)]. The parameter values varied from month to month, and *μ*^{L} is assumed to be associated with EAWM strength. The values were estimated on a monthly basis using the maximum likelihood method.
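A sketch of dry-spell sampling under the mean–shape parameterization described above; the shift of the support to {1, 2, …} and the resulting formula *p* = *ν*/(*ν* + *μ* − 1) are assumptions made here for illustration.

```python
import numpy as np
from scipy import stats

def sample_dry_spells(mu, nu, size, rng):
    """Dry-spell lengths from a negative binomial with mean mu (> 1) and
    shape nu, shifted so the support is {1, 2, ...} (assumed here).
    nu == 1 reduces to a shifted geometric distribution."""
    p = nu / (nu + mu - 1.0)  # success probability implied by mean and shape
    return stats.nbinom.rvs(nu, p, loc=1, size=size, random_state=rng)

rng = np.random.default_rng(6)
lengths = sample_dry_spells(mu=5.0, nu=2.0, size=200_000, rng=rng)
```

In the paper the pair (*μ*^{L}, *ν*^{L}) would be refitted for each month and EAWM strength category.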

##### 2) Wet-spell generation

To generate wet spells, we consider a three-state, first-order Markov model to simulate the daily wetness sequence, as in Apipattanavis et al. (2007). The daily wetness is categorized into three states: letting *q* > 0 be the median of the basin average of the precipitation amounts at all of the stations on wet days,

States 1 and 2 correspond to the cases that the day is mildly wet and very wet, respectively. We assumed that the daily wetness sequence during a wet spell is a first-order Markov chain terminated by a random stopping time: let

be transition probabilities with *p*_{0,1} + *p*_{0,2} = 1 and *p*_{k,0} + *p*_{k,1} + *p*_{k,2} = 1 for *k* = 1, 2; let {*S*_{l}:*l* = 0, 1, 2, …} be a Markov chain satisfying

and *m* be a stopping time such that

then, *S*_{1}, …, *S*_{m} constitute a daily wetness sequence of wet spell.
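The three-state chain terminated at the return to the dry state can be sketched as follows; the transition probabilities are illustrative values, not estimates from the paper.

```python
import numpy as np

# Illustrative transition matrix among states 0 (dry), 1 (mildly wet),
# 2 (very wet); each row sums to 1, and a spell starts from the dry state.
P = np.array([[0.0, 0.6, 0.4],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def generate_wet_spell(P, rng, max_len=1000):
    """Run the chain from state 0 until it returns to state 0 (the stopping
    time m); the visited wet states S_1, ..., S_m form the wetness sequence."""
    states, s = [], 0
    for _ in range(max_len):
        s = rng.choice(3, p=P[s])
        if s == 0:
            break
        states.append(int(s))
    return states

spell = generate_wet_spell(P, np.random.default_rng(4))
```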

Wet-spell generation was completed by generating precipitation amounts at the stations for each wet day. For site *s*_{j}, *j* = 1, …, *d*, the precipitation amount is assumed to follow a gamma distribution: let *R*(*s*_{1}), …, *R*(*s*_{d}) be the precipitation amounts on wet day, then

where *α*_{j} > 0 and *μ*_{j} > 0 indicate the shape of distribution and precipitation mean intensity, respectively. For modeling spatial dependence among precipitation amounts on a wet day, we adopted a Gaussian-copula-based model: letting Φ denote the cumulative distribution function of standard normal, there exist

such that

where (*Z*_{1}, …, *Z*_{d}) follows a multivariate normal distribution with correlation matrix Σ, and the amounts on a wet day are modeled conditionally given Φ(*Z*_{j}) > *p*_{j} for some *j* = 1, …, *d*.
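One common way to realize a Gaussian-copula construction of this kind is sketched below; since the full equations are not reproduced in this excerpt, the occurrence thresholding and the rescaling of the uniforms are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def copula_precip(alpha, mu, Sigma, p_dry, rng):
    """One wet-day field: correlated latent normals -> uniforms -> gamma
    amounts with mean mu[j]; sites with Phi(Z_j) <= p_dry[j] stay dry."""
    d = len(alpha)
    z = rng.multivariate_normal(np.zeros(d), Sigma)
    u = stats.norm.cdf(z)
    amounts = np.zeros(d)
    wet = u > p_dry
    # rescale the uniforms from (p_dry, 1) to (0, 1) before inverting the gamma CDF
    u_wet = (u[wet] - p_dry[wet]) / (1.0 - p_dry[wet])
    amounts[wet] = stats.gamma.ppf(u_wet, a=alpha[wet], scale=mu[wet] / alpha[wet])
    return amounts

rng = np.random.default_rng(8)
Sigma = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
alpha = np.array([0.7, 0.7, 0.7])   # gamma shapes
mu = np.array([6.0, 6.0, 6.0])      # mean intensities
p_dry = np.array([0.3, 0.3, 0.3])   # per-site dry probabilities
amounts = copula_precip(alpha, mu, Sigma, p_dry, rng)
```

The gamma scale is `mu / alpha`, so each marginal has mean `mu[j]` as in Eq. (A.5); the shared latent Gaussian vector induces the spatial dependence.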

##### 3) Estimation of the parameters

In what follows, the parameters of the wet-spell-generation model are denoted by

[cf. Eqs. (A.2), (A.3), (A.5), and (A.6)]. The parameter vector *θ* is assumed to change from month to month and to be associated with EAWM strength as well. For each combination of month and EAWM strength category, the values of *α*_{j} and *μ*_{j}, *j* = 1, …, *d*, were estimated by the maximum likelihood method using the observations in the month and the years of the category; meanwhile, the breaking point *q* and the transition probabilities *P* were estimated by their empirical counterparts over the same period. The estimation of Σ and *p*_{1}, …, *p*_{d} is dealt with in section S.1 in the online supplemental material.

##### 4) Simulation algorithms

First, the algorithms of wet-spell generation are presented below. Therein, **R***(*t*) = [*R**(*s*_{1}, *t*), …, *R**(*s*_{d}, *t*)] denotes the synthetic precipitation amounts at stations *s*_{1}, …, *s*_{d} on day *t*.

Algorithm 1: Suppose that the value of *θ* is determined beforehand. Let *S** be a given daily wetness.

Algorithm 2: Let *t*_{0} be a given day and suppose that the value of *θ* is determined beforehand. We generate a wet spell starting from the day after *t*_{0} through the following steps:

For a while, we consider determining the values of *θ* in Eq. (A.7) and of *μ*^{L} and *ν*^{L} in Eq. (A.1). It is natural that the weather characteristics vary smoothly from month to month, although they were estimated on a monthly basis. Thus, a mixture between those of adjacent months was taken into consideration. Specifically, in the algorithm presented below, the parameter values were determined in the following manner: let current day *t* lie between the midpoints *t*_{1} and *t*_{2} of a month and the next, that is, *t*_{1} ≤ *t* ≤ *t*_{2}; then choose one of the months of day *t*_{1} and day *t*_{2} with probabilities (*t*_{2} − *t*)/(*t*_{2} − *t*_{1}) and (*t* − *t*_{1})/(*t*_{2} − *t*_{1}), respectively. Last, the parameter values are determined by the chosen month and a given EAWM strength category. The simulation algorithm for precipitation is presented below:
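The month-mixing rule can be sketched directly from the stated probabilities; `pick_month` is an illustrative helper, not the paper's code.

```python
import numpy as np

def pick_month(t, t1, t2, month1, month2, rng):
    """Choose the parameter month for day t with t1 <= t <= t2: month1 with
    probability (t2 - t)/(t2 - t1) and month2 otherwise."""
    w = (t2 - t) / (t2 - t1)
    return month1 if rng.random() < w else month2

rng = np.random.default_rng(5)
# At the midpoints the choice is deterministic; in between it is a mixture.
at_t1 = pick_month(40, 40, 70, "Dec", "Jan", rng)  # always "Dec"
at_t2 = pick_month(70, 40, 70, "Dec", "Jan", rng)  # always "Jan"
frac_dec = np.mean([pick_month(55, 40, 70, "Dec", "Jan", rng) == "Dec"
                    for _ in range(20000)])        # roughly 0.5 at the center
```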

Algorithm 3: Let the simulation period be {1, …, *L*}, where *L* ∈ {92, 93}. Set *t*° to 0 and variable *S* to either “DRY” or “WET,” randomly. We generate **R***(1), …, **R***(*L*) by repeating the following steps:

#### b. Technical details of the temperature model

##### 1) Estimation of the components in Eq. (3.2)

Here, we deal with the estimation of the components in Eq. (3.2). Hereinafter, we drop the subscript of *T*_{$\diamond $} for notational conciseness. The estimation methods presented below apply a local smoothing technique to every station separately; thus, we fix a station *s*. In what follows, {*t*_{1}, …, *t*_{N+1}} denotes the 10-day sparse subset of the winter period, where *t*_{1} = 1 and *t*_{N+1} = *L* are the first and last days of the period and *t*_{k+1} − *t*_{k} = 10 for *k* = 1, …, *N* − 1.

We begin with estimating *μ*(*s*, *t*). For this, we employed the local polynomial regression method: for *t*_{0} in the sparse subset, *μ*(*s*, *t*_{0}) was estimated as the value of *β*_{0} obtained from minimizing

where *h* > 0. In this study, we set *h* = 27. In what follows,

Next, we estimated Λ(*s*, *t*) and Δ(*s*, *t*) together in a regression framework for the deviation. For *t*_{0} in the sparse subset, consider the window of days *t* with |*t* − *t*_{0}| ≤ 15; then, for each year *i*, the deviation is approximated by

Moreover, the precipitation effect was locally quantified by

where **R** = {[*R*(*s*_{1}, *t*), …, *R*(*s*_{d}, *t*)]: *t* in the window}. The coefficients *λ* and *γ* were obtained by minimizing

where **R**_{i} = {[*R*_{i}(*s*_{1}, *t*), …, *R*_{i}(*s*_{d}, *t*)]: *t* in the window} for each year *i*. The resulting estimates depend on *t*_{0} and, thus, are denoted by

respectively. The final estimates were obtained by linear interpolation: for day *t*, take *t*_{k} and *t*_{k+1} in the sparse subset with *t*_{k} ≤ *t* < *t*_{k+1}; then

where *w* = (*t*_{k+1} − *t*)/(*t*_{k+1} − *t*_{k}).

Last, the daily standard deviation *σ*(*s*, *t*) and the standardized anomaly *ε*(*s*, *t*) are estimated. The standard deviation was modeled in a similar fashion but on the logarithmic scale: for a given *t*_{0} in the sparse subset,

where *w*_{j} = exp(−||*s* − *s*_{j}||/*λ*). For the estimation, we considered the objective function

where the estimate depends on *t*_{0} and is so denoted. The final estimate was again obtained by linear interpolation: for day *t*, take *t*_{k} and *t*_{k+1} in the sparse subset with *t*_{k} ≤ *t* < *t*_{k+1}; then

The anomaly was estimated as

##### 2) Anomaly model and simulation

By analyzing the anomaly estimates (not reported here), we recognized that the anomalies followed slightly asymmetric distributions and had a nontrivial correlation structure in space, time, and between variables. Thus, we needed a model capturing such features of the anomaly. In what follows, *ε*_{$\diamond $}(*s*, *t*) was assumed to follow a skewed normal distribution (Azzalini 2013) whose probability density function (PDF) is

where *ξ* and *α* are real-valued parameters and *ω* > 0, and *ϕ* and Φ represent the PDF and CDF of the standard normal distribution, respectively; (*ξ*, *ω*, *α*) is estimated on a monthly basis by the maximum likelihood method based on the anomaly estimates presented in Eq. (A.12).
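Fitting the skew-normal parameters (*ξ*, *ω*, *α*) by maximum likelihood can be sketched with SciPy's `skewnorm`, whose `a`, `loc`, and `scale` correspond to *α*, *ξ*, and *ω*; the anomaly sample below is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic anomalies drawn from skewnorm(a=alpha, loc=xi, scale=omega)
sample = stats.skewnorm.rvs(a=3.0, loc=0.0, scale=1.0, size=5000,
                            random_state=rng)

# Maximum likelihood estimates of (alpha, xi, omega)
a_hat, xi_hat, omega_hat = stats.skewnorm.fit(sample)
```

In the paper this fit would be repeated per month, using the anomaly estimates from Eq. (A.12) rather than synthetic draws.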

Let *H* be the CDF of *ε*_{$\diamond $}(*s*, *t*); we then call *z*_{$\diamond $}(*s*, *t*) = Φ^{−1}{*H*[*ε*_{$\diamond $}(*s*, *t*)]} the *z* score, where *H* is estimated by its empirical distribution based on the anomaly estimates in the month of day *t*. The *z*-score vector is denoted by **z**_{$\diamond $}(*t*) = [*z*_{$\diamond $}(*s*_{1}, *t*), …, *z*_{$\diamond $}(*s*_{d}, *t*)]. Let *ζ*_{k,$\diamond $}(*t*) = ⟨**z**_{$\diamond $}(*t*), **Ψ**_{k,$\diamond $}⟩ be the *k*th principal component (PC) for *k* = 1, …, *d*, where **Ψ**_{k,$\diamond $} is the *k*th empirical orthogonal function (EOF) loading vector of **z**_{$\diamond $}(*t*), whose estimate is obtained from the eigenvalue decomposition of the sample covariance matrix of the *z*-score vector. Then, we have

The PC vectors **X**(*t*) were assumed to follow a vector autoregressive (VAR) model of order *p* ∈ ℕ,

**X**(*t*) = **Γ**_{1}**X**(*t* − 1) + ⋯ + **Γ**_{p}**X**(*t* − *p*) + **E**(*t*),

where {**E**(*t*): *t* ∈ ℤ} is a white-noise error series. The coefficient matrices **Γ**_{1}, …, **Γ**_{p} were estimated using the Yule–Walker method on a monthly basis (cf. Lütkepohl 1991), and the resulting estimates were denoted by **Γ̂**_{1}, …, **Γ̂**_{p}. The estimates of the errors were then obtained as

**Ê**(*t*) = **X**(*t*) − **Γ̂**_{1}**X**(*t* − 1) − ⋯ − **Γ̂**_{p}**X**(*t* − *p*), *t* = *p* + 1, …, *L*.

For simulating the PC vector series, an algorithm is presented below:

Algorithm 4: Generate the simulated series {**X***(*t*)} for *t* = −100 + *p*, …, *L*, starting from the initial values **X***(−100) = ⋯ = **X***(−101 + *p*) = **0**:

In the above algorithm, *t* = −100 + *p*, …, 0 is a burn-in period for obtaining suitable initial values of **X***(1 − *p*), …, **X***(0). In this study, we set *p* = 2. Below, we present a simulation algorithm for the anomaly.
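A simplified sketch of the VAR(*p*) fitting and the burn-in simulation of algorithm 4, under stated assumptions: the coefficients are estimated from the multivariate Yule–Walker equations, Gaussian innovations stand in for the estimated errors, and the coefficient matrices and series lengths are hypothetical:

```python
import numpy as np

def yule_walker_var(X, p):
    """Estimate VAR(p) coefficients from the Yule-Walker equations
    C(h) = sum_j Gamma_j C(h - j), with C(h) = E[X_t X_{t-h}^T]."""
    L, d = X.shape
    Xc = X - X.mean(axis=0)
    C = [Xc[h:].T @ Xc[:L - h] / L for h in range(p + 1)]      # C(0), ..., C(p)
    # Block Toeplitz system: [Gamma_1 ... Gamma_p] R = [C(1) ... C(p)]
    R = np.block([[C[j - i] if j >= i else C[i - j].T for j in range(p)]
                  for i in range(p)])
    G = np.linalg.solve(R, np.hstack(C[1:]).T).T               # R is symmetric
    return [G[:, k * d:(k + 1) * d] for k in range(p)]

def simulate_var(Gammas, d, L, burn=100, rng=None):
    """Simulate a VAR series started from zero vectors, discarding a burn-in
    period as in algorithm 4 (Gaussian innovations used for illustration)."""
    rng = np.random.default_rng() if rng is None else rng
    p = len(Gammas)
    X = np.zeros((burn + L + p, d))                            # first p rows are 0
    for t in range(p, burn + L + p):
        X[t] = sum(G @ X[t - j - 1] for j, G in enumerate(Gammas))
        X[t] += rng.standard_normal(d)
    return X[burn + p:]                                        # keep t = 1, ..., L

rng = np.random.default_rng(2)
true = [0.4 * np.eye(2), 0.2 * np.eye(2)]          # hypothetical VAR(2) coefficients
data = simulate_var(true, d=2, L=20000, rng=rng)
Ghat = yule_walker_var(data, p=2)                  # close to the true matrices
scenario = simulate_var(Ghat, d=2, L=90, rng=rng)  # one 90-day winter scenario
print(scenario.shape)  # (90, 2)
```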

Algorithm 5: We generate new anomaly series through the following steps:

##### 3) The simulation algorithm

For downscaling the EAWM with respect to temperature, we needed a simulation scheme for generating Λ_{$\diamond $}(*s*, *t*) in Eq. (3.2). An algorithm is presented below. Therein,

denotes the vector of historical low-frequency variations of TMAX and TMIN in year *i*, where *s*_{1}, …, *s*_{d} are the stations.

Algorithm 6: Let the collection of years in a given EAWM strength category be given; below, *n* denotes the number of such years.
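The Gaussian functional sampling underlying algorithm 6 can be sketched as follows, with synthetic stand-ins for the historical low-frequency curves and new variations drawn from a multivariate normal carrying the historical sample mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical historical low-frequency variations: n years x m winter days,
# standing in for the TMAX/TMIN curves of years in one EAWM strength category
n, m = 12, 90
days = np.arange(m)
hist = (np.sin(2 * np.pi * days / m)[None, :]       # a common seasonal shape
        + 0.3 * rng.standard_normal((n, 1))         # year-to-year level shifts
        + 0.1 * rng.standard_normal((n, m)))        # small day-to-day noise

# Gaussian functional model: historical sample mean and sample covariance
mean = hist.mean(axis=0)
cov = np.cov(hist, rowvar=False)

# Draw diverse synthetic variations sharing those first two moments; the
# covariance is rank-deficient (n < m), so its validity check is skipped
synthetic = rng.multivariate_normal(mean, cov, size=5, check_valid="ignore")
print(synthetic.shape)  # (5, 90)
```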

Algorithm 6 assumed that the low-frequency variation is a function-valued random variable with Gaussianity (cf. Bosq 2000). Furthermore, it was designed to generate diverse synthetic variations sharing the mean and covariance structure of the historical ones. Finally, the simulation algorithm with respect to temperature is presented below:

Algorithm 7: Let an EAWM strength category be given. Suppose that a precipitation scenario was generated by algorithm 3. We generate a scenario of TMAX and TMIN through the following steps:

## REFERENCES

Adams, H. D., A. P. Williams, C. Xu, S. A. Rauscher, X. Jiang, and N. G. McDowell, 2013: Empirical and process-based approaches to climate-induced forest mortality models. *Front. Plant Sci.*, 4, 438, https://doi.org/10.3389/fpls.2013.00438.

Ahn, J.-B., and H.-J. Kim, 2014: Improvement of 1-month lead predictability of the wintertime AO using a realistically varying solar constant for a CGCM. *Meteor. Appl.*, 21, 415–418, https://doi.org/10.1002/met.1372.

Apipattanavis, S., G. Podestá, B. Rajagopalan, and R. W. Katz, 2007: A semiparametric multivariate and multisite weather generator. *Water Resour. Res.*, 43, W11401, https://doi.org/10.1029/2006WR005714.

Azzalini, A., 2013: *The Skew-Normal and Related Families.* Vol. 3, Cambridge University Press, 262 pp.

Ban, H.-Y., D.-H. Choi, J.-B. Ahn, and B.-W. Lee, 2017: Predicting regional soybean yield using crop growth simulation model. *Korean J. Remote Sens.*, 33, 699–708, https://doi.org/10.7780/kjrs.2017.33.5.2.9.

Bosq, D., 2000: *Linear Processes in Function Spaces: Theory and Applications.* Lecture Notes in Statistics, Vol. 149, Springer-Verlag, 295 pp.

Chan, J. C., and C. Li, 2004: The East Asia winter monsoon. *East Asian Monsoon*, C. P. Chang, Ed., World Scientific, 54–106.

Cobon, D., R. Darbyshire, J. Crean, S. Kodur, M. Simpson, and C. Jarvis, 2020: Valuing seasonal climate forecasts in the northern Australia beef industry. *Wea. Climate Soc.*, 12, 3–14, https://doi.org/10.1175/WCAS-D-19-0018.1.

Eum, H.-I., S. P. Simonovic, and Y.-O. Kim, 2010: Climate change impact assessment using k-nearest neighbor weather generator: Case study of the Nakdong River basin in Korea. *J. Hydrol. Eng.*, 15, 772–785, https://doi.org/10.1061/(ASCE)HE.1943-5584.0000251.

Gong, D.-Y., S.-W. Wang, and J.-H. Zhu, 2001: East Asian winter monsoon and Arctic oscillation. *Geophys. Res. Lett.*, 28, 2073–2076, https://doi.org/10.1029/2000GL012311.

Guo, Q., 1994: Relationship between the variations of East Asian winter monsoon and temperature anomalies in China. *Quart. J. Appl. Meteor.*, 5, 218–225.

Jeong, H., K. Ashok, B. Song, and Y. Min, 2008: Experimental 6-month hindcast and forecast simulation using CCSM3. APEC Climate Center Tech. Rep.

Jones, J. W., J. W. Hansen, F. S. Royce, and C. D. Messina, 2000: Potential benefits of climate forecasting to agriculture. *Agric. Ecosyst. Environ.*, 82, 169–184, https://doi.org/10.1016/S0167-8809(00)00225-5.

Jung, K. Y., K.-L. Lee, T. H. Im, I. J. Lee, S. Kim, K.-Y. Han, and J. M. Ahn, 2016: Evaluation of water quality for the Nakdong River watershed using multivariate analysis. *Environ. Tech. Innovation*, 5, 67–82, https://doi.org/10.1016/j.eti.2015.12.001.

Kanamitsu, M., W. Ebisuzaki, J. Woollen, S.-K. Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter, 2002: NCEP–DOE AMIP-II Reanalysis (R-2). *Bull. Amer. Meteor. Soc.*, 83, 1631–1644, https://doi.org/10.1175/BAMS-83-11-1631.

Kim, M., Y.-B. Yhang, and C.-M. Lim, 2019: Gaussian copula method for bias correction of daily precipitation generated by a dynamical model. *J. Appl. Meteor. Climatol.*, 58, 269–289, https://doi.org/10.1175/JAMC-D-18-0089.1.

Kim, S. T., S.-J. Sohn, and J.-S. Kug, 2017: Winter temperatures over the Korean Peninsula and East Asia: Development of a new index and its application to seasonal forecast. *Climate Dyn.*, 49, 1567–1581, https://doi.org/10.1007/s00382-016-3402-2.

Korea Water Resources Corporation, 2006: *Development of Information System for Drought Management* (in Korean). Korea Water Resources Corporation.

Lee, J., H. Cho, M. Choi, and D. Kim, 2017: Development of land surface model for Soyang River basin. *J. Korea Water Resour. Assoc.*, 50, 837–847, https://doi.org/10.3741/JKWRA.2017.50.12.837.

Li, Y., and S. Yang, 2010: A dynamical index for the East Asian winter monsoon. *J. Climate*, 23, 4255–4262, https://doi.org/10.1175/2010JCLI3375.1.

Lim, C.-H., W.-K. Lee, Y. Song, and K.-C. Eom, 2015: Assessing the EPIC model for estimation of future crops yield in South Korea. *J. Climate Change Res.*, 6, 21–31, https://doi.org/10.15531/KSCCR.2015.6.1.21.

Lim, E.-P., H. H. Hendon, S. Langford, O. Alves, and K. A. Day, 2012: Improvements in POAMA2 for the prediction of major climate drivers and south eastern Australian rainfall. Centre for Australian Weather and Climate Research Tech. Rep. 051, 37 pp.

Lütkepohl, H., 1991: *Introduction to Multiple Time Series Analysis.* 2nd ed. Springer-Verlag, 556 pp.

Merryfield, W. J., and Coauthors, 2013: The Canadian Seasonal to Interannual Prediction System. Part I: Models and initialization. *Mon. Wea. Rev.*, 141, 2910–2945, https://doi.org/10.1175/MWR-D-12-00216.1.

Min, Y.-M., V. N. Kryjov, S. M. Oh, and H.-J. Lee, 2017: Skill of real-time operational forecasts with the APCC multi-model ensemble prediction system during the period 2008–2015. *Climate Dyn.*, 49, 4141–4156, https://doi.org/10.1007/s00382-017-3576-2.

Putman, W. M., and M. Suarez, 2011: Cloud-system resolving simulations with the NASA Goddard Earth Observing System global atmospheric model (GEOS-5). *Geophys. Res. Lett.*, 38, L16809, https://doi.org/10.1029/2011GL048438.

Qiao, S., M. Zou, H. N. Cheung, W. Zhou, Q. Li, G. Feng, and W. Dong, 2020: Predictability of the wintertime 500 hPa geopotential height over Ural-Siberia in the NCEP climate forecast system. *Climate Dyn.*, 54, 1591–1606, https://doi.org/10.1007/s00382-019-05074-8.

Racsko, P., L. Szeidl, and M. Semenov, 1991: A serial approach to local stochastic weather models. *Ecol. Modell.*, 57, 27–41, https://doi.org/10.1016/0304-3800(91)90053-4.

Richardson, C. W., 1981: Stochastic simulation of daily precipitation, temperature, and solar radiation. *Water Resour. Res.*, 17, 182–190, https://doi.org/10.1029/WR017i001p00182.

Risbey, J. S., and D. Entekhabi, 1996: Observed Sacramento basin streamflow response to precipitation and temperature changes and its relevance to climate impact studies. *J. Hydrol.*, 184, 209–223, https://doi.org/10.1016/0022-1694(95)02984-2.

Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. *J. Climate*, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.

Shi, N., 1996: Features of the East Asian winter monsoon intensity on multiple time scale in recent 40 years and their relation to climate. *J. Appl. Meteor. Sci.*, 7, 175–182.

Shin, S.-H., and J.-Y. Moon, 2018: Prediction skill for the East Asian winter monsoon based on APCC multi-models. *Atmosphere*, 9, 300, https://doi.org/10.3390/atmos9080300.

Tian, B., and K. Fan, 2020: Different prediction skill for the East Asian winter monsoon in the early and late winter season. *Climate Dyn.*, 54, 1523–1538, https://doi.org/10.1007/s00382-019-05068-6.

Wang, L., and W. Chen, 2014: An intensity index for the East Asian winter monsoon. *J. Climate*, 27, 2361–2374, https://doi.org/10.1175/JCLI-D-13-00086.1.

Wang, L., W. Chen, W. Zhou, and R. Huang, 2009: Interannual variations of East Asian trough axis at 500 hPa and its association with the East Asian winter monsoon pathway. *J. Climate*, 22, 600–614, https://doi.org/10.1175/2008JCLI2295.1.

Weigel, A. P., M. A. Liniger, and C. Appenzeller, 2007: The discrete Brier and ranked probability skill scores. *Mon. Wea. Rev.*, 135, 118–124, https://doi.org/10.1175/MWR3280.1.

Wilby, R. L., and T. Wigley, 1997: Downscaling general circulation model output: A review of methods and limitations. *Prog. Phys. Geogr.*, 21, 530–548, https://doi.org/10.1177/030913339702100403.

Williams, J. W., and S. T. Jackson, 2007: Novel climates, no-analog communities, and ecological surprises. *Front. Ecol. Environ.*, 5, 475–482, https://doi.org/10.1890/070037.

Wu, B., and J. Wang, 2002: Winter Arctic Oscillation, Siberian high and East Asian winter monsoon. *Geophys. Res. Lett.*, 29, 1897, https://doi.org/10.1029/2002GL015373.

Xu, S., and J. Ji, 1965: The climate and weather features during the outbreak period of China's winter monsoon (in Chinese). *Proc. Geographical Symp. on Arid Areas*, Vol. 9, Beijing, China, Chinese Geographical Society, 85–101.

Zhao, S., M. F. Stuecker, F.-F. Jin, J. Feng, H.-L. Ren, W. Zhang, and J. Li, 2020: Improved predictability of the Indian Ocean dipole using a stochastic dynamical model compared to the North American Multimodel Ensemble Forecast. *Wea. Forecasting*, 35, 379–399, https://doi.org/10.1175/WAF-D-19-0184.1.