Enhancing Regional Climate Downscaling through Advances in Machine Learning

Neelesh Rampal (a,b) (https://orcid.org/0000-0001-9801-9348), Sanaa Hobeichi (b,c), Peter B. Gibson (d), Jorge Baño-Medina (e,f), Gab Abramowitz (b), Tom Beucler (g,h), Jose González-Abad (e), William Chapman (i), Paula Harder (j), and José Manuel Gutiérrez (e)

a National Institute of Water and Atmospheric Research, Auckland, New Zealand
b Climate Change Research Centre and ARC Centre of Excellence for Climate Extremes, University of New South Wales Sydney, Sydney, New South Wales, Australia
c UNSW AI Institute and Data Science Hub, University of New South Wales Sydney, Sydney, New South Wales, Australia
d National Institute of Water and Atmospheric Research, Wellington, New Zealand
e Instituto de Física de Cantabria, CSIC–Universidad de Cantabria, Santander, Spain
f Center for Western Weather and Water Extremes, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, California
g Faculty of Geosciences and Environment, University of Lausanne, Lausanne, Vaud, Switzerland
h Expertise Center for Climate Extremes, University of Lausanne, Lausanne, Vaud, Switzerland
i National Center for Atmospheric Research, Boulder, Colorado
j Fraunhofer ITWM, Kaiserslautern, Germany

Abstract

Despite the sophistication of global climate models (GCMs), their coarse spatial resolution limits their ability to resolve important aspects of climate variability and change at the local scale. Both dynamical and empirical methods are used for enhancing the resolution of climate projections through downscaling, each with distinct advantages and challenges. Dynamical downscaling is physics based but comes with a large computational cost, posing a barrier for downscaling an ensemble of GCMs large enough for reliable uncertainty quantification of climate risks. In contrast, empirical downscaling, which encompasses statistical and machine learning techniques, provides a computationally efficient alternative to downscaling GCMs. Empirical downscaling algorithms can be developed to emulate the behavior of dynamical models directly, or through frameworks such as perfect prognosis in which relationships are established between large-scale atmospheric conditions and local weather variables using observational data. However, the ability of empirical downscaling algorithms to apply their learned relationships out of distribution into future climates remains uncertain, as is their ability to represent certain types of extreme events. This review covers the growing potential of machine learning methods to address these challenges, offering a thorough exploration of the current applications and training strategies that can circumvent certain issues. Additionally, we propose an evaluation framework for machine learning algorithms specific to the problem of climate downscaling as needed to improve transparency and foster trust in climate projections.

Significance Statement

This review offers a significant contribution to our understanding of how machine learning can offer a transformative change in climate downscaling. It serves as a guide to navigate recent advances in machine learning and how these advances can be better aligned toward inherent challenges in climate downscaling. In this review, we provide an overview of these recent advances with a critical discussion of their advantages and limitations. We also discuss opportunities to refine existing machine learning methods alongside new approaches for the generation of large ensembles of high-resolution climate projections.

© 2024 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Publisher’s Note: This article was revised on 12 April 2024 to correct a production error in the display of Fig. 4 that appeared when originally published.

Corresponding author: Neelesh Rampal, neelesh.rampal@niwa.co.nz

1. Introduction

Climate projections are crucial for addressing the challenges of climate change and making informed decisions about mitigation and adaptation (Maraun 2016; Marotzke et al. 2017; Wilby and Wigley 1997). Climate projections are produced by global climate models (GCMs), which simulate the global climate response to different forcings, namely, greenhouse gas emissions scenarios (Meinshausen et al. 2020; Moss et al. 2010). The Coupled Model Intercomparison Project (CMIP) initiative produces multi-GCM ensembles of centennial climate projections that constitute the main source of information for climate change studies. The two latest ensembles available are CMIP5 (Taylor et al. 2012) and CMIP6 (Eyring et al. 2016), with typical horizontal grid resolutions of around 100–200 km in the atmosphere. However, the coarse spatial resolution of GCMs often limits their ability to accurately simulate climate changes at regional and local scales, where the impacts of climate change are experienced (Benestad 2004, 2010; Fowler et al. 2007; Maraun 2016; Wilby and Wigley 1997). To enhance the spatial resolution of climate projections, a process known as climate downscaling, two techniques are frequently employed: empirical downscaling, involving statistical and machine learning (ML) methods, and dynamical downscaling.

Dynamical downscaling refers to running a regional climate model (RCM), which is typically nested using the lateral boundary conditions from a “host” GCM. RCMs aim to simulate physical processes that are not properly resolved at the resolution of the GCMs over a particular domain of interest (Feser et al. 2011; Xu et al. 2019) and develop their own regional convection and mesoscale circulation independent of the GCMs (Giorgi et al. 1994; Jones et al. 1995). Through better resolving key terrain features such as mountain ranges, valleys, and coastal boundaries, they generally improve the representation of important local and mesoscale circulation features (Gensini et al. 2023; Hoogewind et al. 2017; Liu et al. 2017; Prein et al. 2015), as illustrated in Fig. 1a. While there is ongoing discussion around the extent to which projections from RCMs improve or “add value” to projections by the GCMs (Lloyd et al. 2021), many studies have presented examples where RCMs have greatly reduced biases relative to GCM outputs (Careto et al. 2022; Di Virgilio et al. 2019, 2020; Feser et al. 2011; Rummukainen 2016). Furthermore, through the inclusion of local feedbacks, RCMs may produce unique climate change signals not captured by their host GCMs (Boé et al. 2020; Giorgi et al. 2016; Lloyd et al. 2021; Taranu et al. 2023). Prominent international initiatives like the Coordinated Regional Downscaling Experiment (CORDEX) are paramount in setting standards for regional climate downscaling efforts, fostering collaboration, consistency, and improving data access (Diez-Sierra et al. 2022; Giorgi et al. 2009; Giorgi and Gutowski 2015).

Fig. 1.

(a) The spatial resolution of precipitation from a typical CMIP6 GCM (∼100 km; left plot) vs the detail level required (12 km; right plot) for climate impact research from an RCM over the New Zealand region. Shaded values are daily precipitation amount (mm day−1), and contours are mean sea level pressure (hPa). (b) Illustration of the concept of uncertainty in climate projections by comparing two scenarios; the small ensemble (green) is a random subset of the large ensemble (black). (c) An illustration of how the future climate can be outside the distribution of the present-day climate, using a comparison of specific humidity at 850 hPa (g kg−1), area averaged over the New Zealand region, extending from 150°E to 160°W longitude and 25° to 50°S latitude. The red distribution represents historical ERA5 (1975–2014), and the blue one depicts results for the future climate scenario from the Australian Community Climate and Earth System Simulator (ACCESS-CM2) for SSP5-8.5 (2060–99).


Very recently, a push toward more explicitly resolving convection in RCMs has been made possible by computational advances. As such, some RCMs with very high resolution (1–4 km) can now more directly capture the effects of climate change on local extreme weather phenomena, including flash floods and hailstorms (Adinolfi et al. 2023; Coppola et al. 2020; Kendon et al. 2023; Prein et al. 2015). However, resolutions on the order of 100 m or finer are still needed to resolve shallow convection fully explicitly, and uncertainties remain regarding the assumptions of other parameterizations (e.g., microphysics) in the push for higher-resolution simulations (Solman et al. 2021).

Despite these advances there are many ongoing challenges for RCMs. The primary hurdle is their high computational cost, which results in most CORDEX-type experiments today being performed at a “gray zone” resolution (12–25 km), a range over which convection is parameterized, leading to considerable uncertainty, particularly for simulating precipitation extremes and related flooding events (Ban et al. 2014; Coppola et al. 2020). The high computational cost of running RCMs also typically results in a small sample of GCMs being dynamically downscaled (Schär et al. 2020). Furthermore, typically only a single initial condition ensemble member is downscaled per GCM, meaning that an assessment of internal variability uncertainty is generally not possible, despite the known importance of this uncertainty on regional scales (Deser et al. 2012; Deser and Phillips 2023; Gibson et al. 2024), as illustrated in Fig. 1b. Additionally, the climate change signal can be sensitive to different parameterization schemes or the specific RCM used for downscaling, which is not often properly accounted for due to computational limits (Giorgi and Gutowski 2015; Gutiérrez et al. 2019; Jacob et al. 2020; Prein et al. 2015). Another significant issue is that dynamical downscaling simulations can, in some cases, amplify biases present in the GCMs (Diaconescu et al. 2007).

Empirical downscaling algorithms are several orders of magnitude more computationally efficient than RCMs, enabling the downscaling of a much larger ensemble of GCMs (Wilby and Wigley 1997). Empirical downscaling algorithms (encompassing both statistical and ML) have been shown to add value and reduce biases present in GCMs (Baño-Medina et al. 2021, 2022; Benestad 2004, 2010; Benestad et al. 2007; Boé et al. 2007; Gutiérrez et al. 2019; Hannachi et al. 2007; Schmidli et al. 2006; Themeßl et al. 2012; Vrac et al. 2007). In this review, we divide empirical downscaling strategies into two main subsets: observational downscaling and RCM emulation. Observational downscaling, often also referred to as empirical statistical downscaling, refers to training a downscaling algorithm in which the target is observational data. In contrast, an RCM emulator aims to replicate the functionality of a physics-based RCM and can be trained on simulations from historical and future climates. In both observational downscaling and RCM emulation, there has been a recent shift from traditional statistical methods to modern ML approaches, including deep learning. Deep learning is a subset of ML that specifically focuses on ML algorithms (e.g., neural networks) with multiple hidden layers (LeCun et al. 2015).

While deep learning–based downscaling algorithms have often outperformed traditional statistical algorithms (e.g., Baño-Medina et al. 2020; Gaitan et al. 2014; Hobeichi et al. 2023; Nishant et al. 2023; Rampal et al. 2022a; Tang et al. 2016), they also inherit many longstanding challenges that have been well documented for statistical downscaling algorithms (e.g., Benestad 2004; Maraun et al. 2015). These include the issue of domain adaptation (or extrapolation), in which future climate projections are outside the training distribution (out of distribution) for observational downscaling algorithms, which are trained exclusively on observational data (Baño-Medina et al. 2021; Gutiérrez et al. 2019; Hernanz et al. 2022a,b; Kumar et al. 2012; Maraun 2016; Wang et al. 2018; Wilby and Wigley 1997), as illustrated in Fig. 1c. Similarly, emulators are often trained on a specific or limited set of RCM simulations and are required to adapt to other unseen climate projections from a different GCM (Chadwick et al. 2011; Doury et al. 2023).

Modern ML and deep learning downscaling algorithms also present their own unique challenges that need to be addressed. For example, ML algorithms are often perceived as “black boxes,” where their complexity makes it difficult to decipher details of their decision-making process (McGovern et al. 2019). This lack of transparency can reduce trust in the predictions (Baño-Medina 2020; González-Abad et al. 2023a; Linardatos et al. 2020; McGovern et al. 2019; Rampal et al. 2022a).

Although existing review papers cover certain aspects of empirical downscaling algorithms (Benestad 2004; Maraun 2016; Teutschbein and Seibert 2012; Wilby and Wigley 1997) and ML advancements across climate science and weather forecasting more generally (Chen et al. 2023; de Burgh-Day and Leeuwenburg 2023; Irrgang et al. 2021; Materia et al. 2023; McGovern et al. 2023; Molina et al. 2023; Watson-Parris 2021) including explainable artificial intelligence (XAI; McGovern et al. 2019), none specifically addresses the context and unique challenges of recent advances in ML for climate downscaling.

This review highlights the utility and recent advances of ML in empirical downscaling, identifying key research gaps and emerging opportunities. It primarily focuses on how recent ML advances can enhance climate downscaling methods in the context of specific research questions:

  • Can empirical downscaling algorithms adequately reproduce the present-day climate, including its extremes?

  • How effectively can empirical downscaling algorithms adapt outside their training distribution to unobserved historical and future climates?

  • What techniques can be implemented to enhance the usability, transparency, and interpretability of empirical downscaling algorithms?

An overview of the key topics discussed in this review is illustrated in Fig. 2. Section 2 provides an overview of empirical downscaling, and section 3 introduces computer-vision-based climate downscaling algorithms. Sections 4 and 5 then discuss recent ML advances in observational downscaling and RCM emulation, respectively. Section 6 discusses strategies to enhance the out-of-distribution performance of empirical downscaling algorithms. Following this, section 7 outlines various future research directions. Section 8 presents an evaluation framework for ML-based empirical downscaling algorithms, and, last, section 9 discusses how machine learning can more effectively contribute to collaborative initiatives like CORDEX.

Fig. 2.

An overview of the topics concerning climate downscaling that are discussed in this review.


2. An overview of climate downscaling

This section will first introduce several well-established observational downscaling frameworks (section 2a) and then provide a brief overview of traditional statistical and ML approaches from this extensive body of research (section 2b). Last, it will introduce RCM emulators (section 2c), noting that RCM emulator studies are an emerging area of research.

a. Observational downscaling frameworks

Many different observational downscaling (or empirical statistical downscaling) techniques have been studied and reviewed for their utility in climate downscaling (Benestad 2004; Benestad et al. 2007; Gutiérrez et al. 2019; Maraun 2016; Yiou 2014). These can be grouped into four main families: perfect prognosis (PP), superresolution (SR), weather generators (WGs), and model output statistics (MOS). Although this review emphasizes ML advancements in PP and SR, we will provide a short historical background of each field for context.

1) PP

PP involves training an algorithm (e.g., multiple linear regression) to establish associations between coarse-resolution prognostic variables (e.g., variables fundamental to the equations of motion) at the GCM scale (∼100 km) and surface-level local-scale climate variables (e.g., rainfall observations), as illustrated in Fig. 3a. In PP, large-scale predictor fields usually comprise a mix of thermodynamic (e.g., temperature) and dynamic variables (e.g., zonal wind) across multiple atmospheric levels. PP is trained using observational products such as atmospheric reanalyses, based on the assumption that the relationships are “perfect,” and that these learned relationships can be applied to GCM fields after training (Glahn and Lowry 1972; Wilby and Wigley 1997). The trained PP algorithms are then applied to predictors simulated by GCMs in both historical and future climates, providing climate projections at a finer resolution than the host GCM (Fowler et al. 2007). This concept of applying an algorithm trained on observational datasets to GCM predictor fields is known as domain adaptation, which is discussed further in sections 4 and 5. PP algorithms, which downscale surface-level variables such as temperature or rainfall, often operate without using the targeted surface variable as a coarse-scale predictor. For instance, a PP algorithm downscaling a variable such as precipitation might not incorporate precipitation as a predictor field.
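
To make the PP setup concrete, the sketch below fits a simple linear-regression PP model that maps flattened coarse-resolution predictor fields onto a local surface variable. The synthetic arrays are stand-ins for reanalysis predictors and station observations; all names, sizes, and the choice of regression model are illustrative assumptions rather than a configuration from any study cited here.

```python
# Minimal perfect-prognosis (PP) sketch: map coarse-resolution predictor fields
# (e.g., reanalysis winds, humidity, temperature) onto a local surface variable
# (e.g., station precipitation). Synthetic data are used purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_days, n_vars, n_lat, n_lon = 1000, 3, 10, 12                  # hypothetical coarse grid
predictors = rng.normal(size=(n_days, n_vars, n_lat, n_lon))    # reanalysis-like fields
station_obs = rng.gamma(shape=2.0, scale=2.0, size=n_days)      # station-like target series

# Flatten each day's predictor fields into a single feature vector
X = predictors.reshape(n_days, -1)

# Fit on an earlier period, evaluate on a held-out later period
model = LinearRegression().fit(X[:800], station_obs[:800])
print("R^2 on held-out days:", model.score(X[800:], station_obs[800:]))

# At inference time, the same mapping would be applied to (bias-adjusted) GCM
# predictor fields to produce local-scale projections.
```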

Fig. 3.

(a) Comparison of PP (left plot) and SR (right plot) approaches for climate downscaling in the New Zealand region. In PP, an ML algorithm is trained to map large-scale circulation fields from reanalyses at the resolution of a typical CMIP6 GCM (∼100 km) onto a high-resolution target. In SR downscaling, an algorithm is trained to map from a coarsened target (∼100 km) onto a high-resolution target. The high-resolution target can either be a dataset of high-resolution observational data [as in (a)] or be a simulated quantity in an RCM [as in (b)], both of which can serve as training data. The SR approaches can also include additional predictor variables. (b) A comparison between the perfect and imperfect training frameworks for RCM emulation. The perfect framework uses coarsened RCM fields as predictor variables for an ML algorithm, whereas the imperfect framework uses the GCM fields directly. Here, the target variables are high-resolution RCM fields (i.e., precipitation).


2) SR

Differing from traditional PP approaches, we also discuss SR downscaling in this review, which has received a lot of attention in the ML community. SR downscaling can be viewed as a special case of PP in which a coarse-resolution version of the surface field, such as precipitation, is used on its own to predict its high-resolution counterpart, as illustrated in Fig. 3a.

3) WGs

WGs are based on stochastic statistical algorithms designed to mimic the distribution and temporal dependence of meteorological variables (Langousis et al. 2016; Langousis and Kaleris 2014; Wilby et al. 2002; Yiou 2014). They are primarily focused on accurately replicating spatiotemporal dynamics, correlation structures, weather persistence, and natural variability of the variables of interest. In contrast to PP methods, which predict local conditions from large-scale fields, weather generators formulate synthetic weather scenarios from the statistical properties of the data itself.

4) MOS

MOS is a method to directly bias correct GCM output fields by applying a statistical linking function. This function is developed based on the relationship between coarse-scale fields and the local-scale observed data—generally using a distributional relationship such as empirical quantile mapping (as in most MOS applications there is a lack of day-to-day correspondence between the coarse-scale GCM fields and the observed data). A major difference from PP is that MOS adjustments are made on a per-GCM (model specific) basis and are therefore not transferable between GCMs. PP is therefore more flexible, generalizable, and closely aligned with the task of a dynamical RCM.
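
As an illustration of the distributional mapping typically used in MOS, the sketch below implements simple empirical quantile mapping on synthetic data; the quantile resolution, function name, and data are illustrative assumptions, not a specific published configuration.

```python
# Minimal empirical quantile-mapping sketch (a common MOS technique): each GCM
# value is mapped to the observed value with the same quantile in the training
# climatology. Arrays here are synthetic placeholders.
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_values):
    """Map gcm_values onto the observed distribution via empirical quantiles."""
    quantiles = np.linspace(0.0, 1.0, 101)
    gcm_q = np.quantile(gcm_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # Locate each value's quantile in the GCM climatology, then read off the
    # corresponding observed value.
    value_quantiles = np.interp(gcm_values, gcm_q, quantiles)
    return np.interp(value_quantiles, quantiles, obs_q)

rng = np.random.default_rng(1)
obs_hist = rng.gamma(2.0, 3.0, size=5000)       # observed daily precipitation (mm)
gcm_hist = rng.gamma(2.0, 2.0, size=5000)       # biased GCM counterpart
gcm_future = rng.gamma(2.0, 2.4, size=5000)     # future GCM simulation

corrected = quantile_map(gcm_hist, obs_hist, gcm_future)
print(corrected.mean(), gcm_future.mean())
```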

b. Traditional observational downscaling algorithms

Traditional PP methods usually focus on downscaling variables to one location or “site” at a time, such as a station-based observation. In these “site specific” algorithms (Fig. 4a), the input feature vector typically consists of prognostic variables (e.g., zonal wind) extracted from a single coarse-resolution grid point or a collection of spatially coherent grid cells (Bedia et al. 2020), such as neighboring grid cells or empirical orthogonal functions of spatial patterns (Renwick et al. 2009). There have been a wide variety of site-specific implementations for observational downscaling, including multiple linear regression (e.g., Rampal et al. 2022a; Renwick et al. 2009; Sharifi et al. 2019), random forest (Hutengs and Vohland 2016), and generalized linear models (Baño-Medina et al. 2020). However, there are also site-specific downscaling techniques that use deep learning architectures such as multilayer perceptron (e.g., Cannon 2008; Hobeichi et al. 2023; Rampal et al. 2022a; Vandal et al. 2019) and long short-term memory (LSTM; Bittner et al. 2023; Nourani et al. 2023).
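
As a concrete illustration of this site-specific setup, the sketch below fits one small model independently per site; the data, site count, and model choice (a random forest) are synthetic, illustrative stand-ins rather than the configuration of any cited study.

```python
# Site-specific downscaling sketch: one model is fitted independently per
# station/grid point, using predictors extracted at (or near) that location.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_days, n_sites, n_features = 1500, 5, 8
site_predictors = rng.normal(size=(n_days, n_sites, n_features))  # local coarse-scale predictors
site_targets = rng.gamma(2.0, 2.0, size=(n_days, n_sites))        # local observations

site_models = {}
for s in range(n_sites):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(site_predictors[:1200, s], site_targets[:1200, s])   # training period
    site_models[s] = model

# Each site's model is then applied to that site's (bias-adjusted) GCM predictors.
predictions = {s: m.predict(site_predictors[1200:, s]) for s, m in site_models.items()}
```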

Fig. 4.

An illustration of three ML algorithms used for climate downscaling. (a) First, a random forest algorithm is shown, in which an individual algorithm is developed for each grid cell, based on climate features extracted at that specific grid location. (b) Second, a CNN architecture, capable of automatically extracting features from all grid cells within the spatial domain as a one-dimensional vector, is shown. (c) Third, an end-to-end CNN architecture known as U-Net is illustrated, which has a contractive path that reduces the spatial resolution of the input image by a factor of 4.


Site-specific downscaling algorithms are often simple, interpretable, and generally require less training data than more complex ML algorithms. Their simplicity and interpretability facilitate a clear understanding of their decision-making and can easily be adapted to the requirements of different end users. Complex algorithms [e.g., consisting of millions of parameters such as in convolutional neural networks (CNNs)] are not always required for satisfactory performance in these downscaling tasks. In some instances, simpler downscaling algorithms can often yield better out-of-distribution performance when applied to future unseen climates (Gutiérrez et al. 2019).

However, a notable disadvantage of these site-specific algorithms is that they generally do not utilize “nonlocal” information (i.e., from the neighboring grid points) unless manually included as additional predictors. This aspect becomes particularly important when downscaling nonlinear variables such as rainfall, which are often dependent on spatial gradients of atmospheric variables (Benestad 2010; Benestad et al. 2007; Rampal et al. 2022a; Renwick et al. 2009). Additionally, applying site-specific algorithms on a point-by-point basis over large gridded or station-based datasets can become computationally expensive, albeit still much more efficient than running RCMs. Furthermore, the selection of predictors used in traditional observational downscaling algorithms can significantly change future climate projections and in particular the climate change signal (Balmaceda-Huarte and Bettolli 2022; Benestad 2004; Legasa et al. 2023; Manzanas et al. 2020; Maraun 2016).

c. RCM emulators

The aim of an RCM emulator is to act as a computationally efficient surrogate of an RCM. Training an RCM emulator closely mirrors the process of most observational downscaling algorithms, differing primarily in the source of the input variables. Instead of using reanalysis and observational data, training an RCM emulator involves using coarse-resolution simulated quantities from a GCM or RCM as input, and high-resolution predictand variables from an RCM as the target. Consequently, for RCM emulation, methodologies like PP (including SR) and MOS, which are generally described in an observational downscaling context, can be more broadly applied.

Training an RCM emulator offers significant advantages over observational downscaling algorithms. First, emulators eliminate the need for observational data, which can be a major limiting factor in observation-sparse regions. Second, RCM emulators can generate very high-resolution output that is constrained only by the resolution of the RCM output, as opposed to the resolution of gridded observational data. It is important to emphasize that a trained emulator cannot be expected to overcome biases and limitations present in the RCMs (Giorgi et al. 2009; Prein et al. 2015; Xu et al. 2019). Third, RCM emulators can be trained on simulations from both historical and future periods, unlike observational downscaling algorithms. An RCM emulator also has several practical advantages over using an RCM. RCMs often require specific variables from GCMs at certain pressure levels and temporal frequencies, which may not be publicly accessible or may be restricted, potentially limiting the choice of GCMs for downscaling. Conversely, an RCM emulator can be trained more flexibly to bypass these input data requirements, enabling emulators to be more broadly applicable across the GCM–RCM matrix.

There are two important training frameworks in RCM emulation, known as the “perfect” and “imperfect” model frameworks (Baño-Medina et al. 2023; Boé et al. 2023; Doury et al. 2023; van der Meer et al. 2023), as illustrated in Fig. 3b. In the imperfect model framework, an emulator is trained to directly map from GCM to RCM resolution, closely resembling the true function of an RCM. Here, the imperfect model framework serves a dual purpose: it downscales the data and accounts for discrepancies between the GCM and RCM outputs. The emulator thus learns a relationship that is unique to that particular GCM–RCM pair (Boé et al. 2023) and is therefore a MOS technique. In contrast, the perfect framework involves coarsening the RCM resolution to that of a GCM and subsequently training an algorithm to map between the coarsened RCM and the RCM itself.

The rationale for the perfect model framework lies in the typically weak correlation and certain degree of “independence” between RCM and GCM fields in the imperfect framework (Bartók et al. 2017; Boé et al. 2020; Sørland et al. 2018). Training in the perfect framework is more closely aligned with PP, because it only learns general relationships between low- and high-resolution RCM pairs (Boé et al. 2023). The consistency between the low- and high-resolution RCM pairs when training in the perfect framework simplifies the emulator’s training in comparison with the imperfect framework.
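
To make the distinction concrete, the sketch below constructs training pairs for both frameworks from synthetic gridded fields using xarray; the grid sizes, variable names, and coarsening factor are illustrative assumptions only.

```python
# Sketch of how training pairs differ between the "perfect" and "imperfect"
# RCM-emulation frameworks, using synthetic gridded fields. In the perfect
# framework the predictor is the RCM field coarsened to GCM-like resolution;
# in the imperfect framework the predictor comes from the driving GCM itself.
import numpy as np
import xarray as xr

rng = np.random.default_rng(3)

# High-resolution RCM precipitation on a 128 x 128 grid (the emulation target)
rcm = xr.DataArray(rng.gamma(2.0, 2.0, size=(100, 128, 128)),
                   dims=("time", "y", "x"), name="pr_rcm")

# Driving GCM precipitation on a coarser 16 x 16 grid
gcm = xr.DataArray(rng.gamma(2.0, 2.0, size=(100, 16, 16)),
                   dims=("time", "y", "x"), name="pr_gcm")

# Perfect framework: coarsen the RCM by a factor of 8 to mimic GCM resolution
rcm_coarse = rcm.coarsen(y=8, x=8).mean()

perfect_pairs = (rcm_coarse, rcm)   # (predictor, target)
imperfect_pairs = (gcm, rcm)        # (predictor, target)
```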

3. Climate downscaling with computer-vision algorithms

This section aims to provide an overview of recent computer-vision-based downscaling algorithms, which are applicable to both observational downscaling and RCM emulation. This review focuses on CNNs (section 3a) and generative downscaling algorithms (section 3b).

Today, the application of computer-vision algorithms has become increasingly prominent in the field of climate downscaling. Notably, a single algorithm can now be trained to seamlessly downscale an entire regional domain, such as a continent. In many traditional observational downscaling approaches, a single algorithm is trained per grid point, meaning that computer-vision-based downscaling approaches can simplify and improve the efficiency of the downscaling process. The algorithms most employed in this domain are CNNs and generative adversarial networks (GANs).

a. Regression-based CNNs

CNNs have been increasingly used for climate downscaling. They have been particularly successful due to their “spatial awareness,” arising from convolutional operations, and their ability to automatically learn and extract complex and nonlinear spatial features relevant to a task (LeCun et al. 2015). Like other neural networks, CNNs are also able to predict multiple outputs simultaneously (Fig. 4b), meaning that they can efficiently downscale gridded high-resolution fields.

CNNs generally consist of multiple layers, where each layer consists of a series of kernels that are trained to extract features from the predictors (LeCun et al. 2015). During training, the kernel weights are optimized based on an objective function (e.g., mean square error), such that the features extracted from the large-scale predictor fields are “optimal” for the task of downscaling, without any need for manually deciding which information should be incorporated to capture spatial dependencies, as required in traditional site-specific algorithms.

In this review we discuss two commonly used regression-based CNN architectures: 1) traditional/standard CNN architectures with dense/fully connected layers (Fig. 4b) and 2) end-to-end convolutional architectures such as U-Net (Ronneberger et al. 2015). A U-Net architecture (Fig. 4c) differs slightly from the CNN architecture shown in Fig. 4b and is distinctive for its U-shaped design. It consists of a contracting path for feature extraction (reducing the dimensionality of the input to a latent space vector) and an expansive path in which the high-resolution image is generated from the extracted features (latent space vector). Rather than using dense layers to predict the final high-resolution output, the U-Net uses convolutional layers (often with upsampling layers to increase the spatial resolution) for downscaling. A key difference between U-Net architectures and other end-to-end CNN architectures is their long-range skip connections. Here, intermediate outputs at multiple stages in the contracting path are mixed (or concatenated) with the expansive path.
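
As a minimal illustration of these architectural ideas, the PyTorch sketch below implements a toy U-Net-style network with one contracting/expansive level and a long-range skip connection; the channel counts, depth, and input sizes are arbitrary choices for illustration and are not taken from any study discussed here.

```python
# Minimal U-Net-style sketch in PyTorch: a contracting path, an expansive path,
# and a long-range skip connection that concatenates encoder features with the
# decoder features at the same resolution.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, out_channels=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                                    # contracting path
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)  # expansive path
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_channels, 1))

    def forward(self, x):
        e1 = self.enc1(x)                    # features at input resolution
        e2 = self.enc2(self.down(e1))        # features at half resolution
        d = self.up(e2)                      # back to input resolution
        d = torch.cat([d, e1], dim=1)        # long-range skip connection
        return self.dec(d)

x = torch.randn(4, 3, 64, 64)                # batch of coarse predictor fields
print(TinyUNet()(x).shape)                   # -> torch.Size([4, 1, 64, 64])
```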

In addition to the two commonly used CNN architectures discussed above, we briefly also mention vision transformers as a related approach. Vision transformers replace convolutional layers with self-attention layers that allow them to weigh the different parts of the spatial domain relative to each other, and often outperform architectures such as U-Nets in SR-related tasks (Alerskans et al. 2022). However, they often require vast amounts of data (Khan et al. 2022) and to our knowledge are yet to be implemented in a downscaling context. As such, vision transformers are not discussed further in this review.

b. Generative downscaling algorithms

In this review, we discuss two types of generative downscaling algorithms: GANs and diffusion models. These algorithms incorporate stochastic noise into training and inference, allowing them to generate an ensemble of predictions for a set of predictor variables.

1) GANs

GANs are a recent development in machine learning and have been widely adopted in many areas of computer vision (Isola et al. 2017; Goodfellow et al. 2014), medical imaging (e.g., Iqbal and Ali 2018), and, more recently, downscaling (J. Wang et al. 2021). In particular, there is a variation of GANs known as conditional GANs (c-GANs) that has been particularly successful in producing high-resolution images of exceptional perceptual quality (realistic looking) from low-resolution input images (the condition), a task in which other CNN architectures (e.g., U-Net) can fall short (Mirza and Osindero 2014).

In a downscaling context, the c-GAN architecture consists of two main components: a generator that aims to create a high-resolution climate field from a “low resolution” climate field as an input (the condition), and a discriminator whose goal is to classify whether the generated image is real (ground-truth high-resolution simulations) or fake (synthetic high-resolution fields generated by the generator). Here, the generator and discriminator components of a GAN are generally CNNs (in which the generator is often a U-Net). Put more simply, the training objective for GANs is to train a generator to create highly “realistic” images that can fool the discriminator into believing that they are genuine. Unlike traditional neural networks, which often optimize for loss functions such as the mean square error (MSE) or mean absolute error (MAE), GANs add an additional term often known as the adversarial loss, which adds an inherent “realism” constraint to the algorithm’s output. It is important to note that the adversarial loss in a c-GAN encourages perceptual realism over physical consistency. In the context of downscaling, this means the resulting downscaled predictions may not always adhere to physical laws.
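
The sketch below illustrates, in PyTorch, how the adversarial and content terms are typically combined in such a setup. The `generator` and `discriminator` networks, the discriminator's (condition, image) call signature, and the content weighting are assumptions for illustration rather than a specific published configuration.

```python
# Sketch of a c-GAN objective for downscaling: the generator is trained with a
# content term (here MAE) plus an adversarial term, while the discriminator
# learns to separate real from generated high-resolution fields.
import torch
import torch.nn.functional as F

def cgan_losses(generator, discriminator, lo_res, hi_res, lambda_content=100.0):
    fake = generator(lo_res)

    # Discriminator: real pairs -> 1, generated pairs -> 0
    d_real = discriminator(lo_res, hi_res)
    d_fake = discriminator(lo_res, fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the discriminator while staying close to the target field
    g_adv = F.binary_cross_entropy_with_logits(discriminator(lo_res, fake),
                                               torch.ones_like(d_fake))
    g_loss = g_adv + lambda_content * F.l1_loss(fake, hi_res)
    return g_loss, d_loss
```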

2) Diffusion models

Diffusion models are generative models relying on Markov chains to model the incremental transition of low- to high-resolution image pairs (Ho et al. 2021). They have emerged as a promising technique for superresolution applications that are more focused on quality than inference/prediction time. Consistent with their success in the physical sciences (Wan et al. 2023), and weather forecasting (Gao et al. 2023; Hatanaka et al. 2023; Leinonen et al. 2023) they have already been applied to downscale precipitation (Addison et al. 2022), winds, temperature, and radar reflectivity (Mardani et al. 2023).

Training a diffusion model (which often has a U-Net architecture) begins with a dataset comprising low- and high-resolution image pairs. The high-resolution data are progressively obscured with increasing noise levels, and the model’s objective is to predict (and thereby reverse) each step of this diffusion process, conditioned on the low-resolution input. Diffusion models are a promising research avenue thanks to their training stability in comparison with GANs, which have been known to suffer from an issue known as “mode collapse” (Goodfellow et al. 2014). Diffusion models have also outperformed GANs in image synthesis (generating images of high perceptual quality) (Dhariwal and Nichol 2021; Ho et al. 2021). However, important challenges remain, such as long training and inference times.
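
For concreteness, the sketch below shows one training step of a DDPM-style conditional diffusion model for super-resolution, under common simplifying assumptions: a linear noise schedule and a hypothetical `denoiser` network that takes the noisy target, the upsampled low-resolution condition, and the step index.

```python
# One training step of a conditional diffusion model (DDPM-style) for
# super-resolution: the high-resolution target is corrupted with noise at a
# random diffusion step, and the network learns to predict that noise.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative noise schedule

def diffusion_training_step(denoiser, hi_res, lo_res_upsampled):
    b = hi_res.shape[0]
    t = torch.randint(0, T, (b,))                 # random diffusion step per sample
    a_bar = alpha_bar[t].view(b, 1, 1, 1)
    noise = torch.randn_like(hi_res)
    noisy = a_bar.sqrt() * hi_res + (1.0 - a_bar).sqrt() * noise   # forward (noising) process
    pred_noise = denoiser(torch.cat([noisy, lo_res_upsampled], dim=1), t)
    return F.mse_loss(pred_noise, noise)          # denoising objective
```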

4. Recent advances in observational downscaling algorithms

This section begins by evaluating the performance of computer-vision algorithms against traditional observational downscaling methods (section 4a) and evaluating the performance across different computer-vision architectures (section 4b). It then highlights recent advances in loss function and constraint development and their impact on algorithm performance (section 4c). Last, it discusses recent research concerning the out-of-distribution performance of computer-vision downscaling algorithms in both historical and future climate scenarios (section 4d).

a. Evaluation against traditional downscaling methods under present-day climate conditions

Recent applications of computer-vision algorithms in a PP context have shown clear advantages over traditional ML and statistical downscaling methods, in terms of reducing historical biases over an independent observational period (Baño-Medina et al. 2020; Miao et al. 2019; Pan et al. 2019; Quesada-Chacón et al. 2022; Rampal et al. 2022a; Sun and Lan 2021). CNNs have shown benefits over traditional empirical downscaling methods across North America (Pan et al. 2019), South America (Balmaceda-Huarte et al. 2023; Balmaceda-Huarte and Bettolli 2022), Europe (Baño-Medina et al. 2020; Quesada-Chacón et al. 2022), China (Miao et al. 2019; Sun and Lan 2021), and New Zealand (Rampal et al. 2022a). Overall, this highlights that ML algorithms are capable of learning complex regional climate processes and improving on traditional downscaling approaches in a wide variety of climatic regions.

Similarly, SR downscaling algorithms have also been shown to outperform traditional observational downscaling approaches such as bias correction and spatial disaggregation (BCSD) over an independent observational period (Oyama et al. 2023; Sha et al. 2020; Vandal et al. 2017). Since SR downscaling was introduced by Vandal et al. (2017), there have been numerous implementations across different regions of the world (Dujardin and Lehning 2022; Iotti et al. 2022; Jiang et al. 2021; G. Liu et al. 2023; Liu et al. 2022; Liu et al. 2020; Mishra Sharma and Mitra 2022; Sha et al. 2020; Stengel et al. 2020; Wu et al. 2021) and for downscaling different variables, including temperature (e.g., Sha et al. 2020), solar radiation (Stengel et al. 2020), and wind speed (Höhlein et al. 2020; Stengel et al. 2020). There are several commonly used strategies for training SR downscaling algorithms. The first involves training distinct algorithms, each designed to incrementally double the resolution (Vandal et al. 2017). The second is where a single algorithm is trained to increase the resolution of the climate field by a desired factor (e.g., Harris et al. 2022; Price and Rasp 2022; Vosper et al. 2023).

b. Performance evaluation across different computer-vision algorithms

In both PP and SR settings, some algorithms have demonstrated more success and skill than others. In a PP setting, end-to-end CNN architectures such as U-Net (Fig. 4c) consisting of skip connection layers appear to have similar (González-Abad et al. 2023a) or better performance (e.g., mean absolute error metrics) when benchmarked against standard CNN architectures (Fig. 4b) (Quesada-Chacón et al. 2022) under present-day climate conditions. In an SR setting, several studies have found better performance from end-to-end CNN architectures with residual layers or blocks and skip connections than from those without (e.g., Izumi et al. 2022; Liu et al. 2020; Mishra Sharma and Mitra 2022). Using residual layers and skip connections can contribute to improved algorithm learning and generalization by effectively addressing issues like vanishing gradients, especially in deeper networks (He et al. 2016; Ronneberger et al. 2015). End-to-end CNN architectures (e.g., U-Net) can be improved further by incorporating recurrent layers that enable the algorithms to learn climate relationships across multitemporal and spatial scales (Adewoyin et al. 2021; Leinonen et al. 2021).

Recently, there have been several studies that have implemented c-GANs in the context of SR downscaling (Iotti et al. 2022; Izumi et al. 2022; Oyama et al. 2023; Saha and Ravela 2022; Vosper et al. 2023), though only a few studies perform comparisons with other nongenerative computer-vision algorithms (e.g., U-Nets). For example, Vosper et al. (2023) highlighted that c-GANs can more accurately capture extreme events and their spatial structure for variables such as precipitation, when compared with a U-Net architecture of similar complexity to the c-GAN’s generator. Izumi et al. (2022) found that while c-GANs often show poorer performance in metrics like root-mean-square error, they excel in metrics related to perceptual quality as compared with other end-to-end CNN architectures of similar complexity. In addition, in several weather forecasting downscaling applications, c-GANs have outperformed nongenerative CNN architectures when evaluated against commonly used forecasting metrics (Harris et al. 2022; Leinonen et al. 2021; Price and Rasp 2022; Ravuri et al. 2021).

c. Incorporating constraints and customized loss functions

In addition to the algorithm and architecture used, the selection of a loss function appears crucial in capturing extremes and local-scale variability. One approach is to infer parametric distributions of a target variable conditioned on a particular atmospheric state (Baño-Medina et al. 2020; Cannon 2008; Carreau and Vrac 2011; Rampal et al. 2022a; Sun and Lan 2021). This has been found to be particularly relevant for precipitation, for which the log-likelihood of a Bernoulli–gamma (BG) distribution can be used as a loss function. For example, Rampal et al. (2022a) found a 30% lower error in the 90th quantile of rainfall by using the log-likelihood of a BG distribution relative to conventional mean-square-error functions in a CNN.
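
A minimal sketch of such a loss is given below: the network is assumed to output a rain probability together with the shape and scale of a gamma distribution at each grid cell, and the loss is the negative BG log-likelihood of the observed precipitation. The parameterization details (epsilon terms, wet-day definition) are illustrative choices rather than those of any specific study.

```python
# Bernoulli-gamma negative log-likelihood sketch for precipitation downscaling.
import torch

def bernoulli_gamma_nll(p, shape, scale, y, eps=1e-6):
    """p, shape, scale, y: tensors of the same (broadcastable) shape; y >= 0."""
    wet = (y > 0).float()
    # log-probability of the occurrence (rain / no rain) model
    log_occurrence = wet * torch.log(p + eps) + (1.0 - wet) * torch.log(1.0 - p + eps)
    # gamma log-density for rain amounts (only counted where wet == 1)
    log_amount = ((shape - 1.0) * torch.log(y + eps) - y / scale
                  - shape * torch.log(scale) - torch.lgamma(shape))
    return -(log_occurrence + wet * log_amount).mean()

# Example with dummy network outputs (positive shape/scale via softplus in practice)
y = torch.relu(torch.randn(8, 1, 32, 32))                 # pseudo precipitation field
p = torch.sigmoid(torch.randn_like(y))
shape = torch.nn.functional.softplus(torch.randn_like(y)) + 1e-3
scale = torch.nn.functional.softplus(torch.randn_like(y)) + 1e-3
print(bernoulli_gamma_nll(p, shape, scale, y))
```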

There are several other approaches that can enhance the ability of these downscaling algorithms to capture the tails of the distribution (i.e., extreme events). For example, data augmentation artificially oversamples extreme events in the training data to improve downscaled predictions of extreme events (F. Wang et al. 2021), as sketched below. Techniques such as customized loss functions that emphasize extreme events (e.g., Lopez-Gomez et al. 2023; Bailie et al. 2024) and variable transformations beyond the BG distribution (e.g., Boulaguiem et al. 2022) also contribute to this improvement.
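
The sketch below shows one simple form of such augmentation: training days whose domain-maximum precipitation exceeds a high quantile are duplicated so the model sees extreme events more often. The threshold and duplication factor are arbitrary, illustrative choices.

```python
# Simple extreme-event oversampling sketch on synthetic training data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 3, 32, 32))                   # predictor fields
y = rng.gamma(2.0, 2.0, size=(2000, 32, 32))             # target precipitation

daily_max = y.max(axis=(1, 2))
extreme_days = np.where(daily_max > np.quantile(daily_max, 0.95))[0]
repeat = 4                                               # oversampling factor
X_aug = np.concatenate([X] + [X[extreme_days]] * repeat)
y_aug = np.concatenate([y] + [y[extreme_days]] * repeat)
print(X.shape, X_aug.shape)
```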

The loss function and algorithm architecture (e.g., layers) can be tailored to include physical or statistical constraints, such as conservation equations, further enhancing their adaptability and performance in downscaling tasks (Geiss et al. 2022; Geiss and Hardin 2023; Harder et al. 2023; Hess et al. 2022). Incorporating statistical and physical constraints into ML algorithms involves modifying the learning process to ensure adherence to known physical laws or statistical relationships, thereby improving the algorithm’s performance and interpretability in various applications. While challenging, this has proven increasingly beneficial in many areas of climate science such as the development of parameterizations to represent subgrid processes in climate models (Beucler et al. 2021; Brenowitz et al. 2020; Brenowitz and Bretherton 2018; Rasp et al. 2018; Yuval and O’Gorman 2020) and in hydrological modeling (Shen et al. 2023; Kashinath et al. 2021; Feng et al. 2023).

There are two commonly used strategies to incorporate statistical or physical constraints, typically referred to as “soft” or “hard” constraints (Harder et al. 2022). Soft constraints are incorporated into the loss function during training of the ML algorithm (Beucler et al. 2021). For example, the loss functions can be customized to conserve spatial averages, allowing the ML algorithm to produce high-resolution predictions consistent with the corresponding low-resolution data (e.g., Lempitsky et al. 2018). This constraint may be useful in the context of SR downscaling as it ensures consistency between high-resolution and coarse-resolution fields (e.g., precipitation) and may enable the ML algorithm’s outputs to preserve the climate change signal from its parent GCM.
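
A minimal sketch of such a soft constraint is shown below: the usual pixel-wise loss is augmented with a term penalizing disagreement between the coarse-grained prediction and the low-resolution input. The upscaling factor and weighting are illustrative assumptions.

```python
# "Soft" conservation constraint: penalize mismatches between the coarse-grained
# prediction and the low-resolution field, nudging the network to conserve
# spatial averages.
import torch
import torch.nn.functional as F

def soft_constrained_loss(pred_hi, target_hi, input_lo, factor=8, weight=0.1):
    data_term = F.mse_loss(pred_hi, target_hi)
    pred_coarse = F.avg_pool2d(pred_hi, kernel_size=factor)      # coarse-grain the prediction
    conservation_term = F.mse_loss(pred_coarse, input_lo)        # match the low-res field
    return data_term + weight * conservation_term
```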

On the other hand, hard constraints are integrated into ML algorithm architectures (Harder et al. 2022) and have been effectively applied to downscale atmospheric water content (Harder et al. 2023) and atmospheric chemistry simulations using SR, improving training time, performance, and physical consistency (Geiss et al. 2022; Geiss and Hardin 2023). It is also possible to develop more bespoke hard constraints. Hess et al. (2022) used a multiplicative layer conserving annual precipitation amount, in the context of bias correcting CMIP6 GCMs. Here, the constraint enhanced the algorithm’s ability to extrapolate to future climate scenarios. In addition, an ML algorithm’s output may lack a consistently positive diurnal temperature range, sometimes resulting in instances where the downscaled daily temperature minimum exceeds the daily temperature maximum. Enforcing hard physical constraints has been suggested to improve the preservation of the diurnal temperature range (González-Abad et al. 2023b).
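
The sketch below illustrates one possible multiplicative renormalization layer in the spirit of the hard constraints discussed above (not the exact formulation of any cited study): the raw output is rescaled within each coarse block so that its block average exactly matches a low-resolution reference field, which is suitable for non-negative variables such as precipitation. The coarsening factor is illustrative.

```python
# "Hard" constraint layer: rescale the high-resolution output so its block
# averages exactly match the low-resolution reference field.
import torch
import torch.nn.functional as F

class MultiplicativeConstraint(torch.nn.Module):
    def __init__(self, factor=8, eps=1e-8):
        super().__init__()
        self.factor, self.eps = factor, eps

    def forward(self, raw_hi, input_lo):
        block_mean = F.avg_pool2d(raw_hi, self.factor)              # current coarse-cell means
        ratio = input_lo / (block_mean + self.eps)                  # required rescaling per cell
        ratio_hi = F.interpolate(ratio, scale_factor=self.factor,
                                 mode="nearest")                    # broadcast to the fine grid
        return raw_hi * ratio_hi                                    # block means now match input_lo

constraint = MultiplicativeConstraint(factor=8)
raw = torch.rand(2, 1, 64, 64)            # non-negative raw network output
lo = torch.rand(2, 1, 8, 8)               # low-resolution reference field
out = constraint(raw, lo)
print(torch.allclose(F.avg_pool2d(out, 8), lo, atol=1e-5))
```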

d. Out-of-distribution evaluation in historical and future climates

Despite many performance and architectural improvements of computer-vision algorithms, most studies focus on evaluation in current climate conditions only, using reanalysis data for cross validation (e.g., Baño-Medina et al. 2020; Miao et al. 2019; Pan et al. 2019; Rampal et al. 2022a), which is often described as an insufficient condition for an algorithm to adapt to an unobserved climate scenario from a GCM (historical and future climates) (Maraun et al. 2015).

As introduced earlier, domain adaptation assesses the algorithm’s capacity to apply relationships learned from observational data to unobserved climates within a GCM (e.g., historical and future simulations). In many cases, the distributions of climate variables in future climate scenarios are out of distribution relative to the observational data the algorithm is trained on (Maraun et al. 2010).

When evaluating the out-of-distribution performance of computer-vision algorithms over the historical period of simulations driven by free-running GCMs, it is only possible to compare with historical observational data using distribution-based statistics/indices such as the climatological mean (Balmaceda-Huarte et al. 2023; Baño-Medina et al. 2021, 2022; González-Abad et al. 2023a). As for out-of-distribution evaluation in future climates, a common approach is to compare the downscaled climate change signals with those generated by GCMs or an ensemble of different RCMs (see section 8 for more information). Here, the climate change signal is often defined as the absolute or percentage change in a variable of interest in a future period (e.g., 2070–99) relative to a historical period (1985–2014). The climate change signal is generally computed on a seasonal, monthly, or annual basis. Section 8 provides a detailed discussion on the evaluation methods for observational downscaling algorithms.
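
For reference, the climate change signal defined above can be computed in a few lines; the sketch below uses a synthetic daily precipitation field in place of real downscaled output.

```python
# Seasonal percentage climate change signal: future-period mean relative to a
# historical-period mean, per grid cell.
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("1985-01-01", "2099-12-31", freq="D")
rng = np.random.default_rng(5)
pr = xr.DataArray(rng.gamma(2.0, 2.0, size=(time.size, 4, 4)),
                  dims=("time", "y", "x"), coords={"time": time}, name="pr")

hist = pr.sel(time=slice("1985", "2014")).groupby("time.season").mean("time")
future = pr.sel(time=slice("2070", "2099")).groupby("time.season").mean("time")
signal_pct = 100.0 * (future - hist) / hist     # seasonal % change per grid cell
```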

Several studies have highlighted that bias adjusting the GCM predictors prior to performing downscaling in a PP context leads to better performance over the historical period of simulation (Balmaceda-Huarte et al. 2023; Baño-Medina et al. 2021, 2022; Vrac and Ayar 2017). Bias adjustment matches the seasonal cycle of the predictor fields in the GCM with the observational reference dataset while preserving the GCM’s future climate change signal (see Vrac and Ayar 2017). After bias adjustment, the CNN algorithm’s downscaled output has smaller biases than traditional observational downscaling algorithms in its climatological mean, as well as in other rainfall and temperature characteristics, when compared with observational data during the historical simulation period. When evaluated under future climate conditions, CNNs have been found to generalize better than state-of-the-art linear models to climate change scenarios for temperature (Balmaceda-Huarte et al. 2023; Baño-Medina et al. 2021) and precipitation (Baño-Medina et al. 2021), as they have stronger agreement with the climate change signal within the parent GCM. This better generalization of CNNs is thought to arise from their improved capacity to learn robust and climate-invariant relationships from data, as opposed to traditional observational downscaling algorithms (Baño-Medina et al. 2021).
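
The sketch below illustrates one simple form of predictor bias adjustment consistent with the description above: the GCM's historical monthly climatology is replaced with the reanalysis climatology, so the seasonal cycle matches the reference while the GCM's own anomalies (and hence its change signal) are retained. This is a simplified stand-in rather than the specific method of Vrac and Ayar (2017); all data are synthetic.

```python
# Simple seasonal-cycle bias adjustment of GCM predictor fields.
import numpy as np
import pandas as pd
import xarray as xr

rng = np.random.default_rng(6)
hist_time = pd.date_range("1985-01-01", "2014-12-31", freq="D")
fut_time = pd.date_range("2070-01-01", "2099-12-31", freq="D")

def synthetic(times, offset):
    return xr.DataArray(rng.normal(offset, 1.0, size=(times.size, 8, 8)),
                        dims=("time", "y", "x"), coords={"time": times})

reanalysis = synthetic(hist_time, 0.0)       # reference predictor field
gcm_hist = synthetic(hist_time, 1.5)         # biased GCM, historical period
gcm_future = synthetic(fut_time, 2.5)        # biased GCM, future scenario

gcm_clim = gcm_hist.groupby("time.month").mean("time")
ref_clim = reanalysis.groupby("time.month").mean("time")

# Replace the GCM's monthly climatology with the reanalysis climatology,
# leaving the GCM's anomalies (and change signal) intact.
correction = ref_clim - gcm_clim
adjusted_future = gcm_future.groupby("time.month") + correction
```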

It is important to note that SR downscaling approaches have not yet been properly assessed for robustness across historical and future climate conditions from GCMs. This is particularly important as SR downscaling approaches tend to rely solely on coarse-resolution surface-level fields from GCMs as predictors. These fields can have large biases relative to observations (e.g., Dai 2006), and may not properly account for future changes in circulation from the host GCM, both of which may negatively impact the reliability of future projections.

5. Recent advances in regional climate model emulation

This section first discusses recent ML advances in RCM emulator algorithms, and their ability to resolve local climate extremes (section 5a). Then, it describes research concerning out-of-distribution evaluation of RCM emulators (section 5b).

a. Evaluation of RCM emulation algorithms

A wide variety of different algorithms have been tested for RCM emulation including multiple linear regression (Holden et al. 2015), multilayer perceptron (Chadwick et al. 2011; Hobeichi et al. 2023; Nishant et al. 2023), statistical analogs (Boé et al. 2023), and normalizing flows (Groenke et al. 2020). Several studies using relatively simple statistical and ML algorithms have effectively emulated RCMs, showing skill in reproducing future climate projections when applied out of distribution (Boé et al. 2023; Chadwick et al. 2011). These studies highlight that complexity is not always a prerequisite for effectiveness, and the interpretability and flexibility of these methods can sometimes surpass more complex alternatives. Most important, these experiments not only provide benchmarks for evaluating more sophisticated algorithms but also offer insights into training strategies that could enhance the efficiency of deep learning algorithms [discussed in sections 6a–6c], which typically require more computational resources.

Recently, there has been a transition to using computer-vision algorithms for RCM emulation. These include CNNs (Babaousmail et al. 2021; Baño-Medina et al. 2023) and CNN-based architectures such as U-Net (Doury et al. 2023; van der Meer et al. 2023), c-GANs (Miralles et al. 2022; J. Wang et al. 2021), and recently diffusion models (Addison et al. 2022; Mardani et al. 2023). For the above implementations, various variables have been emulated including seasonal near-surface temperatures, precipitation, and evapotranspiration.

Using the imperfect framework, Mardani et al. (2023) successfully trained a diffusion model to downscale ERA5 reanalysis data (25 km) to a 2.2-km resolution, by training on ERA5-forced simulations from the Weather Research and Forecasting (WRF) Model over Taiwan. They demonstrated that diffusion models surpassed traditional ML algorithms like random forests and U-Net architectures in performance, measured by mean absolute error, in comparison with ground-truth dynamically downscaled simulations. In addition, J. Wang et al. (2021) found that while c-GANs may underperform in comparison with end-to-end CNNs for instantaneous metrics (i.e., mean absolute error), they excel in accurately predicting extreme weather events. While other studies have not evaluated computer-vision algorithms against traditional ML algorithms, insights from similar analyses in PP and SR downscaling contexts, as discussed in section 4a, suggest that computer-vision architectures such as c-GANs and U-Nets will likely outperform traditional ML algorithms in an RCM emulation context.

Overall, similar to recent advances in PP and SR downscaling approaches, c-GANs and diffusion models appear to show promise in an RCM emulation context, particularly in their ability to capture fine details and extreme events (Mardani et al. 2023; Miralles et al. 2022; J. Wang et al. 2021).

b. Out-of-distribution evaluation of RCM emulators

1) Training on historical and future climates

An RCM emulator needs to be generalizable enough to downscale across a wider range of climate scenarios (across the GCM–RCM matrix). Recent research emphasizes the importance of training across diverse climates to improve future climate extrapolation and enhance the reproduction of the climate change signal (Babaousmail et al. 2021; Boé et al. 2023; Chadwick et al. 2011; Doury et al. 2023). Doury et al. (2023) found that an emulator trained on only the historical period of the European domain of CORDEX (EURO-CORDEX) simulations (CNRM-ALADIN63 RCM) for near-surface temperature at 12 km underestimated end-of-century warming (2080–99) by 1.3°C when applied out of distribution to a representative concentration pathway (RCP) 4.5 scenario. Further research, including detailed studies on extreme events and their driving processes (e.g., cyclones; Vosper et al. 2023) is needed to further explore the extent to which RCM emulators can capture climate change signals as effectively as dynamical RCMs.

2) Imperfect and perfect framework

It remains uncertain which framework is optimal for applying broadly across the GCM–RCM matrix, as the choice of training framework can affect the algorithm’s out-of-distribution performance (Boé et al. 2023). Several studies have shown that the imperfect framework outperforms the perfect framework in accurately capturing future climate signals in out-of-distribution RCM simulations for variables such as surface ice mass balance (van der Meer et al. 2023), temperature, and rainfall (Boé et al. 2023). However, the choice of GCM–RCM simulation pair used for training can affect the performance of the emulator and its ability to reproduce the climate change signal (Baño-Medina et al. 2023; Boé et al. 2023).

In comparison, emulators trained in the perfect framework show little dependence on the driving GCM they are trained on (Baño-Medina et al. 2023; Boé et al. 2023). When applied directly to GCMs, they appear to better capture the climate change signal of the host GCM rather than that of the ground-truth RCM simulations (Boé et al. 2023; Doury et al. 2023). However, several studies have highlighted that when emulators are trained in the perfect framework and applied out of distribution to GCM predictor fields, they can underestimate future climate change signals relative to ground-truth RCM simulations (Doury et al. 2023; van der Meer et al. 2023).

One prominent issue with training in the imperfect framework is potential transferability/portability limitations, as the resulting emulator may only be useful for emulating the GCM it was trained on. Additionally, the imperfect framework, though skilled at reproducing certain variables such as end-of-century surface ice mass balance (van der Meer et al. 2023), may be challenging to train for other variables such as rainfall or wind (Miralles et al. 2022). One source of such difficulty lies in the configuration of the RCM simulation, such as whether spectral nudging is applied, which controls how closely the RCM synoptic fields adhere to the host GCM and thus affects the spatiotemporal consistency between RCM and GCM simulations (Evans et al. 2014; Gibson et al. 2023). On the other hand, while an emulator trained in the perfect framework is portable, it operates in a different capacity to a true RCM.

Overall, there are advantages and disadvantages to both training frameworks, and the out-of-distribution performance (across the GCM–RCM matrix) of each framework may depend on factors such as the downscaled variable, the region, and the RCM configuration and its biases (Boé et al. 2023).

6. Strategies for enhancing out-of-distribution performance

a. Observational downscaling

This section first introduces pseudoreality experiments for out-of-distribution testing of observational downscaling algorithms [section 6a(1)]. Then it discusses how techniques such as transfer learning can improve out-of-distribution performance [section 6a(2)], particularly when there is limited observational data.

1) Pseudoreality experiments for out-of-distribution testing

Our discussion has emphasized the numerous challenges in ensuring that observational downscaling algorithms can adapt outside their training distribution to future unobserved climates. While we explored strategies like bias adjustment of GCM predictor fields to enhance extrapolation, these methods alone are insufficient to overcome the limitations of ML algorithms in future climate scenarios. The difficulty in assessing the out-of-distribution performance of observational downscaling algorithms in a climate change context lies in the inability to directly test against the future.

Although direct assessment against future observations is not feasible, it is possible to conduct out-of-distribution tests or experiments using GCM–RCM simulations instead of observations, which provide access to both past and future climates. Such experiments are known as pseudoreality experiments; they can aid in detecting potential issues in downscaling algorithms trained on observational data (Maraun et al. 2015) and are an active area of research (Balmaceda-Huarte et al. 2023; Boé et al. 2023; Charles et al. 1999; Dayon et al. 2015; Dixon et al. 2016; Gutiérrez et al. 2013; Hernanz et al. 2022a; Lanzante et al. 2018; Legasa et al. 2023; Manzanas et al. 2020; Schmith et al. 2021; Teutschbein and Seibert 2013; Vrac et al. 2007). The majority of these experiments are performed using traditional statistical approaches, with only a few focusing on computer-vision algorithms (Balmaceda-Huarte et al. 2023), although there are several relevant studies in an RCM emulation context (Baño-Medina et al. 2023; Doury et al. 2023; van der Meer et al. 2023).

With a specific pseudoreality simulation, diverse idealized experiments can be conducted to probe various issues; examples of such experiments are outlined in Maraun et al. (2015). Pseudoreality experiments, also known as “perfect model” or “model as truth” experiments, include stationarity tests and domain adaptation experiments, using the “pseudo-observed” predictands and predictors derived from the GCM–RCM in the pseudoreality. Note that the feasibility of these experiments depends on the availability of a broad range of simulations for training, which might not be possible in some regions, especially for RCM simulations. However, conducting similar experiments may also be possible using high-resolution GCM simulations such as those from the High Resolution Model Intercomparison Project (HighResMIP; Haarsma et al. 2016).

A stationarity test for a PP or SR algorithm closely resembles the perfect model framework used for RCM emulation. In this test, an ML algorithm is trained to map from a coarsened RCM simulation to the RCM itself, which serves as “pseudo-observations.” For example, an SR algorithm downscaling precipitation would only use the coarsened RCM precipitation as a predictor, whereas for PP, large-scale circulation fields (e.g., zonal wind) are selected as predictors for precipitation. It is important to highlight that this training process should closely resemble that of training an observationally based PP/SR algorithm. The trained “pseudo” PP/SR algorithm is then applied to future projections from the same GCM and tested against the corresponding downscaled RCM output, providing “ground truth” reference data in a future climate (Maraun et al. 2015); a minimal sketch of this workflow is given below. Discrepancies in downscaled outputs can shed light on adaptability conditions, as outlined in the context of RCM emulators in both Chadwick et al. (2011) and Doury et al. (2023). Their results highlighted that training only on historical simulations was insufficient for reproducing some aspects of future climate change signals.
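For illustration, the following minimal Python sketch mimics this workflow using synthetic arrays in place of real RCM output; the array shapes, the block-averaging coarsening, and the ridge-regression downscaler are illustrative assumptions rather than the configuration used in any cited study:

```python
# Minimal sketch of a pseudoreality "stationarity test": train a pseudo SR
# algorithm on coarsened historical RCM data, apply it to the coarsened future
# RCM, and compare climate change signals. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def coarsen(field, factor=4):
    """Block-average a (time, y, x) field to mimic a coarse GCM-scale grid."""
    t, ny, nx = field.shape
    return field.reshape(t, ny // factor, factor, nx // factor, factor).mean(axis=(2, 4))

# Synthetic "RCM" daily precipitation for a historical and a future period.
rcm_hist = rng.gamma(shape=2.0, scale=3.0, size=(3650, 32, 32))
rcm_future = rcm_hist * 1.07 + rng.normal(0, 0.5, size=rcm_hist.shape)  # crude future signal

# Train the pseudo SR algorithm: coarsened RCM -> RCM, historical period only.
X_hist = coarsen(rcm_hist).reshape(rcm_hist.shape[0], -1)
y_hist = rcm_hist.reshape(rcm_hist.shape[0], -1)
emulator = Ridge(alpha=1.0).fit(X_hist, y_hist)

# Apply out of distribution to the coarsened future RCM and compare the
# emulated climate change signal against the "ground truth" RCM signal.
X_future = coarsen(rcm_future).reshape(rcm_future.shape[0], -1)
pred_future = emulator.predict(X_future).reshape(rcm_future.shape)
signal_true = rcm_future.mean(axis=0) - rcm_hist.mean(axis=0)
signal_emul = pred_future.mean(axis=0) - rcm_hist.mean(axis=0)
print("mean signal (RCM, emulator):", signal_true.mean(), signal_emul.mean())
```

In practice, the same comparison would be made with a full ML downscaling architecture and physically meaningful predictors, but the structure of the test (train on the coarsened historical RCM, apply to the coarsened future RCM, compare climate change signals) is the same.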

Idealized domain-adaptability experiments are similar to stationarity tests but focus more on out-of-distribution adaptability across a wide range of combinations in the GCM–RCM matrix. For example, a pseudo PP/SR algorithm could be trained in the perfect or imperfect model framework on a reanalysis-forced or historical RCM simulation from publicly available datasets such as CORDEX. To assess its adaptability, the algorithm can then be applied to a range of coarsened GCMs and their corresponding RCMs, with evaluation carried out against the “ground truth” derived from the RCM simulations.

2) Enhancing out-of-distribution performance with transfer learning

There are several challenges in using observational data. First, some observational records are not sufficiently long, which can affect algorithm performance (Rampal et al. 2022a). Second, as mentioned in section 1, the predictor space of future climate scenarios is often significantly out of distribution relative to the observational training data.

To tackle the issue of limited observational data, we can use a hybrid downscaling strategy that integrates training on both physical simulations and observational data (Materia et al. 2023). One example of a hybrid approach is transfer learning, a technique that transfers knowledge between two different but related tasks (Pan and Yang 2010; Weiss et al. 2016). Not only can transfer learning improve algorithm accuracy, but it also promotes efficiency, robustness, and generalizability (Goodfellow et al. 2016; Weiss et al. 2016). Transfer learning techniques have been successful in weather and seasonal forecasting (Gibson et al. 2021; Ham et al. 2019; Nguyen et al. 2023; Rasp and Thuerey 2021). Recent advancements include the ClimaX foundation algorithm (Nguyen et al. 2023), which was trained on both ERA5 reanalysis (Hersbach et al. 2020) and CMIP6 climate model projections (Eyring et al. 2016) for various predictive tasks. The pretrained ClimaX foundation algorithm was subsequently fine-tuned on more specific tasks, including observational downscaling.

Drawing insights from ClimaX (Nguyen et al. 2023), RCM simulations can serve as an effective preconditioning stage for ML-based observational downscaling algorithms. First, an algorithm can be “pretrained” to perform “pseudo” PP/SR using predictor and predictand (pseudo-observation) variables from publicly available datasets such as CORDEX. In this phase, the algorithm is exposed to a very large training dataset including simulations of historical and future climates. The second phase, known as fine-tuning, retrains the algorithm on reanalysis or observational datasets in the historical period, using either a PP or SR approach. Here, the pretrained weights serve as the initial state for the second phase of training, and part of the network is often kept frozen. This pretraining approach can enhance the algorithm’s performance and future extrapolative capacity by exposing it to a larger and more diverse range of weather events than those available in the historical record. Ham et al. (2019) found better performance in forecasting El Niño–Southern Oscillation when the algorithm was first pretrained on CMIP5 simulations, with subsequent fine-tuning on observations. They found that the skill of the algorithm increased with the amount of CMIP5 data used for pretraining.
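As a hedged illustration of this two-phase strategy, the following PyTorch sketch pretrains a small network on abundant simulated pairs and then fine-tunes only its output head on scarcer “observational” pairs; the architecture, tensor shapes, and choice of which layers to freeze are hypothetical and would differ in a real application:

```python
# Minimal pretrain/fine-tune sketch: phase 1 uses plentiful simulated pairs
# (stand-ins for CORDEX-style data), phase 2 fine-tunes on scarcer
# "observational" pairs with the shared encoder frozen.
import torch
import torch.nn as nn

class SimpleDownscaler(nn.Module):
    def __init__(self, n_predictors=5, coarse_size=16, hr_pixels=64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared feature extractor
            nn.Conv2d(n_predictors, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(                          # high-resolution output head
            nn.Flatten(), nn.Linear(32 * coarse_size * coarse_size, hr_pixels))

    def forward(self, x):
        return self.head(self.encoder(x))

def train_step(model, x, y, optimizer, loss_fn=nn.MSELoss()):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = SimpleDownscaler()

# Phase 1: "pretraining" on abundant simulated predictor/predictand pairs.
x_sim, y_sim = torch.randn(16, 5, 16, 16), torch.randn(16, 64 * 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train_step(model, x_sim, y_sim, opt)

# Phase 2: fine-tuning on scarcer "observational" pairs, encoder frozen.
for p in model.encoder.parameters():
    p.requires_grad = False
x_obs, y_obs = torch.randn(8, 5, 16, 16), torch.randn(8, 64 * 64)
opt_ft = torch.optim.Adam(model.head.parameters(), lr=1e-4)
train_step(model, x_obs, y_obs, opt_ft)
```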

b. RCM emulation

1) A hybrid training framework

As discussed in section 2c, two training frameworks exist for RCM emulators: the “perfect” (coarsened RCM–RCM relationship) and the “imperfect” (GCM–RCM relationship). Though the perfect model framework is simpler to train, it falls short of mimicking an RCM’s functionality as closely as the imperfect model does. However, the imperfect model may be too challenging for ML to learn directly, because of atmospheric field inconsistencies between GCMs and RCMs (Baño-Medina et al. 2023; Doury et al. 2023; Miralles et al. 2022). Training strategies such as transfer learning can reduce the complexity of the problem by dividing it into several simpler tasks. Miralles et al. (2022) introduced a two-stage training process to emulate high-resolution wind fields over Switzerland at a resolution of 1.1 km from ERA5 predictor fields (25 km). They first pretrained their emulator in the perfect model framework, which is a simpler problem for an ML algorithm to learn, and subsequently fine-tuned it in the imperfect framework.

2) Improving the generalizability of RCM emulators

The generalizability of an RCM emulator refers to whether an emulator trained on one GCM simulation can be applied effectively to downscale other GCMs, including those with a notably different equilibrium climate sensitivity, across different regions, variables, and RCMs. Certain challenges such as resolution disparities among GCMs and different internal model physics may further complicate this task.

One potential strategy to improve the generalizability of RCM emulators in both frameworks is to train or fine-tune on multiple RCM simulations (or a reanalysis-forced simulation), each forced by a different GCM. The existing literature tends to focus on training only on simulations forced by the same GCM. In particular, for the imperfect training framework, training an emulator on simulations forced by multiple GCMs could reduce the “model specific” dependency that arises when it is trained on one GCM only, and thus improve its generalizability. However, it is also possible that dedicated emulators may be required for downscaling each GCM if the model dependency in the imperfect framework is too strong. Additionally, the perfect framework, with or without bias correction, may also continue to be a viable emulation framework.

In addition, a strategy proposed by Hobeichi et al. (2023) involves a hybrid approach that combines dedicated GCM-specific emulators with dynamical downscaling using RCMs. This hybrid downscaling strategy involves choosing 10 representative years from the historical and future periods of a GCM simulation, such that the climate variability within the selected years is diverse and indicative of the entire simulation. These selected years are then dynamically downscaled using an RCM and subsequently used as training data for an RCM emulator. If the emulator proves effective, it can be applied to the remainder of the simulation, which was not dynamically downscaled. This hybrid method could cut computational costs by using dynamical downscaling selectively and sparingly; a simple sketch of one way to select representative years is given below. Dedicated emulators can also be used in a different capacity to analyze the role of internal variability in climate projections (see also section 7). As proposed by Baño-Medina et al. (2023), an RCM emulator can be trained on dynamically downscaled simulations from a single initial-condition ensemble member forced by a GCM (e.g., EC-Earth3). The emulator can then be applied across other ensemble members, allowing the role of internal variability in climate projections to be analyzed. Dedicated emulators, being GCM specific, bypass the need for considering generalizability.
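As an illustration of the year-selection step only, the following Python sketch uses k-means clustering of annual-mean fields to pick 10 representative years; the clustering criterion and synthetic data are assumptions for illustration and may differ from the selection procedure of Hobeichi et al. (2023):

```python
# Illustrative selection of representative years from a long GCM simulation
# via k-means on annual-mean fields (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_years, ny, nx = 120, 20, 30                 # hypothetical GCM annual means
annual_means = rng.normal(size=(n_years, ny, nx))
features = annual_means.reshape(n_years, -1)

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

# Pick the year closest to each cluster centroid as its representative.
representative_years = []
for k in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == k)[0]
    dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[k], axis=1)
    representative_years.append(int(members[np.argmin(dists)]))
print("years to downscale dynamically:", sorted(representative_years))
```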

c. Shared strategies for RCM emulation and observational downscaling

1) Improving the representation of extreme events

While this review has highlighted that algorithm architectures such as c-GANs and innovative loss functions show promise in better resolving local detail and capturing extreme events, further research is still required. One area for improvement in downscaling is accurately capturing the “memory” of a sequence of events. This is especially important for weather phenomena that span multiple time steps, a consideration often overlooked in the current literature; examples of important multiday weather phenomena include atmospheric rivers and cyclones. Most existing downscaling approaches train algorithms to downscale each time step independently (i.e., daily), rather than downscaling a sequence of time steps in a single instance. Incorporating this memory effect can be achieved using recurrent architectures capable of capturing temporal relationships (e.g., LSTM networks) or spatiotemporal relationships (e.g., computer-vision algorithms combined with LSTM layers) (Adewoyin et al. 2021; Leinonen et al. 2021), or by using the previous time step as a predictor in an autoregressive manner, a technique applied in computer-vision applications (e.g., Chen et al. 2018); the autoregressive approach is sketched below.
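The autoregressive option can be sketched compactly: the previous prediction is simply appended as an extra input channel at the next step. In the minimal PyTorch example below, the network, shapes, and the use of a common resolution for inputs and outputs are illustrative simplifications only:

```python
# Sketch of autoregressive downscaling: the previous prediction is fed back as
# an extra channel so the algorithm retains "memory" of the preceding time step.
import torch
import torch.nn as nn

class ARDownscaler(nn.Module):
    def __init__(self, n_predictors=4):
        super().__init__()
        # +1 channel carries the previous step's prediction
        self.net = nn.Sequential(
            nn.Conv2d(n_predictors + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, predictors, previous):
        return self.net(torch.cat([predictors, previous], dim=1))

model = ARDownscaler()
T, H, W = 5, 32, 32
sequence = torch.randn(T, 1, 4, H, W)          # predictors for T daily time steps
state = torch.zeros(1, 1, H, W)                # no memory at the first step
outputs = []
for t in range(T):
    pred = model(sequence[t], state)
    outputs.append(pred)
    state = pred.detach()                      # becomes the next step's memory
```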

2) Physical consistencies of downscaled outputs

ML algorithms often do not adhere to physical laws, leading to inconsistencies in their outputs. This issue of physical consistency can be problematic in downscaling, both in terms of building trust in the algorithm (see section 8c) and because some end users may need data that adhere to physical laws. Within a climate change context, an algorithm whose output maintains physical/statistical consistency and aligns with the climate change signal of its host GCM may be more beneficial than complex architectures that excel only in the present-day climate. In the context of downscaling, it is often challenging to develop physical constraints because of the high dimensionality of the training data. However, alternative strategies such as the development of soft and hard statistical constraints may be promising avenues of research. Another possibility is multitask learning, which was found to improve interpretability and performance by sharing algorithm parameters between tasks (Wang et al. 2023). Multitask learning involves training an algorithm to perform two tasks that share the same latent space (Ruder 2017), as illustrated in Fig. 5; a minimal sketch follows below. Through multitask learning, it is possible to enforce a constraint for one task (e.g., a conservation law), even if the constraint is not applied to the other. Such constraints have been proposed in a weather forecasting context to improve algorithm stability (Watt-Meyer et al. 2023). Multitask learning can also be implemented with hard constraints.
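The following PyTorch sketch illustrates the multitask idea of Fig. 5 together with a soft statistical constraint: a shared encoder feeds both a reconstruction head and a downscaling head, and an additional loss term nudges the spatial mean of the downscaled field toward the coarse input. The architecture and loss weighting are illustrative assumptions, not the configuration of any cited study:

```python
# Multitask training with a shared latent space plus a soft "conservation"
# penalty on the downscaling task.
import torch
import torch.nn as nn

class MultiTaskDownscaler(nn.Module):
    def __init__(self, n_channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, n_channels, 3, padding=1)     # task i: autoencoder
        self.regressor = nn.Sequential(                            # task ii: downscaling
            nn.Upsample(scale_factor=4, mode="bilinear"),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        z = self.encoder(x)                                        # shared latent space
        return self.decoder(z), self.regressor(z)

model = MultiTaskDownscaler()
coarse = torch.randn(8, 1, 16, 16)     # coarse-resolution rainfall predictor
fine = torch.randn(8, 1, 64, 64)       # high-resolution rainfall target
recon, downscaled = model(coarse)

mse = nn.MSELoss()
loss_tasks = mse(recon, coarse) + mse(downscaled, fine)
# Soft constraint: spatial mean of the output should track the coarse input mean.
loss_constraint = mse(downscaled.mean(dim=(2, 3)), coarse.mean(dim=(2, 3)))
loss = loss_tasks + 0.1 * loss_constraint
loss.backward()
```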

Fig. 5.

Two possible deep learning training techniques: (a) a single-task CNN and (b) a multitask CNN. In a single-task CNN, an algorithm is simply trained to perform one task, such as downscaling rainfall as implemented in Rampal et al. (2022a). In a multitask CNN, the algorithm is trained and optimized to perform two tasks simultaneously, where both tasks share the same latent space. In this case, the CNN is being trained to be both an autoencoder (i) and a regression in which it downscales rainfall (ii). The brown squares represent pooling of feature maps with convolution and ReLU, leading to a flatter layer.


7. Improving our understanding of uncertainty quantification in climate projections

The aim of this section is to outline future research opportunities using modern ML techniques to deepen our understanding of how different uncertainties affect projections from climate downscaling. The section begins by examining the three main forms of uncertainty in future climate projections (section 7a) and proceeds to discuss internal RCM physics uncertainty (section 7b). Last, it focuses on convection-permitting RCM emulators and their importance for resolving local-climate extremes and their corresponding uncertainty (section 7c).

a. Initial condition, model, and scenario uncertainty

Uncertainty in future climate projections arises from three main sources: initial condition uncertainty, model uncertainty and scenario uncertainty (Hawkins and Sutton 2009, 2011).

Initial condition uncertainty generated by GCMs is generally not accounted for in CORDEX-type downscaling efforts. Because of computational limitations, it is common practice to downscale only a single GCM ensemble member. Arguably, this is a major shortcoming of RCM projections, given the need to quantify initial condition uncertainty when determining trends in extreme events at regional scales (e.g., Deser et al. 2012; Perkins-Kirkpatrick et al. 2017).

The heavily reduced computational cost of running empirical methods for climate downscaling opens a substantial opportunity for ML-based empirical downscaling approaches to operate on various large initial condition ensembles (Gibson et al. 2021, 2023; Kay et al. 2015; Maher et al. 2019; Simpson et al. 2023) and quantify initial condition uncertainty. A wealth of GCM output now exists for this purpose, thanks to the large number of initial condition ensembles available through CMIP6 and various other recent initiatives (Maher et al. 2021). This opportunity appears underutilized to date by the empirical downscaling community, with ML-based empirical downscaling applications typically downscaling a single ensemble member from each GCM (e.g., Di Virgilio et al. 2020; Evans et al. 2014; Gibson et al. 2023; Jacob et al. 2020; Xu et al. 2019).

In addition to initial condition uncertainty, the high computational cost of RCMs poses challenges in evaluating the contributions of model uncertainty and scenario uncertainty to downscaled climate projections. Here, model uncertainty refers to the spread in downscaled future climate projections across multiple GCMs, while scenario uncertainty is the spread due to different shared socioeconomic pathways (SSPs). Each SSP corresponds to a different set of assumptions about future greenhouse gas emissions (e.g., high- and low-emission scenarios) prescribed to the GCM. ML-based empirical downscaling methods, being computationally efficient and adaptable to varying GCM requirements (as discussed in section 2c), can effectively navigate the combined model and scenario uncertainty space, making them more versatile across a broad spectrum of GCMs relative to RCMs.

b. RCM uncertainty

Another important element of model uncertainty, in addition to GCM model uncertainty, is quantifying the internal uncertainty of the RCM parameterization schemes (i.e., uncertainty of physical processes within the RCM). The choice of parameterization schemes in dynamical RCMs can greatly influence future projections of extreme rainfall and climate change signals (Sexton et al. 2021; Yamazaki et al. 2021). However, because of computational constraints, producing perturbed physics RCM ensembles, like those commonly produced in CMIP6 for GCMs (Eyring et al. 2016), is not currently common in CORDEX-type experiments (Giorgi et al. 2009; Giorgi and Gutowski 2015). One promising avenue would involve the generation of relatively short time-slice RCM simulations, each based on different physics schemes, followed by training separate RCM emulators for each scheme. Provided the emulator function is sufficiently transferable to other GCMs, each of these emulators could then be applied to future climate projections across different GCMs. This ensemble of future projections, associated with different parameterization schemes, would enable us to better understand uncertainty in future projections. This could serve as a valuable tool for extending uncertainty quantification, especially in local-level decision-making related to future extreme weather events.

c. Convection-permitting regional climate model emulators

Convection-permitting RCMs (CP-RCMs) provide added value by resolving convective processes more directly, which often translates to a better representation of local-scale extremes such as flooding events (Adinolfi et al. 2023; Coppola et al. 2020; Kendon et al. 2023; Prein et al. 2015). However, properly estimating the uncertainty of extreme events requires a large ensemble of CP-RCMs, and given the enormous computational cost, only a few studies have undertaken such research with dynamical downscaling (Kendon et al. 2023). ML advancements hold promise for realistically emulating high-resolution details from the output of CP-RCMs directly. Although the reliability of the out-of-distribution performance of RCM emulators remains an open research question, recent advances in ML algorithms (Addison et al. 2022; Mardani et al. 2023; Miralles et al. 2022) can potentially resolve high-resolution detail in a similar capacity to CP-RCMs. This provides an opportunity to significantly enhance our understanding of local climate extremes and the associated uncertainty under climate change. A small but growing number of CP-RCM datasets have been made available and could serve as useful training data for CP-RCM emulators (Rasmussen and Liu 2017; Rasmussen et al. 2023).

8. Climate downscaling evaluation

This section first covers two aspects of historical evaluation: evaluation using an independent observational test set separate from the training data [section 8a(1)] and evaluation during the historical simulation period [section 8a(2)]. It then discusses future evaluation strategies (section 8b). Last, it introduces XAI-based evaluation techniques (section 8c).

While there exists a large amount of literature and several frameworks discussing evaluation techniques for empirical downscaling algorithms (Gutiérrez et al. 2019; Maraun et al. 2015), this section is aimed at highlighting key evaluation aspects relevant to both observational downscaling (PP and SR) and RCM emulation. An example evaluation framework is presented in Fig. 6, which provides an overview of how classical evaluation approaches (e.g., Maraun et al. 2015) can be combined with machine learning–based evaluation techniques such as XAI, which is discussed in section 8c.

Fig. 6.

An illustration of (a) historical and (b) future evaluation strategies for assessing the performance of downscaled simulations from an ML algorithm. Historical validation strategies focus on two datasets: one covering an independent period of the observational dataset used for training and one consisting of historical simulations from a GCM.


a. Evaluation in the historical climate

1) Cross validation with observational data

The first evaluation phase of observational downscaling methods consists of cross validation based on the training data (reanalysis and observations) to assess the intrinsic performance of the method. Cross validation involves training with slightly different subsets or folds of the observational dataset while reserving a small independent portion for testing each time, or using only a single portion of the dataset (typically multiple years at the end of the time series) for testing. This way, the evaluation becomes more cost-effective and offers insights into the algorithm’s ability to extrapolate into the very near future, at a point in time where the impact of climate change has not yet shifted most of the predictors and the predictand outside their calibration range. The intrinsic performance is measured by evaluation metrics that focus on different aspects of the downscaled variable (marginal, temporal, spatial, extremes, process based; see Maraun et al. 2015), computed directly against distributions from observational data. Other key frequency-based metrics used in downscaling include the power spectral density (Geiss and Hardin 2020; Reddy et al. 2023; Vosper et al. 2023) and, in the context of downscaling rainfall, its peak intensity across a region (Vosper et al. 2023); the power spectral density is illustrated below.
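As an example of one such frequency-based metric, the following Python sketch computes a radially averaged power spectral density for a downscaled field and a reference field on synthetic data; the binning choices are illustrative:

```python
# Radially averaged power spectral density (PSD) of a 2D field, compared
# between a stand-in "observed" field and a stand-in downscaled output.
import numpy as np

def radial_psd(field):
    """Radially averaged power spectrum of a 2D field."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)   # integer radial wavenumber bins
    k_max = min(ny, nx) // 2
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums[:k_max] / counts[:k_max]

rng = np.random.default_rng(0)
obs = rng.normal(size=(64, 64))                            # stand-in observed field
downscaled = obs + 0.3 * rng.normal(size=(64, 64))         # stand-in ML output
print(radial_psd(downscaled)[1:6] / radial_psd(obs)[1:6])  # spectral ratio by scale
```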

These observationally based out-of-distribution testing approaches can also be used to understand the degree to which training on one metric or time scale for a given variable provides benefits in others (so-called metric transitivity or temporal transitivity) (Abramowitz et al. 2019). For RCM emulators, cross validation can be performed on independent subsets of the simulation(s) used in training.

2) Validation with historical GCMs simulations

The next phase involves applying the downscaling algorithm (typically retrained using the whole observational period, to maximize the observational constraint) to GCM historical inputs and evaluating the algorithm’s outputs. As discussed in section 4d, the downscaled outputs are not directly comparable to observations since they are derived from GCMs (whose internal variability is not synchronized with observations), and thus only statistical magnitudes of the validation metrics (e.g., climatologies or biases) can be compared with observations. Several studies have measured the added value that an RCM provides over the host GCM (e.g., Careto et al. 2022; Di Virgilio et al. 2020; Rummukainen 2016). This involves measuring the reduction of bias across various metrics [e.g., the ETCCDI indices of Zhang et al. (2011), a set of 27 indices that measure different aspects of climate extremes such as temperature, precipitation, and drought] and quantifying the improvement, that is, the added value, provided by empirical downscaling relative to the host GCM and/or an ensemble of RCMs; a simple added-value diagnostic is sketched below. Similarly, the benchmarking framework proposed by Isphording et al. (2024) offers a comparable approach to assessing added value. It provides well-documented methodologies for deriving similar metrics, acting as a valuable resource for evaluating downscaling strategies, especially in the context of rainfall.
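A simple grid-point added-value diagnostic of this kind can be sketched as follows; the squared-error formulation shown here is one common choice, and the cited studies use a broader range of metrics:

```python
# Grid-point "added value": reduction in squared error of a climatological
# statistic relative to the host GCM, on synthetic fields.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, size=(40, 50))          # observed climatology (e.g., RX1Day)
gcm = obs + rng.normal(1.5, 1.0, size=obs.shape)  # biased host-GCM climatology
downscaled = obs + rng.normal(0.5, 0.8, size=obs.shape)

av = ((gcm - obs) ** 2 - (downscaled - obs) ** 2) / np.maximum((gcm - obs) ** 2, 1e-9)
print("fraction of grid points with positive added value:", (av > 0).mean())
```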

For RCM emulators, direct historical evaluation is possible [see evaluation metrics in section 8a(1)] if ground-truth RCM simulations are available. Otherwise, the approach mentioned above can also be employed for the historical evaluation of RCM emulators.

b. Evaluation in the future climate

Given the unavailability of future observations, direct evaluation in future climates is not possible for observational downscaling algorithms. Some studies assess the future out-of-distribution performance of empirical algorithms by comparing the climate change signal of a variable of interest from the empirical model with that of a GCM or an ensemble of RCMs (Baño-Medina et al. 2021, 2022; Hernanz et al. 2022b; Legasa et al. 2023). This comparison assumes that both GCM and downscaled climate signals follow the host GCM’s broad atmospheric patterns, and thus only minor downscaling deviations are anticipated (Baño-Medina et al. 2022; Maraun et al. 2015; Themeßl et al. 2012). This assumption lacks a strong foundation in physical principles, and thus this method should only provide a low-level evaluation of future performance. Alternatively, the “perfect model”/“pseudoreality” experiments discussed in section 6a serve as our best proxy for a legitimate out-of-distribution test of observational downscaling algorithms in future climate scenarios. Note that for RCM emulators, direct future evaluation of the climate change signal is possible: downscaled outputs from RCM emulators can be evaluated against ground-truth simulations that were not used in training.

For instance, an example strategy implemented in Baño-Medina et al. (2022) compared the downscaled climate change signal from an empirical algorithm with those of other RCMs from CORDEX runs (or the GCM directly). If the ML algorithm can accurately replicate similar climate change patterns, this provides a promising indication of its extrapolation capabilities. Conversely, a significant deviation in the climate change signal from the RCMs and GCMs often suggests inconsistency, and further investigation is typically warranted. It is important to note that complete consensus across all methodologies, including RCM and GCM comparisons, is not always expected (Boé et al. 2020). When evaluating the climate change signal, variables such as temperature often show consensus among RCMs, GCMs, and empirical algorithms, whereas rainfall shows less agreement (Boé et al. 2020). While future evaluation studies typically focus on the end of the century (2080–2100), it may also be beneficial to perform an evaluation in the near future (present day–2040), which could help identify extrapolation boundaries for ML-based empirical downscaling algorithms.

The method for computing the climate change signal can be extended to encompass extreme indices, such as the ETCCDI indices (e.g., RX1Day, the wettest rainfall day of the year). Extreme indices may provide a more extensive out-of-distribution evaluation for ML algorithms. For example, one could assess whether the ML algorithm’s output can reproduce the end-of-century RX1Day trends in GCM and RCM simulations, which in some regions exceed 7% per kelvin of warming (Norris et al. 2019), a value often inferred from the Clausius–Clapeyron equation. A simple sketch of this type of diagnostic is given below.
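For illustration, the following Python sketch computes the RX1Day climate change signal of a (synthetic) downscaled dataset and scales it by an assumed host-GCM warming; the data and the warming value are placeholders:

```python
# RX1Day climate change signal scaled by host-GCM warming, on synthetic
# daily precipitation with shape (years, days, lat, lon).
import numpy as np

rng = np.random.default_rng(3)

def rx1day(daily_pr):
    """Annual-maximum 1-day precipitation, averaged over years."""
    return daily_pr.max(axis=1).mean(axis=0)

pr_hist = rng.gamma(2.0, 3.0, size=(20, 365, 10, 10))
pr_future = rng.gamma(2.0, 3.45, size=(20, 365, 10, 10))   # intensified extremes
warming = 3.0                                              # assumed host-GCM warming (K)

signal = 100 * (rx1day(pr_future) - rx1day(pr_hist)) / rx1day(pr_hist)  # % change
print("domain-mean RX1Day change: %.1f %% per K" % (signal.mean() / warming))
```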

Other important evaluation measures for empirical downscaling algorithms include event-based evaluation (applicable to historical and future climates). For example, one could assess the skill of an algorithm’s downscaled output as a function of different weather regimes in the GCM (e.g., clustered through geopotential height). This could provide insight as to whether the model is producing physically plausible outcomes or predictions. Other examples could include evaluating performance during extreme events such as cyclones or atmospheric rivers and could involve using atmospheric phenomena-based tracking algorithms on GCM or RCM outputs (e.g., Ullrich and Zarzycki 2017). Additionally, if an RCM emulator is capable of emulating variables such as specific humidity, winds, and mean sea level pressure, it may be possible to apply tracking algorithms to its outputs directly to assess cyclones and atmospheric rivers. Similar analyses have also been performed for dynamical downscaling (Gibson et al. 2023).

c. XAI

1) Examples of XAI in climate science and downscaling

Despite the ongoing success of deep learning algorithms in climate science research, the decisions and relationships learned by these algorithms often remain unclear because of their black-box nature (Doshi-Velez and Kim 2017; McGovern et al. 2019; Rampal et al. 2022b). XAI has emerged as a powerful tool for enhancing the transparency of ML algorithms in climate science (Ebert-Uphoff and Hilburn 2020; Mamalakis et al. 2018; McGovern et al. 2019) and has been successfully applied to aspects of extreme weather events (Dikshit and Pradhan 2021; Hobeichi et al. 2023; Gagne et al. 2019; Rampal et al. 2022a) and to better understanding and predicting climate phenomena (Clare et al. 2022; Gibson et al. 2021; Gagne et al. 2019; Y. Liu et al. 2023; Molina et al. 2021; Pegion et al. 2022; Sonnewald and Lguensat 2021; Toms et al. 2021).

Recent applications of XAI in empirical downscaling have demonstrated efficacy in identifying the most relevant large-scale (coarse resolution) features when predicting downscaled temperature and precipitation (Baño-Medina et al. 2020). Rampal et al. (2022a) used a gradient-based XAI technique to understand the spatial locations influencing the prediction of extreme precipitation events when downscaling complex climate phenomena such as atmospheric rivers and cyclones. González-Abad et al. (2023a), Baño-Medina et al. (2023), and Balmaceda-Huarte et al. (2023) used XAI to identify potential biases and detect spurious or nonphysical relationships within ML climate downscaling algorithms, which provided a measure of their extrapolative capabilities [see Fig. 7 for a schematic view of the metrics developed in González-Abad et al. (2023a)].

Fig. 7.

Schematic view of XAI-based diagnostics, adapted from González-Abad et al. (2023a) with permission. Saliency maps, which are interpretations generated using XAI techniques, are computed across all grid points in the predictand space, for each observation (day). Two days (start and end of a given period) are shown in columns, and three grid points representing north, central, and south locations are depicted in the predictand space. The resulting saliency maps are aggregated, yielding two diagnostics that provide insights into different aspects of the ML algorithm [for detailed explanations, refer to González-Abad et al. (2023a)]. Note that the accumulated saliency and saliency dispersion diagnostics are applied over the predictor and predictand domains, respectively.


2) XAI-based evaluation for downscaling

These insights from XAI offer a novel offline method for evaluating the extrapolation capabilities of empirical downscaling algorithms and could be integrated alongside classical downscaling evaluation techniques, as illustrated in Fig. 6. For example, to assess the stationarity assumption, gradient-based XAI techniques could be applied to evaluate how the relationships learned by an ML-based empirical downscaling algorithm evolve over time; a minimal saliency-map sketch is given below. This could also provide an indication of algorithm stability and facilitate the detection of algorithm drift or performance degradation. Examples of such evaluation techniques are provided in González-Abad et al. (2023a), Baño-Medina et al. (2023), and Balmaceda-Huarte et al. (2023).
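As a minimal illustration of the gradient-based saliency maps referred to above, the following PyTorch sketch computes the gradient of a single predicted grid point with respect to the input predictors for a dummy network; comparing such maps between historical and future inputs is one way to probe stationarity. The model and shapes are placeholders:

```python
# Gradient-based saliency for a downscaling network: relevance of each input
# predictor grid point to one predicted high-resolution grid point.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 32 * 32))   # 4 predictors on a 16x16 grid

x = torch.randn(1, 4, 16, 16, requires_grad=True)      # one day of predictors
output = model(x).view(32, 32)
output[10, 10].backward()                               # pick one predictand grid point
saliency = x.grad.abs().squeeze(0)                      # (predictor, lat, lon) relevance
print(saliency.shape, float(saliency.sum()))
```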

Despite the promising capabilities of XAI, these techniques have their own limitations (Rudin 2019). For example, some studies have noted inconsistencies in conclusions derived from different XAI techniques and inherent issues related to the nature of these techniques (Bommer et al. 2023; Mamalakis et al. 2022a, 2023). It is important to highlight that XAI techniques may also oversimplify the decision-making process of ML-based empirical downscaling algorithms, and caution should be applied when interpreting results. A few studies have begun to address some of these issues in the broader context of climate science research (Mamalakis et al. 2022b), but this remains an active area of research. While XAI can fall short in deciphering ML algorithms, interpretable AI, which focuses on the design of inherently transparent algorithms, offers promising prospects, particularly in downscaling and climate science, where it remains largely unexplored (Barnes et al. 2022; Rudin 2019).

9. Aligning ML with existing coordinated downscaling efforts

Given the challenges and opportunities presented above, it is worth considering how ML-based empirical downscaling algorithms might fit within existing and future coordinated downscaling efforts, namely, CORDEX. Currently, most of the continental-wide projections produced in the different CORDEX domains rely exclusively on simulations from dynamical RCMs, while most of the projections available from (PP) statistical downscaling methods are typically limited to smaller national domains. The impacts and adaptation communities extensively use RCM projections, and ongoing familiarity and assessment of their merits and limitations foster increasing confidence in their output. However, because of computational resource limitations, the ensembles currently available in most CORDEX regions risk undersampling uncertainty stemming from multiple sources (Diez-Sierra et al. 2022). These include model uncertainties due to sparse GCM–RCM pair selection, limited sampling of RCM parametric uncertainties, and internal variability. Additionally, the current CORDEX downscaling efforts (∼25-km resolution) may inadequately represent certain impactful storms and extreme events.

As discussed above, with careful application and evaluation, ML algorithms are now well placed to begin addressing a number of these issues, potentially allowing more comprehensive regional ensembles in the different CORDEX domains. Discussion and suggestions for how modern ML methods can be better aligned with CORDEX efforts were recently included in a CORDEX white paper on observational downscaling/empirical statistical downscaling (Gutiérrez et al. 2022). An extension of the experimental CMIP6 downscaling protocol available for dynamical models (CORDEX 2021) would facilitate the intercomparison and adoption of ML methods. Data and infrastructure challenges will be especially important for ongoing successful implementation. This includes adoption (and possible extension) of the CORDEX data reference syntax (DRS) to ensure consistencies in metadata, variable and file naming conventions, experiment and domain specifications, and version control, among others. Development of formal guidance on the best practice setup for different types of ML algorithms would also be useful, analogous to the existing CORDEX guidance provided for the experimental design of downscaling projects using dynamical models (CORDEX 2021). More fundamentally however, community trust and adoption of output produced by ML algorithms will first depend on comprehensive evaluation of its output that is tailored specifically to these types of algorithms (see evaluation framework presented in section 8). Despite several challenges, the road ahead for greater inclusion of ML algorithms looks increasingly promising. Recently, Baño-Medina et al. (2022) made significant progress on this front, providing the first perfect prognosis continental-scale climate projections produced through modern ML approaches (i.e., computer vision) that formally contributed to CORDEX [0.44° resolution over the European CORDEX domain (CORDEX EUR-44)].

10. Conclusions

In this review, we have examined recent advancements in machine learning (ML) for climate downscaling. Our review focused on two specific climate downscaling themes, known as observational downscaling and RCM emulation, each of which serves a distinct purpose with unique challenges and advantages.

Observational downscaling approaches [perfect prognosis (PP) and superresolution (SR)] are trained on observational datasets capturing local-climate complexities. RCM emulators are developed to mirror the function of an RCM and are trained on simulations such as the open-access datasets from CORDEX. RCM emulators can learn from both future and historical simulations, which is not possible for PP and SR. RCM emulators also present an opportunity to mimic high-resolution simulations like CP-RCMs, an opportunity not afforded by observational datasets because of their spatial resolution limitations. Assuming RCM simulations are available, emulators can also be trained in regions that typically have sparse observational data. However, the inherent biases in RCMs may yield projections that are less realistic for practical decision-making contexts.

Our review critically examines these issues and opportunities of ML for climate downscaling, framed around three pivotal research questions.

a. Can empirical downscaling algorithms adequately reproduce the present-day climate, including its extremes?

Overall, recent ML advancements, notably in GANs and diffusion models, have significantly improved over traditional algorithms for empirical downscaling (both observational downscaling and RCM emulation) in reproducing the present-day climate and its extremes.

Traditional observational downscaling and RCM emulator algorithms are simpler (e.g., multiple linear regression, statistical analogs), more interpretable, and generally require less training data than more complex ML empirical downscaling algorithms (e.g., GANs). However, their main limitation is that they struggle to learn complex spatiotemporal features from climate fields, which is particularly important for predicting dynamical variables such as rainfall and surface wind fields.

There is a consensus in the literature that convolutional neural networks (CNNs) have superior performance over traditional empirical downscaling methods in the present-day climate. They are capable of learning complex dynamics from climate fields, which can help them better resolve extreme events, especially for variables such as wind and rainfall. CNNs can also be more readily applied to downscale across extensive regions, such as entire continents, with a single algorithm, unlike traditional approaches that often require a separate algorithm for each grid cell. However, CNN architectures also have limitations. First, owing to their large number of training parameters, they often require significant amounts of data to train, which can be problematic when limited training data are available. Additionally, they can suffer from “regression to the mean” and underestimate or smooth out extreme events, albeit less so than traditional statistical downscaling algorithms.

GANs and diffusion models have shown improvements relative to CNNs in accurately estimating extreme events, offering more realistic predictions, and better resolving local-scale climate processes. Notable limitations include instability when training GANs and slow inference times for diffusion models. Similar to CNNs, GANs and diffusion models also require large amounts of training data in comparison with traditional empirical downscaling algorithms. However, innovative ML strategies such as transfer learning, as discussed in section 6a, have the potential to overcome limitations in training data.

Although ML advancements are promising, there is a lack of agreed-upon evaluation metrics for algorithms (see section 8 for an evaluation framework), and many studies overlook climate downscaling-specific evaluation measures (e.g., the climatology of rainfall). Algorithm evaluation on specific events or case studies, such as cyclones or atmospheric river events, can help in understanding limitations and areas for improvement. Additionally, it is important to recognize factors beyond skill in the present-day climate when selecting an algorithm to downscale future climate projections.

b. How effectively can empirical downscaling algorithms adapt outside their training distribution to unobserved historical and future climates?

The issue of domain adaptation arises when an empirical downscaling algorithm trained on a specific simulation (e.g., reanalysis/observations or a specific RCM simulation) is applied to predictor fields from a GCM or scenario not seen during training (e.g., high-emission scenarios), which are often out of distribution relative to the training data.

Overall, the issue of domain adaptation has received little attention in recent ML advances in observational downscaling. Domain adaptation has only been examined in a few PP studies and remains largely overlooked in SR downscaling, despite its prominence in climate downscaling. Additionally, domain adaptability conditions may vary between SR and PP methods, given SR’s reliance on surface-level predictors (e.g., rainfall), which have often been suggested to be unrealistically simulated in GCMs.

Several studies have underscored the need for bias adjustment prior to observational downscaling to improve domain adaptability of PP approaches. Following bias adjustment, PP CNN algorithms are suggested to better adapt to GCM predictor fields across both historical and future simulations when compared with traditional observational downscaling algorithms. This improved adaptability in CNNs is thought to arise from their capacity to learn stable and generalizable relationships from climate fields as opposed to traditional empirical downscaling algorithms, which are often sensitive to feature selection.

Bias adjustment, while important, should not be seen as accounting for all domain adaptation issues, as it addresses misalignment in predictor fields rather than out-of-distribution challenges. A potential novel strategy to address out-of-distribution challenges in ML, detailed in section 6a, is transfer learning. Here, transfer learning involves initially “pretraining” the algorithm on RCM simulations, which span both future and historical scenarios, and subsequently fine-tuning it on observational data. By exposing the model to a variety of climate conditions before fine-tuning, transfer learning potentially enhances the algorithm’s performance and its ability to adapt to out-of-distribution situations. Furthermore, in section 6a we highlighted the importance of pseudoreality experiments for understanding the domain adaptability and extrapolation conditions of ML algorithms.

RCM emulators have the advantage of access to training data in both historical and future climate settings, which has been shown to improve out-of-distribution performance in future climates in certain scenarios. However, RCM emulators also encounter challenges associated with domain adaptation, an issue that is independent of algorithm complexity (affecting both traditional and modern ML algorithms). RCM emulators are trained in either the perfect or the imperfect training framework. Training an empirical algorithm in the imperfect framework is challenging because of spatiotemporal inconsistencies between low-resolution GCM and high-resolution RCM fields. Conversely, the perfect framework simplifies this problem by coarsening high-resolution RCM fields to a lower resolution, improving correlations, albeit with a function slightly different from that of an RCM.

Overall, future out-of-distribution performance in the perfect framework appears largely independent of the RCM simulation used in training, making it portable across the GCM–RCM matrix. However, it functions differently from an RCM, and additional research is needed to assess the reliability of its future projections. Conversely, future out-of-distribution performance in the imperfect framework appears to depend on the RCM simulation used in training. This reliance on the training simulation suggests that it is less generalizable and portable across the GCM–RCM matrix than the perfect framework. Further research and evaluation of both frameworks are necessary, with potential adaptability experiments detailed in sections 6a and 6b. Additionally, ML techniques such as transfer learning may offer the potential to improve training in the imperfect framework by splitting training into two simpler tasks: pretraining in the simpler perfect framework, followed by fine-tuning in the imperfect framework.

c. Which techniques can be implemented to enhance the transparency and interpretability of empirical downscaling algorithms?

More modern ML algorithms (e.g., CNNs and GANs) are generally less interpretable than traditional downscaling methods. This complexity can obscure decision-making processes, posing challenges for the adoption of climate projections generated by ML. To enhance the usability, transparency, and interpretability of ML-based empirical downscaling algorithms, two primary strategies can be pursued: the integration of explainable artificial intelligence (XAI) or interpretable artificial intelligence techniques (where ML algorithms are built on the premise of being interpretable and transparent, as opposed to explaining their predictions post hoc as in XAI) and the implementation of physical/statistical constraints in ML algorithms.

Following this premise, our review places emphasis on the capacity of XAI to improve the transparency and interpretability of downscaling algorithms, detect spurious relationships, and serve as an offline evaluation technique, in addition to classical evaluation techniques to identify limitations in ML-based empirical downscaling. Furthermore, XAI has the potential to measure the model’s extrapolation limits and recognize its limitations from the perspective of key physical processes.

Integrating physical and statistical constraints into ML algorithms remains challenging but has potential for more widespread use. While nontrivial, these constraints may be tailored to user-specific needs or the variables being downscaled. Examples of useful statistical constraints include the conservation of annual rainfall, which may improve the model’s extrapolative capability into future climates. Additionally, a statistical constraint such as preserving spatial averages may force the high-resolution outputs of the ML algorithm to align with the corresponding coarse-resolution predictor fields, which could enable the model to preserve aspects such as the climate change signal when applied to GCM outputs; a simple sketch of such a constraint is given below.
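One way to implement such a spatial-average constraint as a hard output layer is sketched below: the high-resolution field is rescaled so that each coarse block average matches the corresponding coarse-resolution value. This is a simplified illustration, not a specific published constraint layer:

```python
# Hard constraint on spatial averages: rescale the high-resolution output so
# that each coarse block mean equals the coarse-resolution input value.
import torch

def enforce_block_means(hr, coarse, factor=4, eps=1e-8):
    """Rescale hr (N,1,H,W) so its factor x factor block means equal coarse (N,1,H/f,W/f)."""
    n, c, h, w = hr.shape
    blocks = hr.view(n, c, h // factor, factor, w // factor, factor)
    block_means = blocks.mean(dim=(3, 5), keepdim=True)
    scale = (coarse.view(n, c, h // factor, 1, w // factor, 1) + eps) / (block_means + eps)
    return (blocks * scale).view(n, c, h, w)

hr_raw = torch.rand(2, 1, 32, 32)          # unconstrained network output
coarse = torch.rand(2, 1, 8, 8)            # coarse predictor field to be conserved
hr = enforce_block_means(hr_raw, coarse)
check = hr.view(2, 1, 8, 4, 8, 4).mean(dim=(3, 5))
print(torch.allclose(check, coarse, atol=1e-5))
```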

d. Outlook: Improving uncertainty quantification in climate projections

The most significant advantage of ML-based empirical downscaling algorithms lies in their computational efficiency relative to RCMs. Thus, they can be more easily applied broadly across the GCM–RCM matrix or to a large initial condition ensemble to improve our understanding of the contributions of initial condition, model, and scenario uncertainty to future climate projections. Furthermore, to accurately assess the uncertainty of local-scale extreme events, a large ensemble of convection-permitting RCMs is necessary. Modern ML algorithms such as GANs provide a promising alternative and can now emulate RCMs at very high spatial resolutions (<5 km) for a fraction of the computational cost of an RCM.

ML currently lacks the capability to generate future simulations in the way GCMs do, or to contribute directly to physical discovery (e.g., at higher resolution) in the same way as physics-based RCMs. While there are some promising initiatives, such as neural GCMs (Kochkov et al. 2023), carefully designed ML training approaches, in tandem with better alignment with existing CORDEX frameworks, offer a promising path forward for improving the uptake and reliability of climate downscaling products produced by ML.

Acknowledgments.

Authors Rampal, Hobeichi, and Abramowitz acknowledge the support of the Australian Research Council Centre of Excellence for Climate Extremes (CLEX; CE170100023). Rampal and author Gibson received funding from the New Zealand MBIE Endeavour Smart Ideas Fund (C01X2202). The authors declare no conflicts of interest. The National Center for Atmospheric Research is sponsored by the U.S. National Science Foundation.

Data availability statement.

As a review, this paper has produced no data.

REFERENCES

  • Abramowitz, G., and Coauthors, 2019: ESD reviews: Model dependence in multi-model climate ensembles: Weighting, sub-selection and out-of-sample testing. Earth Syst. Dyn., 10, 91–105, https://doi.org/10.5194/esd-10-91-2019.

  • Addison, H., E. Kendon, S. Ravuri, L. Aitchison, and P. A. Watson, 2022: Machine learning emulation of a local-scale UK climate model. arXiv, 2211.16116v1, https://doi.org/10.48550/arXiv.2211.16116.

  • Adewoyin, R. A., P. Dueben, P. Watson, Y. He, and R. Dutta, 2021: TRU-NET: A deep learning approach to high resolution prediction of rainfall. Mach. Learn., 110, 2035–2062, https://doi.org/10.1007/s10994-021-06022-6.

  • Adinolfi, M., M. Raffa, A. Reder, and P. Mercogliano, 2023: Investigation on potential and limitations of ERA5 reanalysis downscaled on Italy by a convection-permitting model. Climate Dyn., 61, 4319–4342, https://doi.org/10.1007/s00382-023-06803-w.

  • Alerskans, E., J. Nyborg, M. Birk, and E. Kaas, 2022: A transformer neural network for predicting near-surface temperature. Meteor. Appl., 29, e2098, https://doi.org/10.1002/met.2098.

  • Babaousmail, H., R. Hou, G. T. Gnitou, and B. Ayugi, 2021: Novel statistical downscaling emulator for precipitation projections using deep convolutional autoencoder over northern Africa. J. Atmos. Sol.-Terr. Phys., 218, 105614, https://doi.org/10.1016/j.jastp.2021.105614.

  • Bailie, T., Y. S. Koh, N. Rampal, and P. B. Gibson, 2024: Quantile-regression-ensemble: A deep learning algorithm for downscaling extreme precipitation. Proc. AAAI Conf. on Artificial Intelligence, Vancouver, BC, Canada, AAAI, 21 914–21 922, https://doi.org/10.1609/aaai.v38i20.30193.

  • Balmaceda-Huarte, R., and M. L. Bettolli, 2022: Assessing statistical downscaling in Argentina: Daily maximum and minimum temperatures. Int. J. Climatol., 42, 8423–8445, https://doi.org/10.1002/joc.7733.

  • Balmaceda-Huarte, R., J. Baño-Medina, M. E. Olmo, and M. L. Bettolli, 2023: On the use of convolutional neural networks for downscaling daily temperatures over southern South America in a climate change scenario. Climate Dyn., 62, 383–397, https://doi.org/10.1007/s00382-023-06912-6.

  • Ban, N., J. Schmidli, and C. Schär, 2014: Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 119, 7889–7907, https://doi.org/10.1002/2014JD021478.

  • Baño-Medina, J., 2020: Understanding deep learning decisions in statistical downscaling models. Proc. 10th Int. Conf. on Climate Informatics, Online, Association for Computing Machinery, 79–85, https://doi.org/10.1145/3429309.3429321.

  • Baño-Medina, J., R. Manzanas, and J. M. Gutiérrez, 2020: Configuration and intercomparison of deep learning neural models for statistical downscaling. Geosci. Model Dev., 13, 2109–2124, https://doi.org/10.5194/gmd-13-2109-2020.

  • Baño-Medina, J., R. Manzanas, and J. M. Gutiérrez, 2021: On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections. Climate Dyn., 57, 2941–2951, https://doi.org/10.1007/s00382-021-05847-0.

  • Baño-Medina, J., R. Manzanas, E. Cimadevilla, J. Fernández, J. González-Abad, A. S. Cofiño, and J. M. Gutiérrez, 2022: Downscaling multi-model climate projection ensembles with deep learning (DeepESD): Contribution to CORDEX EUR-44. Geosci. Model Dev., 15, 6747–6758, https://doi.org/10.5194/gmd-15-6747-2022.

  • Baño-Medina, J., M. Iturbide, J. Fernandez, and J. M. Gutierrez, 2023: Transferability and explainability of deep learning emulators for regional climate model projections: Perspectives for future applications. arXiv, 2311.03378v1, https://doi.org/10.48550/arXiv.2311.03378.

  • Barnes, E. A., R. J. Barnes, Z. K. Martin, and J. K. Rader, 2022: This looks like that there: Interpretable neural networks for image tasks when location matters. Artif. Intell. Earth Syst., 1, e220001, https://doi.org/10.1175/AIES-D-22-0001.1.

  • Bartók, B., and Coauthors, 2017: Projected changes in surface solar radiation in CMIP5 global climate models and in EURO-CORDEX regional climate models for Europe. Climate Dyn., 49, 2665–2683, https://doi.org/10.1007/s00382-016-3471-2.

  • Bedia, J., and Coauthors, 2020: Statistical downscaling with the downscaleR package (v3.1.0): Contribution to the VALUE intercomparison experiment. Geosci. Model Dev., 13, 1711–1735, https://doi.org/10.5194/gmd-13-1711-2020.

  • Benestad, R. E., 2004: Empirical–statistical downscaling in climate modeling. Eos, Trans. Amer. Geophys. Union, 85, 417–422, https://doi.org/10.1029/2004EO420002.

  • Benestad, R. E., 2010: Downscaling precipitation extremes: Correction of analog models through PDF predictions. Theor. Appl. Climatol., 100 (1–2), 1–21, https://doi.org/10.1007/s00704-009-0158-1.

  • Benestad, R. E., I. Hanssen-Bauer, and E. J. Førland, 2007: An evaluation of statistical models for downscaling precipitation and their ability to capture long-term trends. Int. J. Climatol., 27, 649–665, https://doi.org/10.1002/joc.1421.

  • Beucler, T., M. Pritchard, S. Rasp, J. Ott, P. Baldi, and P. Gentine, 2021: Enforcing analytic constraints in neural networks emulating physical systems. Phys. Rev. Lett., 126, 098302, https://doi.org/10.1103/PhysRevLett.126.098302.

  • Bittner, M., S. Hobeichi, M. Zawish, S. Diatta, R. Ozioko, S. Xu, and A. Jantsch, 2023: An LSTM-based downscaling framework for Australian precipitation projections. NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning, New Orleans, LA, NeurIPS, https://www.climatechange.ai/papers/neurips2023/46.

  • Boé, J., L. Terray, F. Habets, and E. Martin, 2007: Statistical and dynamical downscaling of the Seine basin climate for hydro-meteorological studies. Int. J. Climatol., 27, 1643–1655, https://doi.org/10.1002/joc.1602.

  • Boé, J., S. Somot, L. Corre, and P. Nabat, 2020: Large discrepancies in summer climate change over Europe as projected by global and regional climate models: Causes and consequences. Climate Dyn., 54, 2981–3002, https://doi.org/10.1007/s00382-020-05153-1.

  • Boé, J., A. Mass, and J. Deman, 2023: A simple hybrid statistical–dynamical downscaling method for emulating regional climate models over western Europe. Evaluation, application, and role of added value? Climate Dyn., 61, 271–294, https://doi.org/10.1007/s00382-022-06552-2.

  • Bommer, P., M. Kretschmer, A. Hedström, D. Bareeva, and M. M.-C. Höhne, 2023: Finding the right XAI method—A guide for the evaluation and ranking of explainable AI methods in climate science. arXiv, 2303.00652v1, https://doi.org/10.48550/arXiv.2303.00652.

  • Boulaguiem, Y., J. Zscheischler, E. Vignotto, K. van der Wiel, and S. Engelke, 2022: Modeling and simulating spatial extremes by combining extreme value theory with generative adversarial networks. Environ. Data Sci., 1, e5, https://doi.org/10.1017/eds.2022.4.

  • Brenowitz, N. D., and C. S. Bretherton, 2018: Prognostic validation of a neural network unified physics parameterization. Geophys. Res. Lett., 45, 6289–6298, https://doi.org/10.1029/2018GL078510.

  • Brenowitz, N. D., T. Beucler, M. Pritchard, and C. S. Bretherton, 2020: Interpreting and stabilizing machine-learning parametrizations of convection. J. Atmos. Sci., 77, 4357–4375, https://doi.org/10.1175/JAS-D-20-0082.1.

  • Cannon, A. J., 2008: Probabilistic multisite precipitation downscaling by an expanded Bernoulli–gamma density network. J. Hydrometeor., 9, 1284–1300, https://doi.org/10.1175/2008JHM960.1.

    • Search Google Scholar
    • Export Citation
  • Careto, J. A. M., P. M. M. Soares, R. M. Cardoso, S. Herrera, and J. M. Gutiérrez, 2022: Added value of EURO-CORDEX high-resolution downscaling over the Iberian Peninsula revisited—Part 1: Precipitation. Geosci. Model Dev., 15, 26352652, https://doi.org/10.5194/gmd-15-2635-2022.

    • Search Google Scholar
    • Export Citation
  • Carreau, J., and M. Vrac, 2011: Stochastic downscaling of precipitation with neural network conditional mixture models. Water Resour. Res., 47, W10502, https://doi.org/10.1029/2010WR010128.

    • Search Google Scholar
    • Export Citation
  • Chadwick, R., E. Coppola, and F. Giorgi, 2011: An artificial neural network technique for downscaling GCM outputs to RCM spatial scale. Nonlinear Processes Geophys., 18, 10131028, https://doi.org/10.5194/npg-18-1013-2011.

    • Search Google Scholar
    • Export Citation
  • Charles, S., B. C. Bates, P. Whetton, and J. Hughes, 1999: Validation of downscaling models for changed climate conditions: Case study of southwestern Australia. Climate Res., 12, 114, https://doi.org/10.3354/cr012001.

    • Search Google Scholar
    • Export Citation
  • Chen, L., B. Han, X. Wang, J. Zhao, W. Yang, and Z. Yang, 2023: Machine learning methods in weather and climate applications: A survey. Appl. Sci., 13, 12019, https://doi.org/10.3390/app132112019.

    • Search Google Scholar
    • Export Citation
  • Chen, X. I., N. Mishra, M. Rohaninejad, and P. Abbeel, 2018: PixelSNAIL: An improved autoregressive generative model. Proc. 35th Int. Conf. on Machine Learning, Stockholm, Sweden, PMLR, 864–872, https://proceedings.mlr.press/v80/chen18h.html.

  • Clare, M. C. A., M. Sonnewald, R. Lguensat, J. Deshayes, and V. Balaji, 2022: Explainable artificial intelligence for Bayesian neural networks: Toward trustworthy predictions of ocean dynamics. J. Adv. Model. Earth Syst., 14, e2022MS003162, https://doi.org/10.1029/2022MS003162.

    • Search Google Scholar
    • Export Citation
  • Coppola, E., and Coauthors, 2020: A first-of-its-kind multi-model convection permitting ensemble for investigating convective phenomena over Europe and the Mediterranean. Climate Dyn., 55, 334, https://doi.org/10.1007/s00382-018-4521-8.

    • Search Google Scholar
    • Export Citation
  • CORDEX, 2021: CORDEX experiment design for dynamical downscaling of CMIP6. 8 pp., https://cordex.org/wp-content/uploads/2021/05/CORDEX-CMIP6_exp_design_RCM.pdf.

  • Dai, A., 2006: Precipitation characteristics in eighteen coupled climate models. J. Climate, 19, 46054630, https://doi.org/10.1175/JCLI3884.1.

    • Search Google Scholar
    • Export Citation
  • Dayon, G., J. Boé, and E. Martin, 2015: Transferability in the future climate of a statistical downscaling method for precipitation in France. J. Geophys. Res. Atmos., 120, 10231043, https://doi.org/10.1002/2014JD022236.

    • Search Google Scholar
    • Export Citation
  • de Burgh-Day, C. O., and T. Leeuwenburg, 2023: Machine learning for numerical weather and climate modelling: A review. Geosci. Model Dev., 16, 64336477, https://doi.org/10.5194/gmd-16-6433-2023.

    • Search Google Scholar
    • Export Citation
  • Deser, C., and A. S. Phillips, 2023: A range of outcomes: The combined effects of internal variability and anthropogenic forcing on regional climate trends over Europe. Nonlinear Processes Geophys., 30, 6384, https://doi.org/10.5194/npg-30-63-2023.

    • Search Google Scholar
    • Export Citation
  • Deser, C., A. S. Phillips, V. Bourdette, and H. Teng, 2012: Uncertainty in climate change projections: The role of internal variability. Climate Dyn., 38, 527546, https://doi.org/10.1007/s00382-010-0977-x.

    • Search Google Scholar
    • Export Citation
  • Dhariwal, P., and A. Nichol, 2021: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), M. Ranzato et al., Eds., Curran Associates, 8780–8794, https://proceedings.neurips.cc/paper_files/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html.

  • Diaconescu, E. P., R. Laprise, and L. Sushama, 2007: The impact of lateral boundary data errors on the simulated climate of a nested regional climate model. Climate Dyn., 28, 333350, https://doi.org/10.1007/s00382-006-0189-6.

    • Search Google Scholar
    • Export Citation
  • Diez-Sierra, J., and Coauthors, 2022: The worldwide C3S CORDEX grand ensemble: A major contribution to assess regional climate change in the IPCC AR6 Atlas. Bull. Amer. Meteor. Soc., 103, E2804E2826, https://doi.org/10.1175/BAMS-D-22-0111.1.

    • Search Google Scholar
    • Export Citation
  • Dikshit, A., and B. Pradhan, 2021: Interpretable and explainable AI (XAI) model for spatial drought prediction. Sci. Total Environ., 801, 149797, https://doi.org/10.1016/j.scitotenv.2021.149797.

    • Search Google Scholar
    • Export Citation
  • Di Virgilio, G., and Coauthors, 2019: Evaluating reanalysis-driven CORDEX regional climate models over Australia: Model performance and errors. Climate Dyn., 53, 29853005, https://doi.org/10.1007/s00382-019-04672-w.

    • Search Google Scholar
    • Export Citation
  • Di Virgilio, G., J. P. Evans, A. Di Luca, M. R. Grose, V. Round, and M. Thatcher, 2020: Realised added value in dynamical downscaling of Australian climate change. Climate Dyn., 54, 46754692, https://doi.org/10.1007/s00382-020-05250-1.

    • Search Google Scholar
    • Export Citation
  • Dixon, K. W., J. R. Lanzante, M. J. Nath, K. Hayhoe, A. Stoner, A. Radhakrishnan, V. Balaji, and C. F. Gaitán, 2016: Evaluating the stationarity assumption in statistically downscaled climate projections: Is past performance an indicator of future results? Climatic Change, 135, 395408, https://doi.org/10.1007/s10584-016-1598-0.

    • Search Google Scholar
    • Export Citation
  • Doshi-Velez, F., and B. Kim, 2017: Towards a rigorous science of interpretable machine learning. arXiv, 1702.08608v2, https://doi.org/10.48550/arXiv.1702.08608.

  • Doury, A., S. Somot, S. Gadat, A. Ribes, and L. Corre, 2023: Regional climate model emulator based on deep learning: Concept and first evaluation of a novel hybrid downscaling approach. Climate Dyn., 60, 17511779, https://doi.org/10.1007/s00382-022-06343-9.

    • Search Google Scholar
    • Export Citation
  • Dujardin, J., and M. Lehning, 2022: Wind-Topo: Downscaling near-surface wind fields to high-resolution topography in highly complex terrain with deep learning. Quart. J. Roy. Meteor. Soc., 148, 13681388, https://doi.org/10.1002/qj.4265.

    • Search Google Scholar
    • Export Citation
  • Ebert-Uphoff, I., and K. Hilburn, 2020: Evaluation, tuning, and interpretation of neural networks for working with images in meteorological applications. Bull. Amer. Meteor. Soc., 101, E2149E2170, https://doi.org/10.1175/BAMS-D-20-0097.1.

    • Search Google Scholar
    • Export Citation
  • Evans, J. P., F. Ji, C. Lee, P. Smith, D. Argüeso, and L. Fita, 2014: Design of a regional climate modelling projection ensemble experiment—NARCliM. Geosci. Model Dev., 7, 621629, https://doi.org/10.5194/gmd-7-621-2014.

    • Search Google Scholar
    • Export Citation
  • Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 19371958, https://doi.org/10.5194/gmd-9-1937-2016.

    • Search Google Scholar
    • Export Citation
  • Feng, D., H. Beck, K. Lawson, and C. Shen, 2023: The suitability of differentiable, physics-informed machine learning hydrologic models for ungauged regions and climate change impact assessment. Hydrol. Earth Syst. Sci., 27, 23572373, https://doi.org/10.5194/hess-27-2357-2023.

    • Search Google Scholar
    • Export Citation
  • Feser, F., B. Rockel, H. von Storch, J. Winterfeldt, and M. Zahn, 2011: Regional climate models add value to global model data: A review and selected examples. Bull. Amer. Meteor. Soc., 92, 11811192, https://doi.org/10.1175/2011BAMS3061.1.

    • Search Google Scholar
    • Export Citation
  • Fowler, H. J., S. Blenkinsop, and C. Tebaldi, 2007: Linking climate change modelling to impacts studies: Recent advances in downscaling techniques for hydrological modelling. Int. J. Climatol., 27, 15471578, https://doi.org/10.1002/joc.1556.

    • Search Google Scholar
    • Export Citation
  • Gagne, D. J. II, S. E. Haupt, D. W. Nychka, and G. Thompson, 2019: Interpretable deep learning for spatial analysis of severe hailstorms. Mon. Wea. Rev., 147, 28272845, https://doi.org/10.1175/MWR-D-18-0316.1.

    • Search Google Scholar
    • Export Citation
  • Gaitan, C. F., W. W. Hsieh, and A. J. Cannon, 2014: Comparison of statistically downscaled precipitation in terms of future climate indices and daily variability for southern Ontario and Quebec, Canada. Climate Dyn., 43, 32013217, https://doi.org/10.1007/s00382-014-2098-4.

    • Search Google Scholar
    • Export Citation
  • Gao, Z., and Coauthors, 2023: PreDiff: Precipitation nowcasting with latent diffusion models. arXiv, 2307.10422v1, https://doi.org/10.48550/arXiv.2307.10422.

  • Geiss, A., and J. C. Hardin, 2020: Radar super resolution using a deep convolutional neural network. J. Atmos. Oceanic Technol., 37, 21972207, https://doi.org/10.1175/JTECH-D-20-0074.1.

    • Search Google Scholar
    • Export Citation
  • Geiss, A., and J. C. Hardin, 2023: Strictly enforcing invertibility and conservation in CNN-based super resolution for scientific datasets. Artif. Intell. Earth Syst., 2, e210012, https://doi.org/10.1175/AIES-D-21-0012.1.

    • Search Google Scholar
    • Export Citation
  • Geiss, A., S. J. Silva, and J. C. Hardin, 2022: Downscaling atmospheric chemistry simulations with physically consistent deep learning. Geosci. Model Dev., 15, 66776694, https://doi.org/10.5194/gmd-15-6677-2022.

    • Search Google Scholar
    • Export Citation
  • Gensini, V. A., A. M. Haberlie, and W. S. Ashley, 2023: Convection-permitting simulations of historical and possible future climate over the contiguous United States. Climate Dyn., 60, 109126, https://doi.org/10.1007/s00382-022-06306-0.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., W. E. Chapman, A. Altinok, L. Delle Monache, M. J. DeFlorio, and D. E. Waliser, 2021: Training machine learning models on climate model output yields skillful interpretable seasonal precipitation forecasts. Commun. Earth Environ., 2, 159, https://doi.org/10.1038/s43247-021-00225-4.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., D. Stone, M. Thatcher, A. Broadbent, S. Dean, S. M. Rosier, S. Stuart, and A. Sood, 2023: High-resolution CCAM simulations over New Zealand and the South Pacific for the detection and attribution of weather extremes. J. Geophys. Res. Atmos., 128, e2023JD038530, https://doi.org/10.1029/2023JD038530.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., N. Rampal, S. M. Dean, and O. Morgenstern, 2024: Storylines for future projections of precipitation over New Zealand in CMIP6 models. J. Geophys. Res. Atmos., 129, e2023JD039664, https://doi.org/10.1029/2023JD039664.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., and W. J. Gutowski Jr., 2015: Regional dynamical downscaling and the CORDEX initiative. Annu. Rev. Environ. Resour., 40, 467490, https://doi.org/10.1146/annurev-environ-102014-021217.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. S. Brodeur, and G. T. Bates, 1994: Regional climate change scenarios over the United States produced with a nested regional climate model. J. Climate, 7, 375399, https://doi.org/10.1175/1520-0442(1994)007<0375:RCCSOT>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. Jones, and G. R. Asrar, 2009: Addressing climate information needs at the regional level: The CORDEX framework. WMO Bull., 58, 175–183.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. Torma, E. Coppola, N. Ban, C. Schär, and S. Somot, 2016: Enhanced summer convective rainfall at Alpine high elevations in response to climate warming. Nat. Geosci., 9, 584589, https://doi.org/10.1038/ngeo2761.

    • Search Google Scholar
    • Export Citation
  • Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 12031211, https://doi.org/10.1175/1520-0450(1972)011<1203:TUOMOS>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • González-Abad, J., J. Baño-Medina, and J. M. Gutiérrez, 2023a: Using explainability to inform statistical downscaling based on deep learning beyond standard validation approaches. J. Adv. Model. Earth Syst., 15, e2023MS003641, https://doi.org/10.1029/2023MS003641.

    • Search Google Scholar
    • Export Citation
  • González-Abad, J., Á. Hernández-García, P. Harder, D. Rolnick, and J. M. Gutiérrez, 2023b: Multi-variable hard physical constraints for climate model downscaling. arXiv, 2308.01868v1, https://doi.org/10.48550/arXiv.2308.01868.

  • Goodfellow, I., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014: Generative adversarial nets. Commun. ACM, 63, 139144, https://doi.org/10.1145/3422622.

    • Search Google Scholar
    • Export Citation
  • Goodfellow, I., Y. Bengio, and A. Courville, 2016: Deep Learning. MIT Press, 800 pp.

  • Groenke, B., L. Madaus, and C. Monteleoni, 2020: ClimAlign: Unsupervised statistical downscaling of climate variables via normalizing flows. Proc. 10th Int. Conf. on Climate Informatics, Online, Association for Computing Machinery, 60–66, https://doi.org/10.1145/3429309.3429318.

  • Gutiérrez, J. M., D. San-Martín, S. Brands, R. Manzanas, and S. Herrera, 2013: Reassessing statistical downscaling techniques for their robust application under climate change conditions. J. Climate, 26, 171188, https://doi.org/10.1175/JCLI-D-11-00687.1.

    • Search Google Scholar
    • Export Citation
  • Gutiérrez, J. M., and Coauthors, 2019: An intercomparison of a large ensemble of statistical downscaling methods over Europe: Results from the VALUE perfect predictor cross-validation experiment. Int. J. Climatol., 39, 37503785, https://doi.org/10.1002/joc.5462.

    • Search Google Scholar
    • Export Citation
  • Gutiérrez, J. M., and Coauthors, 2022: The future scientific challenges for CORDEX: Empirical statistical downscaling (ESD). 11 pp., https://cordex.org/wp-content/uploads/2022/08/White-Paper-ESD.pdf.

  • Haarsma, R. J., and Coauthors, 2016: High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6. Geosci. Model Dev., 9, 41854208, https://doi.org/10.5194/gmd-9-4185-2016.

    • Search Google Scholar
    • Export Citation
  • Ham, Y.-G., J.-H. Kim, and J.-J. Luo, 2019: Deep learning for multi-year ENSO forecasts. Nature, 573, 568572, https://doi.org/10.1038/s41586-019-1559-7.

    • Search Google Scholar
    • Export Citation
  • Hannachi, A., I. T. Jolliffe, and D. B. Stephenson, 2007: Empirical orthogonal functions and related techniques in atmospheric science: A review. Int. J. Climatol., 27, 11191152, https://doi.org/10.1002/joc.1499.

    • Search Google Scholar
    • Export Citation
  • Harder, P., D. Watson-Parris, P. Stier, D. Strassel, N. R. Gauger, and J. Keuper, 2022: Physics-informed learning of aerosol microphysics. Environ. Data Sci., 1, e20, https://doi.org/10.1017/eds.2022.22.

    • Search Google Scholar
    • Export Citation
  • Harder, P., V. Ramesh, A. Hernandez-Garcia, Q. Yang, P. Sattigeri, D. Szwarcman, C. Watson, and D. Rolnick, 2023: Physics-constrained deep learning for climate downscaling. arXiv, 2208.05424v1, https://doi.org/10.48550/arXiv.2208.05424.

  • Harris, L., A. T. T. McRae, M. Chantry, P. D. Dueben, and T. N. Palmer, 2022: A generative deep learning approach to stochastic downscaling of precipitation forecasts. arXiv, 2204.02028v1, https://doi.org/10.48550/arXiv.2204.02028.

  • Hatanaka, Y., Y. Glaser, G. Galgon, G. Torri, and P. Sadowski, 2023: Diffusion models for high-resolution solar forecasts. arXiv, 2302.00170v1, https://doi.org/10.48550/arXiv.2302.00170.

  • Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 10951108, https://doi.org/10.1175/2009BAMS2607.1.

    • Search Google Scholar
    • Export Citation
  • Hawkins, E., and R. Sutton, 2011: The potential to narrow uncertainty in projections of regional precipitation change. Climate Dyn., 37, 407418, https://doi.org/10.1007/s00382-010-0810-6.

    • Search Google Scholar
    • Export Citation
  • He, K., X. Zhang, S. Ren, and J. Sun, 2016: Deep residual learning for image recognition. 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Institute of Electrical and Electronics Engineers, 770–778, https://doi.org/10.1109/CVPR.2016.90.

  • Hernanz, A., J. A. García-Valero, M. Domínguez, and E. Rodríguez-Camino, 2022a: A critical view on the suitability of machine learning techniques to downscale climate change projections: Illustration for temperature with a toy experiment. Atmos. Sci. Lett., 23, e1087, https://doi.org/10.1002/asl.1087.

    • Search Google Scholar
    • Export Citation
  • Hernanz, A., J. A. García-Valero, M. Domínguez, and E. Rodríguez-Camino, 2022b: Evaluation of statistical downscaling methods for climate change projections over Spain: Future conditions with pseudo reality (transferability experiment). Int. J. Climatol., 42, 39874000, https://doi.org/10.1002/joc.7464.

    • Search Google Scholar
    • Export Citation
  • Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 19992049, https://doi.org/10.1002/qj.3803.

    • Search Google Scholar
    • Export Citation
  • Hess, P., M. Drüke, S. Petri, F. M. Strnad, and N. Boers, 2022: Physically constrained generative adversarial networks for improving precipitation fields from Earth system models. arXiv, 2209.07568v1, https://doi.org/10.48550/arXiv.2209.07568.

  • Ho, J., C. Saharia, W. Chan, D. J. Fleet, M. Norouzi, and T. Salimans, 2021: Cascaded diffusion models for high fidelity image generation. arXiv, 2106.15282v3, https://doi.org/10.48550/arXiv.2106.15282.

  • Hobeichi, S., N. Nishant, Y. Shao, G. Abramowitz, A. Pitman, S. Sherwood, C. Bishop, and S. Green, 2023: Using machine learning to cut the cost of dynamical downscaling. Earth’s Future, 11, e2022EF003291, https://doi.org/10.1029/2022EF003291.

    • Search Google Scholar
    • Export Citation
  • Höhlein, K., M. Kern, T. Hewson, and R. Westermann, 2020: A comparative study of convolutional neural network models for wind field downscaling. Meteor. Appl., 27, e1961, https://doi.org/10.1002/met.1961.

    • Search Google Scholar
    • Export Citation
  • Holden, P. B., N. R. Edwards, P. H. Garthwaite, and R. D. Wilkinson, 2015: Emulation and interpretation of high-dimensional climate model outputs. J. Appl. Stat., 42, 20382055, https://doi.org/10.1080/02664763.2015.1016412.

    • Search Google Scholar
    • Export Citation
  • Hoogewind, K. A., M. E. Baldwin, and R. J. Trapp, 2017: The impact of climate change on hazardous convective weather in the United States: Insight from high-resolution dynamical downscaling. J. Climate, 30, 10 08110 100, https://doi.org/10.1175/JCLI-D-16-0885.1.

    • Search Google Scholar
    • Export Citation
  • Hutengs, C., and M. Vohland, 2016: Downscaling land surface temperatures at regional scales with random forest regression. Remote Sens. Environ., 178, 127141, https://doi.org/10.1016/j.rse.2016.03.006.

    • Search Google Scholar
    • Export Citation
  • Iotti, M., P. Davini, J. von Hardenberg, and G. Zappa, 2022: Downscaling of precipitation over the Taiwan region by a conditional generative adversarial network. International Symposium on Grids & Clouds 2022 (ISGC2022), Vol. 415, Proceedings of Science, 004, https://doi.org/10.22323/1.415.0004.

  • Iqbal, T., and H. Ali, 2018: Generative adversarial network for medical images (MI-GAN). J. Med. Syst., 42, 231, https://doi.org/10.1007/s10916-018-1072-9.

    • Search Google Scholar
    • Export Citation
  • Irrgang, C., N. Boers, M. Sonnewald, E. A. Barnes, C. Kadow, J. Staneva, and J. Saynisch-Wagner, 2021: Towards neural Earth system modelling by integrating artificial intelligence in Earth system science. Nat. Mach. Intell., 3, 667674, https://doi.org/10.1038/s42256-021-00374-3.

    • Search Google Scholar
    • Export Citation
  • Isola, P., J. -Y. Zhu, T. Zhou, and A. A. Efros, 2017: Image-to-image translation with conditional adversarial networks. 2017 IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, HI, IEEE, 59675976, https://doi.org/10.1109/CVPR.2017.632.

  • Isphording, R. N., L. V. Alexander, M. Bador, D. Green, J. P. Evans, and S. Wales, 2024: A standardized benchmarking framework to assess downscaled precipitation simulations. J. Climate, 37, 10891110, https://doi.org/10.1175/JCLI-D-23-0317.1.

    • Search Google Scholar
    • Export Citation
  • Izumi, T., M. Amagasaki, K. Ishida, and M. Kiyama, 2022: Super-resolution of sea surface temperature with convolutional neural network- and generative adversarial network-based methods. J. Water Climate Change, 13, 16731683, https://doi.org/10.2166/wcc.2022.291.

    • Search Google Scholar
    • Export Citation
  • Jacob, D., and Coauthors, 2020: Regional climate downscaling over Europe: Perspectives from the EURO-CORDEX community. Reg. Environ. Change, 20, 51, https://doi.org/10.1007/s10113-020-01606-9.

    • Search Google Scholar
    • Export Citation
  • Jiang, Y., K. Yang, C. Shao, X. Zhou, L. Zhao, Y. Chen, and H. Wu, 2021: A downscaling approach for constructing high-resolution precipitation dataset over the Tibetan Plateau from ERA5 reanalysis. Atmos. Res., 256, 105574, https://doi.org/10.1016/j.atmosres.2021.105574.

    • Search Google Scholar
    • Export Citation
  • Jones, R. G., J. M. Murphy, and M. Noguer, 1995: Simulation of climate change over Europe using a nested regional-climate model. I: Assessment of control climate, including sensitivity to location of lateral boundaries. Quart. J. Roy. Meteor. Soc., 121, 14131449, https://doi.org/10.1002/qj.49712152610.

    • Search Google Scholar
    • Export Citation
  • Kashinath, K., and Coauthors, 2021: Physics-informed machine learning: Case studies for weather and climate modelling. Philos. Trans. Roy. Soc., A379, 20200093, https://doi.org/10.1098/rsta.2020.0093.

    • Search Google Scholar
    • Export Citation
  • Kay, J. E., and Coauthors, 2015: The Community Earth System Model (CESM) Large Ensemble Project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 13331349, https://doi.org/10.1175/BAMS-D-13-00255.1.

    • Search Google Scholar
    • Export Citation
  • Kendon, E. J., E. M. Fischer, and C. J. Short, 2023: Variability conceals emerging trend in 100yr projections of UK local hourly rainfall extremes. Nat. Commun., 14, 1133, https://doi.org/10.1038/s41467-023-36499-9.

    • Search Google Scholar
    • Export Citation
  • Khan, S., M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, 2022: Transformers in vision: A survey. ACM Comput. Surv., 54 (10s), 141, https://doi.org/10.1145/3505244.

    • Search Google Scholar
    • Export Citation
  • Kochkov, D., and Coauthors, 2023: Neural general circulation models. arXiv, 2311.07222v2, https://doi.org/10.48550/arXiv.2311.07222.

  • Kumar, A., M. Chen, L. Zhang, W. Wang, Y. Xue, C. Wen, L. Marx, and B. Huang, 2012: An analysis of the nonstationarity in the bias of sea surface temperature forecasts for the NCEP Climate Forecast System (CFS) version 2. Mon. Wea. Rev., 140, 30033016, https://doi.org/10.1175/MWR-D-11-00335.1.

    • Search Google Scholar
    • Export Citation
  • Langousis, A., and V. Kaleris, 2014: Statistical framework to simulate daily rainfall series conditional on upper-air predictor variables. Water Resour. Res., 50, 39073932, https://doi.org/10.1002/2013WR014936.

    • Search Google Scholar
    • Export Citation
  • Langousis, A., A. Mamalakis, R. Deidda, and M. Marrocu, 2016: Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results. Water Resour. Res., 52, 471494, https://doi.org/10.1002/2015WR017556.

    • Search Google Scholar
    • Export Citation
  • Lanzante, J. R., K. W. Dixon, M. J. Nath, C. E. Whitlock, and D. Adams-Smith, 2018: Some pitfalls in statistical downscaling of future climate. Bull. Amer. Meteor. Soc., 99, 791803, https://doi.org/10.1175/BAMS-D-17-0046.1.

    • Search Google Scholar
    • Export Citation
  • LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436444, https://doi.org/10.1038/nature14539.

  • Legasa, M. N., S. Thao, M. Vrac, and R. Manzanas, 2023: Assessing three perfect prognosis methods for statistical downscaling of climate change precipitation scenarios. Geophys. Res. Lett., 50, e2022GL102525, https://doi.org/10.1029/2022GL102525.

    • Search Google Scholar
    • Export Citation
  • Leinonen, J., D. Nerini, and A. Berne, 2021: Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. IEEE Trans. Geosci. Remote Sens., 59, 72117223, https://doi.org/10.1109/TGRS.2020.3032790.

    • Search Google Scholar
    • Export Citation
  • Leinonen, J., U. Hamann, D. Nerini, U. Germann, and G. Franch, 2023: Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification. arXiv, 2304.12891v1, https://doi.org/10.48550/arXiv.2304.12891.

  • Lempitsky, V., A. Vedaldi, and D. Ulyanov, 2018: Deep image prior. 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, Institute of Electrical and Electronics Engineers, 9446–9454, https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00984.

  • Linardatos, P., V. Papastefanopoulos, and S. Kotsiantis, 2020: Explainable AI: A review of machine learning interpretability methods. Entropy, 23, 18, https://doi.org/10.3390/e23010018.

    • Search Google Scholar
    • Export Citation
  • Liu, C., and Coauthors, 2017: Continental-scale convection-permitting modeling of the current and future climate of North America. Climate Dyn., 49, 7195, https://doi.org/10.1007/s00382-016-3327-9.

    • Search Google Scholar
    • Export Citation
  • Liu, G., R. Zhang, R. Hang, L. Ge, C. Shi, and Q. Liu, 2023: Statistical downscaling of temperature distributions in southwest China by using terrain-guided attention network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 16, 16781690, https://doi.org/10.1109/JSTARS.2023.3239109.

    • Search Google Scholar
    • Export Citation
  • Liu, J., Y. Sun, K. Ren, Y. Zhao, K. Deng, and L. Wang, 2022: A spatial downscaling approach for WindSat satellite sea surface wind based on generative adversarial networks and dual learning scheme. Remote Sens., 14, 769, https://doi.org/10.3390/rs14030769.

    • Search Google Scholar
    • Export Citation
  • Liu, Y., A. R. Ganguly, and J. Dy, 2020: Climate downscaling using YNet: A deep convolutional network with skip connections and fusion. Proc. 26th ACM SIGKDD Int. Conf. on Knowledge Discovery & Data Mining, Online, Association for Computing Machinery, 31453153, https://doi.org/10.1145/3394486.3403366.

  • Liu, Y., K. Duffy, J. G. Dy, and A. R. Ganguly, 2023: Explainable deep learning for insights in El Niño and river flows. Nat. Commun., 14, 339, https://doi.org/10.1038/s41467-023-35968-5.

    • Search Google Scholar
    • Export Citation
  • Lloyd, E. A., M. Bukovsky, and L. O. Mearns, 2021: An analysis of the disagreement about added value by regional climate models. Synthese, 198, 11 64511 672, https://doi.org/10.1007/s11229-020-02821-x.

    • Search Google Scholar
    • Export Citation
  • Lopez-Gomez, I., A. McGovern, S. Agrawal, and J. Hickey, 2023: Global extreme heat forecasting using neural weather models. Artif. Intell. Earth Syst., 2, e220035, https://doi.org/10.1175/AIES-D-22-0035.1.

    • Search Google Scholar
    • Export Citation
  • Maher, N., and Coauthors, 2019: The Max Planck Institute Grand Ensemble: Enabling the exploration of climate system variability. J. Adv. Model. Earth Syst., 11, 20502069, https://doi.org/10.1029/2019MS001639.

    • Search Google Scholar
    • Export Citation
  • Maher, N., S. Milinski, and R. Ludwig, 2021: Large ensemble climate model simulations: Introduction, overview, and future prospects for utilising multiple types of large ensemble. Earth Syst. Dyn., 12, 401418, https://doi.org/10.5194/esd-12-401-2021.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., J.-Y. Yu, J. T. Randerson, A. AghaKouchak, and E. Foufoula-Georgiou, 2018: A new interhemispheric teleconnection increases predictability of winter precipitation in southwestern US. Nat. Commun., 9, 2332, https://doi.org/10.1038/s41467-018-04722-7.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., E. A. Barnes, and I. Ebert-Uphoff, 2022a: Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience. Artif. Intell. Earth Syst., 1, e220012, https://doi.org/10.1175/AIES-D-22-0012.1.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., I. Ebert-Uphoff, and E. A. Barnes, 2022b: Neural network attribution methods for problems in geoscience: A novel synthetic benchmark dataset. Environ. Data Sci., 1, e8, https://doi.org/10.1017/eds.2022.7.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., E. A. Barnes, and I. Ebert-Uphoff, 2023: Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience. Artif. Intell. Earth Syst., 2, e220058, https://doi.org/10.1175/AIES-D-22-0058.1.

    • Search Google Scholar
    • Export Citation
  • Manzanas, R., L. Fiwa, C. Vanya, H. Kanamaru, and J. M. Gutiérrez, 2020: Statistical downscaling or bias adjustment? A case study involving implausible climate change projections of precipitation in Malawi. Climatic Change, 162, 14371453, https://doi.org/10.1007/s10584-020-02867-3.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., 2016: Bias correcting climate change simulations—A critical review. Curr. Climate Change Rep., 2, 211220, https://doi.org/10.1007/s40641-016-0050-x.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., and Coauthors, 2010: Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user. Rev. Geophys., 48, RG3003, https://doi.org/10.1029/2009RG000314.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., and Coauthors, 2015: VALUE: A framework to validate downscaling approaches for climate change studies. Earth’s Future, 3 (1), 114, https://doi.org/10.1002/2014EF000259.

    • Search Google Scholar
    • Export Citation
  • Mardani, M., and Coauthors, 2023: Generative residual diffusion modeling for km-scale atmospheric downscaling. arXiv, 2309.15214v3, https://doi.org/10.48550/arXiv.2309.15214.

  • Marotzke, J., and Coauthors, 2017: Climate research must sharpen its view. Nat. Climate Change, 7, 8991, https://doi.org/10.1038/nclimate3206.

    • Search Google Scholar
    • Export Citation
  • Materia, S., and Coauthors, 2023: Artificial intelligence for prediction of climate extremes: State of the art, challenges and future perspectives. arXiv, 2310.01944v1, https://doi.org/10.48550/arXiv.2310.01944.

  • McGovern, A., R. Lagerquist, D. J. Gagne II, G. E. Jergensen, K. L. Elmore, C. R. Homeyer, and T. Smith, 2019: Making the black box more transparent: Understanding the physical implications of machine learning. Bull. Amer. Meteor. Soc., 100, 21752199, https://doi.org/10.1175/BAMS-D-18-0195.1.

    • Search Google Scholar
    • Export Citation
  • McGovern, A., R. J. Chase, M. Flora, D. J. Gagne II, R. Lagerquist, C. K. Potvin, N. Snook, and E. Loken, 2023: A review of machine learning for convective weather. Artif. Intell. Earth Syst., 2, e220077, https://doi.org/10.1175/AIES-D-22-0077.1.

    • Search Google Scholar
    • Export Citation
  • Meinshausen, M., and Coauthors, 2020: The shared socio-economic pathway (SSP) greenhouse gas concentrations and their extensions to 2500. Geosci. Model Dev., 13, 35713605, https://doi.org/10.5194/gmd-13-3571-2020.

    • Search Google Scholar
    • Export Citation
  • Miao, Q., B. Pan, H. Wang, K. Hsu, and S. Sorooshian, 2019: Improving monsoon precipitation prediction using combined convolutional and long short term memory neural network. Water, 11, 977, https://doi.org/10.3390/w11050977.

    • Search Google Scholar
    • Export Citation
  • Miralles, O., D. Steinfeld, O. Martius, and A. C. Davison, 2022: Downscaling of historical wind fields over Switzerland using generative adversarial networks. Artif. Intell. Earth Syst., 1, e220018, https://doi.org/10.1175/AIES-D-22-0018.1.

    • Search Google Scholar
    • Export Citation
  • Mirza, M., and S. Osindero, 2014: Conditional generative adversarial nets. arXiv, 1411.1784v1, https://doi.org/10.48550/arXiv.1411.1784.

  • Mishra Sharma, S. C., and A. Mitra, 2022: ResDeepD: A residual super-resolution network for deep downscaling of daily precipitation over India. Environ. Data Sci., 1, e19, https://doi.org/10.1017/eds.2022.23.

    • Search Google Scholar
    • Export Citation
  • Molina, M. J., D. J. Gagne, and A. F. Prein, 2021: A benchmark to test generalization capabilities of deep learning methods to classify severe convective storms in a changing climate. Earth Space Sci., 8, e2020EA001490, https://doi.org/10.1029/2020EA001490.

    • Search Google Scholar
    • Export Citation
  • Molina, M. J., and Coauthors, 2023: A review of recent and emerging machine learning applications for climate variability and weather phenomena. Artif. Intell. Earth Syst., 2, 220086, https://doi.org/10.1175/AIES-D-22-0086.1.

    • Search Google Scholar
    • Export Citation
  • Moss, R. H., and Coauthors, 2010: The next generation of scenarios for climate change research and assessment. Nature, 463, 747756, https://doi.org/10.1038/nature08823.

    • Search Google Scholar
    • Export Citation
  • Nguyen, T., J. Brandstetter, A. Kapoor, J. K. Gupta, and A. Grover, 2023: ClimaX: A foundation model for weather and climate. arXiv, 2301.10343v5, https://doi.org/10.48550/arXiv.2301.10343.

  • Nishant, N., S. Hobeichi, S. Sherwood, G. Abramowitz, Y. Shao, C. Bishop, and A. Pitman, 2023: Comparison of a novel machine learning approach with dynamical downscaling for Australian precipitation. Environ. Res. Lett., 18, 094006, https://doi.org/10.1088/1748-9326/ace463.

    • Search Google Scholar
    • Export Citation
  • Norris, J., G. Chen, and J. D. Neelin, 2019: Thermodynamic versus dynamic controls on extreme precipitation in a warming climate from the Community Earth System Model Large Ensemble. J. Climate, 32, 10251045, https://doi.org/10.1cdel175/JCLI-D-18-0302.1.

    • Search Google Scholar
    • Export Citation
  • Nourani, V., K. Khodkar, A. H. Baghanam, S. A. Kantoush, and I. Demir, 2023: Uncertainty quantification of deep learning–based statistical downscaling of climatic parameters. J. Appl. Meteor. Climatol., 62, 12231242, https://doi.org/10.1175/JAMC-D-23-0057.1.

    • Search Google Scholar
    • Export Citation
  • Oyama, N., N. N. Ishizaki, S. Koide, and H. Yoshida, 2023: Deep generative model super-resolves spatially correlated multiregional climate data. Sci. Rep., 13, 5992, https://doi.org/10.1038/s41598-023-32947-0.

    • Search Google Scholar
    • Export Citation
  • Pan, B., K. Hsu, A. AghaKouchak, and S. Sorooshian, 2019: Improving precipitation estimation using convolutional neural network. Water Resour. Res., 55, 23012321, https://doi.org/10.1029/2018WR024090.

    • Search Google Scholar
    • Export Citation
  • Pan, S. J., and Q. Yang, 2010: A survey on transfer learning. IEEE Trans. Knowl. Data Eng., 22, 13451359, https://doi.org/10.1109/TKDE.2009.191.

    • Search Google Scholar
    • Export Citation
  • Pegion, K., E. J. Becker, and B. P. Kirtman, 2022: Understanding predictability of daily southeast U.S. precipitation using explainable machine learning. Artif. Intell. Earth Syst., 1, e220011, https://doi.org/10.1175/AIES-D-22-0011.1.

    • Search Google Scholar
    • Export Citation
  • Perkins-Kirkpatrick, S. E., E. M. Fischer, O. Angélil, and P. B. Gibson, 2017: The influence of internal climate variability on heatwave frequency trends. Environ. Res. Lett., 12, 044005, https://doi.org/10.1088/1748-9326/aa63fe.

    • Search Google Scholar
    • Export Citation
  • Prein, A. F., and Coauthors, 2015: A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges. Rev. Geophys., 53, 323361, https://doi.org/10.1002/2014RG000475.

    • Search Google Scholar
    • Export Citation
  • Price, I., and S. Rasp, 2022: Increasing the accuracy and resolution of precipitation forecasts using deep generative models. Proc. 25th Int. Conf. on Artificial Intelligence and Statistics, Valencia, Spain, PMLR, 10 555–10 571, https://proceedings.mlr.press/v151/price22a.html.

  • Quesada-Chacón, D., K. Barfus, and C. Bernhofer, 2022: Repeatable high-resolution statistical downscaling through deep learning. Geosci. Model Dev., 15, 73537370, https://doi.org/10.5194/gmd-15-7353-2022.

    • Search Google Scholar
    • Export Citation
  • Rampal, N., P. B. Gibson, A. Sood, S. Stuart, N. C. Fauchereau, C. Brandolino, B. Noll, and T. Meyers, 2022a: High-resolution downscaling with interpretable deep learning: Rainfall extremes over New Zealand. Wea. Climate Extremes, 38, 100525, https://doi.org/10.1016/j.wace.2022.100525.

    • Search Google Scholar
    • Export Citation
  • Rampal, N., T. Shand, A. Wooler, and C. Rautenbach, 2022b: Interpretable deep learning applied to rip current detection and localization. Remote Sens., 14, 6048, https://doi.org/10.3390/rs14236048.

    • Search Google Scholar
    • Export Citation
  • Rasmussen, R., and C. Liu, 2017: High resolution WRF simulations of the current and future climate of North America. National Center for Atmospheric Research Computational and Information Systems Laboratory Research Data Archive, accessed 17 December 2023, https://doi.org/10.5065/D6V40SXP.

  • Rasmussen, R., and Coauthors, 2023: CONUS404: The NCAR–USGS 4-km long-term regional hydroclimate reanalysis over the CONUS. Bull. Amer. Meteor. Soc., 104, E1382E1408, https://doi.org/10.1175/BAMS-D-21-0326.1.

    • Search Google Scholar
    • Export Citation
  • Rasp, S., and N. Thuerey, 2021: Data-driven medium-range weather prediction with a Resnet pretrained on climate simulations: A new model for WeatherBench. J. Adv. Model. Earth Syst., 13, e2020MS002405, https://doi.org/10.1029/2020MS002405.

    • Search Google Scholar
    • Export Citation
  • Rasp, S., M. S. Pritchard, and P. Gentine, 2018: Deep learning to represent subgrid processes in climate models. Proc. Natl. Acad. Sci., 115, 9684–9689, https://doi.org/10.1073/pnas.1810286115.

  • Ravuri, S., and Coauthors, 2021: Skilful precipitation nowcasting using deep generative models of radar. Nature, 597, 672677, https://doi.org/10.1038/s41586-021-03854-z.

    • Search Google Scholar
    • Export Citation
  • Reddy, P. J., R. Matear, J. Taylor, M. Thatcher, and M. Grose, 2023: A precipitation downscaling method using a super-resolution deconvolution neural network with step orography. Environ. Data Sci., 2, e17, https://doi.org/10.1017/eds.2023.18.

    • Search Google Scholar
    • Export Citation
  • Renwick, J. A., A. B. Mullan, and A. Porteous, 2009: Statistical downscaling of New Zealand climate. Wea. Climate, 29, 2444, https://doi.org/10.2307/26169704.

    • Search Google Scholar
    • Export Citation
  • Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

  • Ruder, S., 2017: An overview of multi-task learning in deep neural networks. arXiv, 1706.05098v1, https://doi.org/10.48550/arXiv.1706.05098.

  • Rudin, C., 2019: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1, 206215, https://doi.org/10.1038/s42256-019-0048-x.

    • Search Google Scholar
    • Export Citation
  • Rummukainen, M., 2016: Added value in regional climate modeling. Wiley Interdiscip. Rev.: Climate Change, 7, 145159, https://doi.org/10.1002/wcc.378.

    • Search Google Scholar
    • Export Citation
  • Saha, A., and S. Ravela, 2022: Downscaling extreme rainfall using physical-statistical generative adversarial learning. arXiv, 2212.01446v1, https://doi.org/10.48550/arXiv.2212.01446.

  • Schär, C., and Coauthors, 2020: Kilometer-scale climate models: Prospects and challenges. Bull. Amer. Meteor. Soc., 101, E567E587, https://doi.org/10.1175/BAMS-D-18-0167.1.

    • Search Google Scholar
    • Export Citation
  • Schmidli, J., C. Frei, and P. L. Vidale, 2006: Downscaling from GCM precipitation: A benchmark for dynamical and statistical downscaling methods. Int. J. Climatol., 26, 679689, https://doi.org/10.1002/joc.1287.

    • Search Google Scholar
    • Export Citation
  • Schmith, T., and Coauthors, 2021: Identifying robust bias adjustment methods for European extreme precipitation in a multi-model pseudo-reality setting. Hydrol. Earth Syst. Sci., 25, 273290, https://doi.org/10.5194/hess-25-273-2021.

    • Search Google Scholar
    • Export Citation
  • Sexton, D. M. H., and Coauthors, 2021: A perturbed parameter ensemble of HadGEM3-GC3.05 coupled model projections: Part 1: Selecting the parameter combinations. Climate Dyn., 56, 33953436, https://doi.org/10.1007/s00382-021-05709-9.

    • Search Google Scholar
    • Export Citation
  • Sha, Y., D. J. Gagne II, G. West, and R. Stull, 2020: Deep-learning-based gridded downscaling of surface meteorological variables in complex terrain. Part II: Daily precipitation. J. Appl. Meteor. Climatol., 59, 20752092, https://doi.org/10.1175/JAMC-D-20-0058.1.

    • Search Google Scholar
    • Export Citation
  • Sharifi, E., B. Saghafian, and R. Steinacker, 2019: Downscaling satellite precipitation estimates with multiple linear regression, artificial neural networks, and spline interpolation techniques. J. Geophys. Res. Atmos., 124, 789805, https://doi.org/10.1029/2018JD028795.

    • Search Google Scholar
    • Export Citation
  • Shen, C., and Coauthors, 2023: Differentiable modelling to unify machine learning and physical models for geosciences. Nat. Rev. Earth Environ., 4, 552567, https://doi.org/10.1038/s43017-023-00450-9.

    • Search Google Scholar
    • Export Citation
  • Simpson, I. R., and Coauthors, 2023: The CESM2 single-forcing large ensemble and comparison to CESM1: Implications for experimental design. J. Climate, 36, 56875711, https://doi.org/10.1175/JCLI-D-22-0666.1.

    • Search Google Scholar
    • Export Citation
  • Solman, S., D. Jacob, A. Frigon, C. Teichmann, and M. Rixen, 2021: The future scientific challenges for CORDEX. CORDEX, 11 pp., https://cordex.org/wp-content/uploads/2021/05/The-future-of-CORDEX-MAY-17-2021-1.pdf.

  • Sonnewald, M., and R. Lguensat, 2021: Revealing the impact of global heating on North Atlantic circulation using transparent machine learning. J. Adv. Model. Earth Syst., 13, e2021MS002496, https://doi.org/10.1029/2021MS002496.

    • Search Google Scholar
    • Export Citation
  • Sørland, S. L., C. Schär, D. Lüthi, and E. Kjellström, 2018: Bias patterns and climate change signals in GCM-RCM model chains. Environ. Res. Lett., 13, 074017, https://doi.org/10.1088/1748-9326/aacc77.

    • Search Google Scholar
    • Export Citation
  • Stengel, K., A. Glaws, D. Hettinger, and R. N. King, 2020: Adversarial super-resolution of climatological wind and solar data. Proc. Natl. Acad. Sci. USA, 117, 16 80516 815, https://doi.org/10.1073/pnas.1918964117.

    • Search Google Scholar
    • Export Citation
  • Sun, L., and Y. Lan, 2021: Statistical downscaling of daily temperature and precipitation over China using deep learning neural models: Localization and comparison with other methods. Int. J. Climatol., 41, 11281147, https://doi.org/10.1002/joc.6769.

    • Search Google Scholar
    • Export Citation
  • Tang, J., X. Niu, S. Wang, H. Gao, X. Wang, and J. Wu, 2016: Statistical downscaling and dynamical downscaling of regional climate in China: Present climate evaluations and future climate projections. J. Geophys. Res. Atmos., 121, 21102129, https://doi.org/10.1002/2015JD023977.

    • Search Google Scholar
    • Export Citation
  • Taranu, I. S., S. Somot, A. Alias, J. Boé, and C. Delire, 2023: Mechanisms behind large-scale inconsistencies between regional and global climate model-based projections over Europe. Climate Dyn., 60, 38133838, https://doi.org/10.1007/s00382-022-06540-6.

    • Search Google Scholar
    • Export Citation
  • Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485498, https://doi.org/10.1175/BAMS-D-11-00094.1.

    • Search Google Scholar
    • Export Citation
  • Teutschbein, C., and J. Seibert, 2012: Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods. J. Hydrol., 456–457, 1229, https://doi.org/10.1016/j.jhydrol.2012.05.052.

    • Search Google Scholar
    • Export Citation
  • Teutschbein, C., and J. Seibert, 2013: Is bias correction of regional climate model (RCM) simulations possible for non-stationary conditions? Hydrol. Earth Syst. Sci., 17, 50615077, https://doi.org/10.5194/hess-17-5061-2013.

    • Search Google Scholar
    • Export Citation
  • Themeßl, M. J., A. Gobiet, and G. Heinrich, 2012: Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change, 112, 449468, https://doi.org/10.1007/s10584-011-0224-4.

    • Search Google Scholar
    • Export Citation
  • Toms, B. A., E. A. Barnes, and J. W. Hurrell, 2021: Assessing decadal predictability in an Earth-system model using explainable neural networks. Geophys. Res. Lett., 48, e2021GL093842, https://doi.org/10.1029/2021GL093842.

    • Search Google Scholar
    • Export Citation
  • Ullrich, P. A., and C. M. Zarzycki, 2017: TempestExtremes: A framework for scale-insensitive pointwise feature tracking on unstructured grids. Geosci. Model Dev., 10, 10691090, https://doi.org/10.5194/gmd-10-1069-2017.

    • Search Google Scholar
    • Export Citation
  • Vandal, T., E. Kodra, S. Ganguly, A. Michaelis, R. Nemani, and A. R. Ganguly, 2017: DeepSD: Generating high resolution climate change projections through single image super-resolution. arXiv, 1703.03126v1, https://doi.org/10.48550/arXiv.1703.03126.

  • Vandal, T., E. Kodra, and A. R. Ganguly, 2019: Intercomparison of machine learning methods for statistical downscaling: The case of daily and extreme precipitation. Theor. Appl. Climatol., 137, 557570, https://doi.org/10.1007/s00704-018-2613-3.

    • Search Google Scholar
    • Export Citation
  • van der Meer, M., S. de Roda Husman, and S. Lhermitte, 2023: Deep learning regional climate model emulators: A comparison of two downscaling training frameworks. J. Adv. Model. Earth Syst., 15, e2022MS003593, https://doi.org/10.1029/2022MS003593.

    • Search Google Scholar
    • Export Citation
  • Vosper, E., P. Watson, L. Harris, A. McRae, R. Santos-Rodriguez, L. Aitchison, and D. Mitchell, 2023: Deep learning for downscaling tropical cyclone rainfall to hazard-relevant spatial scales. J. Geophys. Res. Atmos., 128, e2022JD038163, https://doi.org/10.1029/2022JD038163.

    • Search Google Scholar
    • Export Citation
  • Vrac, M., and P. V. Ayar, 2017: Influence of bias correcting predictors on statistical downscaling models. J. Appl. Meteor. Climatol., 56, 526, https://doi.org/10.1175/JAMC-D-16-0079.1.

    • Search Google Scholar
    • Export Citation
  • Vrac, M., M. L. Stein, K. Hayhoe, and X.-Z. Liang, 2007: A general method for validating statistical downscaling methods under future climate change. Geophys. Res. Lett., 34, L18701, https://doi.org/10.1029/2007GL030295.

    • Search Google Scholar
    • Export Citation
  • Wan, Z. Y., R. Baptista, Y.-f. Chen, J. Anderson, A. Boral, F. Sha, and L. Zepeda-Núñez, 2023: Debias coarsely, sample conditionally: Statistical downscaling through optimal transport and probabilistic diffusion models. arXiv, 2305.15618v2, https://doi.org/10.48550/arXiv.2305.15618.

  • Wang, F., D. Tian, L. Lowe, L. Kalin, and J. Lehrter, 2021: Deep learning for daily precipitation and temperature downscaling. Water Resour. Res., 57, e2020WR029308, https://doi.org/10.1029/2020WR029308.

    • Search Google Scholar
    • Export Citation
  • Wang, F., D. Tian, and M. Carroll, 2023: Customized deep learning for precipitation bias correction and downscaling. Geosci. Model Dev., 16, 535556, https://doi.org/10.5194/gmd-16-535-2023.

    • Search Google Scholar
    • Export Citation
  • Wang, J., Z. Liu, I. Foster, W. Chang, R. Kettimuthu, and V. R. Kotamarthi, 2021: Fast and accurate learned multiresolution dynamical downscaling for precipitation. Geosci. Model Dev., 14, 63556372, https://doi.org/10.5194/gmd-14-6355-2021.

    • Search Google Scholar
    • Export Citation
  • Wang, Y., G. Sivandran, and J. M. Bielicki, 2018: The stationarity of two statistical downscaling methods for precipitation under different choices of cross-validation periods. Int. J. Climatol., 38, e330e348, https://doi.org/10.1002/joc.5375.

    • Search Google Scholar
    • Export Citation
  • Watson-Parris, D., 2021: Machine learning for weather and climate are worlds apart. Philos. Trans. Roy. Soc., A379, 20200098, https://doi.org/10.1098/rsta.2020.0098.

    • Search Google Scholar
    • Export Citation
  • Watt-Meyer, O., and Coauthors, 2023: ACE: A fast, skillful learned global atmospheric model for climate prediction. arXiv, 2310.02074v1, https://doi.org/10.48550/arXiv.2310.02074.

  • Weiss, K., T. M. Khoshgoftaar, and D. Wang, 2016: A survey of transfer learning. J. Big Data, 3, 9, https://doi.org/10.1186/s40537-016-0043-6.

    • Search Google Scholar
    • Export Citation
  • Wilby, R. L., and T. M. L. Wigley, 1997: Downscaling general circulation model output: A review of methods and limitations. Prog. Phys. Geogr., 21, 530548, https://doi.org/10.1177/030913339702100403.

    • Search Google Scholar
    • Export Citation
  • Wilby, R. L., C. W. Dawson, and E. M. Barrow, 2002: SDSM—A decision support tool for the assessment of regional climate change impacts. Environ. Modell. Software, 17, 145157, https://doi.org/10.1016/S1364-8152(01)00060-3.

    • Search Google Scholar
    • Export Citation
  • Wu, Y., B. Teufel, L. Sushama, S. Belair, and L. Sun, 2021: Deep learning-based super-resolution climate simulator-emulator framework for urban heat studies. Geophys. Res. Lett., 48, e2021GL094737, https://doi.org/10.1029/2021GL094737.

    • Search Google Scholar
    • Export Citation
  • Xu, Z., Y. Han, and Z. Yang, 2019: Dynamical downscaling of regional climate: A review of methods and limitations. Sci. China Earth Sci., 62, 365375, https://doi.org/10.1007/s11430-018-9261-5.

    • Search Google Scholar
    • Export Citation
  • Yamazaki, K., D. M. H. Sexton, J. W. Rostron, C. F. McSweeney, J. M. Murphy, and G. R. Harris, 2021: A perturbed parameter ensemble of HadGEM3-GC3.05 coupled model projections: Part 2: Global performance and future changes. Climate Dyn., 56, 34373471, https://doi.org/10.1007/s00382-020-05608-5.

    • Search Google Scholar
    • Export Citation
  • Yiou, P., 2014: AnaWEGE: A weather generator based on analogues of atmospheric circulation. Geosci. Model Dev., 7, 531543, https://doi.org/10.5194/gmd-7-531-2014.

    • Search Google Scholar
    • Export Citation
  • Yuval, J., and P. A. O’Gorman, 2020: Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. Nat. Commun., 11, 3295, https://doi.org/10.1038/s41467-020-17142-3.

    • Search Google Scholar
    • Export Citation
  • Zhang, X., L. Alexander, G. C. Hegerl, P. Jones, A. K. Tank, T. C. Peterson, B. Trewin, and F. W. Zwiers, 2011: Indices for monitoring changes in extremes based on daily temperature and precipitation data. Wiley Interdiscip. Rev.: Climate Change, 2, 851870, https://doi.org/10.1002/wcc.147.

    • Search Google Scholar
    • Export Citation

    • Search Google Scholar
    • Export Citation
  • Baño-Medina, J., M. Iturbide, J. Fernandez, and J. M. Gutierrez, 2023: Transferability and explainability of deep learning emulators for regional climate model projections: Perspectives for future applications. arXiv, 2311.03378v1, https://doi.org/10.48550/arXiv.2311.03378.

  • Barnes, E. A., R. J. Barnes, Z. K. Martin, and J. K. Rader, 2022: This looks like that there: Interpretable neural networks for image tasks when location matters. Artif. Intell. Earth Syst., 1, e220001, https://doi.org/10.1175/AIES-D-22-0001.1.

    • Search Google Scholar
    • Export Citation
  • Bartók, B., and Coauthors, 2017: Projected changes in surface solar radiation in CMIP5 global climate models and in EURO-CORDEX regional climate models for Europe. Climate Dyn., 49, 26652683, https://doi.org/10.1007/s00382-016-3471-2.

    • Search Google Scholar
    • Export Citation
  • Bedia, J., and Coauthors, 2020: Statistical downscaling with the downscaleR package (v3.1.0): Contribution to the VALUE intercomparison experiment. Geosci. Model Dev., 13, 17111735, https://doi.org/10.5194/gmd-13-1711-2020.

    • Search Google Scholar
    • Export Citation
  • Benestad, R. E., 2004: Empirical–statistical downscaling in climate modeling. Eos, Trans. Amer. Geophys. Union, 85, 417422, https://doi.org/10.1029/2004EO420002.

    • Search Google Scholar
    • Export Citation
  • Benestad, R. E., 2010: Downscaling precipitation extremes: Correction of analog models through PDF predictions. Theor. Appl. Climatol., 100 (1–2), 121, https://doi.org/10.1007/s00704-009-0158-1.

    • Search Google Scholar
    • Export Citation
  • Benestad, R. E., I. Hanssen-Bauer, and E. J. Førland, 2007: An evaluation of statistical models for downscaling precipitation and their ability to capture long-term trends. Int. J. Climatol., 27, 649665, https://doi.org/10.1002/joc.1421.

    • Search Google Scholar
    • Export Citation
  • Beucler, T., M. Pritchard, S. Rasp, J. Ott, P. Baldi, and P. Gentine, 2021: Enforcing analytic constraints in neural networks emulating physical systems. Phys. Rev. Lett., 126, 098302, https://doi.org/10.1103/PhysRevLett.126.098302.

    • Search Google Scholar
    • Export Citation
  • Bittner, M., S. Hobeichi, M. Zawish, S. Diatta, R. Ozioko, S. Xu, and A. Jantsch, 2023: An LSTM-based downscaling framework for Australian precipitation projections. NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning, New Orleans, LA, NeurIPS, https://www.climatechange.ai/papers/neurips2023/46.

  • Boé, J., L. Terray, F. Habets, and E. Martin, 2007: Statistical and dynamical downscaling of the Seine basin climate for hydro-meteorological studies. Int. J. Climatol., 27, 16431655, https://doi.org/10.1002/joc.1602.

    • Search Google Scholar
    • Export Citation
  • Boé, J., S. Somot, L. Corre, and P. Nabat, 2020: Large discrepancies in summer climate change over Europe as projected by global and regional climate models: Causes and consequences. Climate Dyn., 54, 29813002, https://doi.org/10.1007/s00382-020-05153-1.

    • Search Google Scholar
    • Export Citation
  • Boé, J., A. Mass, and J. Deman, 2023: A simple hybrid statistical–dynamical downscaling method for emulating regional climate models over western Europe. Evaluation, application, and role of added value? Climate Dyn., 61, 271294, https://doi.org/10.1007/s00382-022-06552-2.

    • Search Google Scholar
    • Export Citation
  • Bommer, P., M. Kretschmer, A. Hedström, D. Bareeva, and M. M.-C. Höhne, 2023: Finding the right XAI method—A guide for the evaluation and ranking of explainable AI methods in climate science. arXiv, 2303.00652v1, https://doi.org/10.48550/arXiv.2303.00652.

  • Boulaguiem, Y., J. Zscheischler, E. Vignotto, K. van der Wiel, and S. Engelke, 2022: Modeling and simulating spatial extremes by combining extreme value theory with generative adversarial networks. Environ. Data Sci., 1, e5, https://doi.org/10.1017/eds.2022.4.

    • Search Google Scholar
    • Export Citation
  • Brenowitz, N. D., and C. S. Bretherton, 2018: Prognostic validation of a neural network unified physics parameterization. Geophys. Res. Lett., 45, 62896298, https://doi.org/10.1029/2018GL078510.

    • Search Google Scholar
    • Export Citation
  • Brenowitz, N. D., T. Beucler, M. Pritchard, and C. S. Bretherton, 2020: Interpreting and stabilizing machine-learning parametrizations of convection. J. Atmos. Sci., 77, 43574375, https://doi.org/10.1175/JAS-D-20-0082.1.

    • Search Google Scholar
    • Export Citation
  • Cannon, A. J., 2008: Probabilistic multisite precipitation downscaling by an expanded Bernoulli–gamma density network. J. Hydrometeor., 9, 12841300, https://doi.org/10.1175/2008JHM960.1.

    • Search Google Scholar
    • Export Citation
  • Careto, J. A. M., P. M. M. Soares, R. M. Cardoso, S. Herrera, and J. M. Gutiérrez, 2022: Added value of EURO-CORDEX high-resolution downscaling over the Iberian Peninsula revisited—Part 1: Precipitation. Geosci. Model Dev., 15, 26352652, https://doi.org/10.5194/gmd-15-2635-2022.

    • Search Google Scholar
    • Export Citation
  • Carreau, J., and M. Vrac, 2011: Stochastic downscaling of precipitation with neural network conditional mixture models. Water Resour. Res., 47, W10502, https://doi.org/10.1029/2010WR010128.

    • Search Google Scholar
    • Export Citation
  • Chadwick, R., E. Coppola, and F. Giorgi, 2011: An artificial neural network technique for downscaling GCM outputs to RCM spatial scale. Nonlinear Processes Geophys., 18, 10131028, https://doi.org/10.5194/npg-18-1013-2011.

    • Search Google Scholar
    • Export Citation
  • Charles, S., B. C. Bates, P. Whetton, and J. Hughes, 1999: Validation of downscaling models for changed climate conditions: Case study of southwestern Australia. Climate Res., 12, 114, https://doi.org/10.3354/cr012001.

    • Search Google Scholar
    • Export Citation
  • Chen, L., B. Han, X. Wang, J. Zhao, W. Yang, and Z. Yang, 2023: Machine learning methods in weather and climate applications: A survey. Appl. Sci., 13, 12019, https://doi.org/10.3390/app132112019.

    • Search Google Scholar
    • Export Citation
  • Chen, X. I., N. Mishra, M. Rohaninejad, and P. Abbeel, 2018: PixelSNAIL: An improved autoregressive generative model. Proc. 35th Int. Conf. on Machine Learning, Stockholm, Sweden, PMLR, 864–872, https://proceedings.mlr.press/v80/chen18h.html.

  • Clare, M. C. A., M. Sonnewald, R. Lguensat, J. Deshayes, and V. Balaji, 2022: Explainable artificial intelligence for Bayesian neural networks: Toward trustworthy predictions of ocean dynamics. J. Adv. Model. Earth Syst., 14, e2022MS003162, https://doi.org/10.1029/2022MS003162.

    • Search Google Scholar
    • Export Citation
  • Coppola, E., and Coauthors, 2020: A first-of-its-kind multi-model convection permitting ensemble for investigating convective phenomena over Europe and the Mediterranean. Climate Dyn., 55, 334, https://doi.org/10.1007/s00382-018-4521-8.

    • Search Google Scholar
    • Export Citation
  • CORDEX, 2021: CORDEX experiment design for dynamical downscaling of CMIP6. 8 pp., https://cordex.org/wp-content/uploads/2021/05/CORDEX-CMIP6_exp_design_RCM.pdf.

  • Dai, A., 2006: Precipitation characteristics in eighteen coupled climate models. J. Climate, 19, 46054630, https://doi.org/10.1175/JCLI3884.1.

    • Search Google Scholar
    • Export Citation
  • Dayon, G., J. Boé, and E. Martin, 2015: Transferability in the future climate of a statistical downscaling method for precipitation in France. J. Geophys. Res. Atmos., 120, 10231043, https://doi.org/10.1002/2014JD022236.

    • Search Google Scholar
    • Export Citation
  • de Burgh-Day, C. O., and T. Leeuwenburg, 2023: Machine learning for numerical weather and climate modelling: A review. Geosci. Model Dev., 16, 64336477, https://doi.org/10.5194/gmd-16-6433-2023.

    • Search Google Scholar
    • Export Citation
  • Deser, C., and A. S. Phillips, 2023: A range of outcomes: The combined effects of internal variability and anthropogenic forcing on regional climate trends over Europe. Nonlinear Processes Geophys., 30, 6384, https://doi.org/10.5194/npg-30-63-2023.

    • Search Google Scholar
    • Export Citation
  • Deser, C., A. S. Phillips, V. Bourdette, and H. Teng, 2012: Uncertainty in climate change projections: The role of internal variability. Climate Dyn., 38, 527546, https://doi.org/10.1007/s00382-010-0977-x.

    • Search Google Scholar
    • Export Citation
  • Dhariwal, P., and A. Nichol, 2021: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), M. Ranzato et al., Eds., Curran Associates, 8780–8794, https://proceedings.neurips.cc/paper_files/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html.

  • Diaconescu, E. P., R. Laprise, and L. Sushama, 2007: The impact of lateral boundary data errors on the simulated climate of a nested regional climate model. Climate Dyn., 28, 333350, https://doi.org/10.1007/s00382-006-0189-6.

    • Search Google Scholar
    • Export Citation
  • Diez-Sierra, J., and Coauthors, 2022: The worldwide C3S CORDEX grand ensemble: A major contribution to assess regional climate change in the IPCC AR6 Atlas. Bull. Amer. Meteor. Soc., 103, E2804E2826, https://doi.org/10.1175/BAMS-D-22-0111.1.

    • Search Google Scholar
    • Export Citation
  • Dikshit, A., and B. Pradhan, 2021: Interpretable and explainable AI (XAI) model for spatial drought prediction. Sci. Total Environ., 801, 149797, https://doi.org/10.1016/j.scitotenv.2021.149797.

    • Search Google Scholar
    • Export Citation
  • Di Virgilio, G., and Coauthors, 2019: Evaluating reanalysis-driven CORDEX regional climate models over Australia: Model performance and errors. Climate Dyn., 53, 29853005, https://doi.org/10.1007/s00382-019-04672-w.

    • Search Google Scholar
    • Export Citation
  • Di Virgilio, G., J. P. Evans, A. Di Luca, M. R. Grose, V. Round, and M. Thatcher, 2020: Realised added value in dynamical downscaling of Australian climate change. Climate Dyn., 54, 46754692, https://doi.org/10.1007/s00382-020-05250-1.

    • Search Google Scholar
    • Export Citation
  • Dixon, K. W., J. R. Lanzante, M. J. Nath, K. Hayhoe, A. Stoner, A. Radhakrishnan, V. Balaji, and C. F. Gaitán, 2016: Evaluating the stationarity assumption in statistically downscaled climate projections: Is past performance an indicator of future results? Climatic Change, 135, 395408, https://doi.org/10.1007/s10584-016-1598-0.

    • Search Google Scholar
    • Export Citation
  • Doshi-Velez, F., and B. Kim, 2017: Towards a rigorous science of interpretable machine learning. arXiv, 1702.08608v2, https://doi.org/10.48550/arXiv.1702.08608.

  • Doury, A., S. Somot, S. Gadat, A. Ribes, and L. Corre, 2023: Regional climate model emulator based on deep learning: Concept and first evaluation of a novel hybrid downscaling approach. Climate Dyn., 60, 17511779, https://doi.org/10.1007/s00382-022-06343-9.

    • Search Google Scholar
    • Export Citation
  • Dujardin, J., and M. Lehning, 2022: Wind-Topo: Downscaling near-surface wind fields to high-resolution topography in highly complex terrain with deep learning. Quart. J. Roy. Meteor. Soc., 148, 13681388, https://doi.org/10.1002/qj.4265.

    • Search Google Scholar
    • Export Citation
  • Ebert-Uphoff, I., and K. Hilburn, 2020: Evaluation, tuning, and interpretation of neural networks for working with images in meteorological applications. Bull. Amer. Meteor. Soc., 101, E2149E2170, https://doi.org/10.1175/BAMS-D-20-0097.1.

    • Search Google Scholar
    • Export Citation
  • Evans, J. P., F. Ji, C. Lee, P. Smith, D. Argüeso, and L. Fita, 2014: Design of a regional climate modelling projection ensemble experiment—NARCliM. Geosci. Model Dev., 7, 621629, https://doi.org/10.5194/gmd-7-621-2014.

    • Search Google Scholar
    • Export Citation
  • Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 19371958, https://doi.org/10.5194/gmd-9-1937-2016.

    • Search Google Scholar
    • Export Citation
  • Feng, D., H. Beck, K. Lawson, and C. Shen, 2023: The suitability of differentiable, physics-informed machine learning hydrologic models for ungauged regions and climate change impact assessment. Hydrol. Earth Syst. Sci., 27, 23572373, https://doi.org/10.5194/hess-27-2357-2023.

    • Search Google Scholar
    • Export Citation
  • Feser, F., B. Rockel, H. von Storch, J. Winterfeldt, and M. Zahn, 2011: Regional climate models add value to global model data: A review and selected examples. Bull. Amer. Meteor. Soc., 92, 11811192, https://doi.org/10.1175/2011BAMS3061.1.

    • Search Google Scholar
    • Export Citation
  • Fowler, H. J., S. Blenkinsop, and C. Tebaldi, 2007: Linking climate change modelling to impacts studies: Recent advances in downscaling techniques for hydrological modelling. Int. J. Climatol., 27, 15471578, https://doi.org/10.1002/joc.1556.

    • Search Google Scholar
    • Export Citation
  • Gagne, D. J. II, S. E. Haupt, D. W. Nychka, and G. Thompson, 2019: Interpretable deep learning for spatial analysis of severe hailstorms. Mon. Wea. Rev., 147, 28272845, https://doi.org/10.1175/MWR-D-18-0316.1.

    • Search Google Scholar
    • Export Citation
  • Gaitan, C. F., W. W. Hsieh, and A. J. Cannon, 2014: Comparison of statistically downscaled precipitation in terms of future climate indices and daily variability for southern Ontario and Quebec, Canada. Climate Dyn., 43, 32013217, https://doi.org/10.1007/s00382-014-2098-4.

    • Search Google Scholar
    • Export Citation
  • Gao, Z., and Coauthors, 2023: PreDiff: Precipitation nowcasting with latent diffusion models. arXiv, 2307.10422v1, https://doi.org/10.48550/arXiv.2307.10422.

  • Geiss, A., and J. C. Hardin, 2020: Radar super resolution using a deep convolutional neural network. J. Atmos. Oceanic Technol., 37, 21972207, https://doi.org/10.1175/JTECH-D-20-0074.1.

    • Search Google Scholar
    • Export Citation
  • Geiss, A., and J. C. Hardin, 2023: Strictly enforcing invertibility and conservation in CNN-based super resolution for scientific datasets. Artif. Intell. Earth Syst., 2, e210012, https://doi.org/10.1175/AIES-D-21-0012.1.

    • Search Google Scholar
    • Export Citation
  • Geiss, A., S. J. Silva, and J. C. Hardin, 2022: Downscaling atmospheric chemistry simulations with physically consistent deep learning. Geosci. Model Dev., 15, 66776694, https://doi.org/10.5194/gmd-15-6677-2022.

    • Search Google Scholar
    • Export Citation
  • Gensini, V. A., A. M. Haberlie, and W. S. Ashley, 2023: Convection-permitting simulations of historical and possible future climate over the contiguous United States. Climate Dyn., 60, 109126, https://doi.org/10.1007/s00382-022-06306-0.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., W. E. Chapman, A. Altinok, L. Delle Monache, M. J. DeFlorio, and D. E. Waliser, 2021: Training machine learning models on climate model output yields skillful interpretable seasonal precipitation forecasts. Commun. Earth Environ., 2, 159, https://doi.org/10.1038/s43247-021-00225-4.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., D. Stone, M. Thatcher, A. Broadbent, S. Dean, S. M. Rosier, S. Stuart, and A. Sood, 2023: High-resolution CCAM simulations over New Zealand and the South Pacific for the detection and attribution of weather extremes. J. Geophys. Res. Atmos., 128, e2023JD038530, https://doi.org/10.1029/2023JD038530.

    • Search Google Scholar
    • Export Citation
  • Gibson, P. B., N. Rampal, S. M. Dean, and O. Morgenstern, 2024: Storylines for future projections of precipitation over New Zealand in CMIP6 models. J. Geophys. Res. Atmos., 129, e2023JD039664, https://doi.org/10.1029/2023JD039664.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., and W. J. Gutowski Jr., 2015: Regional dynamical downscaling and the CORDEX initiative. Annu. Rev. Environ. Resour., 40, 467490, https://doi.org/10.1146/annurev-environ-102014-021217.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. S. Brodeur, and G. T. Bates, 1994: Regional climate change scenarios over the United States produced with a nested regional climate model. J. Climate, 7, 375399, https://doi.org/10.1175/1520-0442(1994)007<0375:RCCSOT>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. Jones, and G. R. Asrar, 2009: Addressing climate information needs at the regional level: The CORDEX framework. WMO Bull., 58, 175–183.

    • Search Google Scholar
    • Export Citation
  • Giorgi, F., C. Torma, E. Coppola, N. Ban, C. Schär, and S. Somot, 2016: Enhanced summer convective rainfall at Alpine high elevations in response to climate warming. Nat. Geosci., 9, 584589, https://doi.org/10.1038/ngeo2761.

    • Search Google Scholar
    • Export Citation
  • Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 12031211, https://doi.org/10.1175/1520-0450(1972)011<1203:TUOMOS>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • González-Abad, J., J. Baño-Medina, and J. M. Gutiérrez, 2023a: Using explainability to inform statistical downscaling based on deep learning beyond standard validation approaches. J. Adv. Model. Earth Syst., 15, e2023MS003641, https://doi.org/10.1029/2023MS003641.

    • Search Google Scholar
    • Export Citation
  • González-Abad, J., Á. Hernández-García, P. Harder, D. Rolnick, and J. M. Gutiérrez, 2023b: Multi-variable hard physical constraints for climate model downscaling. arXiv, 2308.01868v1, https://doi.org/10.48550/arXiv.2308.01868.

  • Goodfellow, I., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014: Generative adversarial nets. Commun. ACM, 63, 139144, https://doi.org/10.1145/3422622.

    • Search Google Scholar
    • Export Citation
  • Goodfellow, I., Y. Bengio, and A. Courville, 2016: Deep Learning. MIT Press, 800 pp.

  • Groenke, B., L. Madaus, and C. Monteleoni, 2020: ClimAlign: Unsupervised statistical downscaling of climate variables via normalizing flows. Proc. 10th Int. Conf. on Climate Informatics, Online, Association for Computing Machinery, 60–66, https://doi.org/10.1145/3429309.3429318.

  • Gutiérrez, J. M., D. San-Martín, S. Brands, R. Manzanas, and S. Herrera, 2013: Reassessing statistical downscaling techniques for their robust application under climate change conditions. J. Climate, 26, 171188, https://doi.org/10.1175/JCLI-D-11-00687.1.

    • Search Google Scholar
    • Export Citation
  • Gutiérrez, J. M., and Coauthors, 2019: An intercomparison of a large ensemble of statistical downscaling methods over Europe: Results from the VALUE perfect predictor cross-validation experiment. Int. J. Climatol., 39, 37503785, https://doi.org/10.1002/joc.5462.

    • Search Google Scholar
    • Export Citation
  • Gutiérrez, J. M., and Coauthors, 2022: The future scientific challenges for CORDEX: Empirical statistical downscaling (ESD). 11 pp., https://cordex.org/wp-content/uploads/2022/08/White-Paper-ESD.pdf.

  • Haarsma, R. J., and Coauthors, 2016: High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6. Geosci. Model Dev., 9, 41854208, https://doi.org/10.5194/gmd-9-4185-2016.

    • Search Google Scholar
    • Export Citation
  • Ham, Y.-G., J.-H. Kim, and J.-J. Luo, 2019: Deep learning for multi-year ENSO forecasts. Nature, 573, 568572, https://doi.org/10.1038/s41586-019-1559-7.

    • Search Google Scholar
    • Export Citation
  • Hannachi, A., I. T. Jolliffe, and D. B. Stephenson, 2007: Empirical orthogonal functions and related techniques in atmospheric science: A review. Int. J. Climatol., 27, 11191152, https://doi.org/10.1002/joc.1499.

    • Search Google Scholar
    • Export Citation
  • Harder, P., D. Watson-Parris, P. Stier, D. Strassel, N. R. Gauger, and J. Keuper, 2022: Physics-informed learning of aerosol microphysics. Environ. Data Sci., 1, e20, https://doi.org/10.1017/eds.2022.22.

    • Search Google Scholar
    • Export Citation
  • Harder, P., V. Ramesh, A. Hernandez-Garcia, Q. Yang, P. Sattigeri, D. Szwarcman, C. Watson, and D. Rolnick, 2023: Physics-constrained deep learning for climate downscaling. arXiv, 2208.05424v1, https://doi.org/10.48550/arXiv.2208.05424.

  • Harris, L., A. T. T. McRae, M. Chantry, P. D. Dueben, and T. N. Palmer, 2022: A generative deep learning approach to stochastic downscaling of precipitation forecasts. arXiv, 2204.02028v1, https://doi.org/10.48550/arXiv.2204.02028.

  • Hatanaka, Y., Y. Glaser, G. Galgon, G. Torri, and P. Sadowski, 2023: Diffusion models for high-resolution solar forecasts. arXiv, 2302.00170v1, https://doi.org/10.48550/arXiv.2302.00170.

  • Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 10951108, https://doi.org/10.1175/2009BAMS2607.1.

    • Search Google Scholar
    • Export Citation
  • Hawkins, E., and R. Sutton, 2011: The potential to narrow uncertainty in projections of regional precipitation change. Climate Dyn., 37, 407418, https://doi.org/10.1007/s00382-010-0810-6.

    • Search Google Scholar
    • Export Citation
  • He, K., X. Zhang, S. Ren, and J. Sun, 2016: Deep residual learning for image recognition. 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Institute of Electrical and Electronics Engineers, 770–778, https://doi.org/10.1109/CVPR.2016.90.

  • Hernanz, A., J. A. García-Valero, M. Domínguez, and E. Rodríguez-Camino, 2022a: A critical view on the suitability of machine learning techniques to downscale climate change projections: Illustration for temperature with a toy experiment. Atmos. Sci. Lett., 23, e1087, https://doi.org/10.1002/asl.1087.

    • Search Google Scholar
    • Export Citation
  • Hernanz, A., J. A. García-Valero, M. Domínguez, and E. Rodríguez-Camino, 2022b: Evaluation of statistical downscaling methods for climate change projections over Spain: Future conditions with pseudo reality (transferability experiment). Int. J. Climatol., 42, 39874000, https://doi.org/10.1002/joc.7464.

    • Search Google Scholar
    • Export Citation
  • Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 19992049, https://doi.org/10.1002/qj.3803.

    • Search Google Scholar
    • Export Citation
  • Hess, P., M. Drüke, S. Petri, F. M. Strnad, and N. Boers, 2022: Physically constrained generative adversarial networks for improving precipitation fields from Earth system models. arXiv, 2209.07568v1, https://doi.org/10.48550/arXiv.2209.07568.

  • Ho, J., C. Saharia, W. Chan, D. J. Fleet, M. Norouzi, and T. Salimans, 2021: Cascaded diffusion models for high fidelity image generation. arXiv, 2106.15282v3, https://doi.org/10.48550/arXiv.2106.15282.

  • Hobeichi, S., N. Nishant, Y. Shao, G. Abramowitz, A. Pitman, S. Sherwood, C. Bishop, and S. Green, 2023: Using machine learning to cut the cost of dynamical downscaling. Earth’s Future, 11, e2022EF003291, https://doi.org/10.1029/2022EF003291.

    • Search Google Scholar
    • Export Citation
  • Höhlein, K., M. Kern, T. Hewson, and R. Westermann, 2020: A comparative study of convolutional neural network models for wind field downscaling. Meteor. Appl., 27, e1961, https://doi.org/10.1002/met.1961.

    • Search Google Scholar
    • Export Citation
  • Holden, P. B., N. R. Edwards, P. H. Garthwaite, and R. D. Wilkinson, 2015: Emulation and interpretation of high-dimensional climate model outputs. J. Appl. Stat., 42, 20382055, https://doi.org/10.1080/02664763.2015.1016412.

    • Search Google Scholar
    • Export Citation
  • Hoogewind, K. A., M. E. Baldwin, and R. J. Trapp, 2017: The impact of climate change on hazardous convective weather in the United States: Insight from high-resolution dynamical downscaling. J. Climate, 30, 10 08110 100, https://doi.org/10.1175/JCLI-D-16-0885.1.

    • Search Google Scholar
    • Export Citation
  • Hutengs, C., and M. Vohland, 2016: Downscaling land surface temperatures at regional scales with random forest regression. Remote Sens. Environ., 178, 127141, https://doi.org/10.1016/j.rse.2016.03.006.

    • Search Google Scholar
    • Export Citation
  • Iotti, M., P. Davini, J. von Hardenberg, and G. Zappa, 2022: Downscaling of precipitation over the Taiwan region by a conditional generative adversarial network. International Symposium on Grids & Clouds 2022 (ISGC2022), Vol. 415, Proceedings of Science, 004, https://doi.org/10.22323/1.415.0004.

  • Iqbal, T., and H. Ali, 2018: Generative adversarial network for medical images (MI-GAN). J. Med. Syst., 42, 231, https://doi.org/10.1007/s10916-018-1072-9.

    • Search Google Scholar
    • Export Citation
  • Irrgang, C., N. Boers, M. Sonnewald, E. A. Barnes, C. Kadow, J. Staneva, and J. Saynisch-Wagner, 2021: Towards neural Earth system modelling by integrating artificial intelligence in Earth system science. Nat. Mach. Intell., 3, 667674, https://doi.org/10.1038/s42256-021-00374-3.

    • Search Google Scholar
    • Export Citation
  • Isola, P., J. -Y. Zhu, T. Zhou, and A. A. Efros, 2017: Image-to-image translation with conditional adversarial networks. 2017 IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, HI, IEEE, 59675976, https://doi.org/10.1109/CVPR.2017.632.

  • Isphording, R. N., L. V. Alexander, M. Bador, D. Green, J. P. Evans, and S. Wales, 2024: A standardized benchmarking framework to assess downscaled precipitation simulations. J. Climate, 37, 10891110, https://doi.org/10.1175/JCLI-D-23-0317.1.

    • Search Google Scholar
    • Export Citation
  • Izumi, T., M. Amagasaki, K. Ishida, and M. Kiyama, 2022: Super-resolution of sea surface temperature with convolutional neural network- and generative adversarial network-based methods. J. Water Climate Change, 13, 16731683, https://doi.org/10.2166/wcc.2022.291.

    • Search Google Scholar
    • Export Citation
  • Jacob, D., and Coauthors, 2020: Regional climate downscaling over Europe: Perspectives from the EURO-CORDEX community. Reg. Environ. Change, 20, 51, https://doi.org/10.1007/s10113-020-01606-9.

    • Search Google Scholar
    • Export Citation
  • Jiang, Y., K. Yang, C. Shao, X. Zhou, L. Zhao, Y. Chen, and H. Wu, 2021: A downscaling approach for constructing high-resolution precipitation dataset over the Tibetan Plateau from ERA5 reanalysis. Atmos. Res., 256, 105574, https://doi.org/10.1016/j.atmosres.2021.105574.

    • Search Google Scholar
    • Export Citation
  • Jones, R. G., J. M. Murphy, and M. Noguer, 1995: Simulation of climate change over Europe using a nested regional-climate model. I: Assessment of control climate, including sensitivity to location of lateral boundaries. Quart. J. Roy. Meteor. Soc., 121, 14131449, https://doi.org/10.1002/qj.49712152610.

    • Search Google Scholar
    • Export Citation
  • Kashinath, K., and Coauthors, 2021: Physics-informed machine learning: Case studies for weather and climate modelling. Philos. Trans. Roy. Soc., A379, 20200093, https://doi.org/10.1098/rsta.2020.0093.

    • Search Google Scholar
    • Export Citation
  • Kay, J. E., and Coauthors, 2015: The Community Earth System Model (CESM) Large Ensemble Project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 13331349, https://doi.org/10.1175/BAMS-D-13-00255.1.

    • Search Google Scholar
    • Export Citation
  • Kendon, E. J., E. M. Fischer, and C. J. Short, 2023: Variability conceals emerging trend in 100yr projections of UK local hourly rainfall extremes. Nat. Commun., 14, 1133, https://doi.org/10.1038/s41467-023-36499-9.

    • Search Google Scholar
    • Export Citation
  • Khan, S., M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, 2022: Transformers in vision: A survey. ACM Comput. Surv., 54 (10s), 141, https://doi.org/10.1145/3505244.

    • Search Google Scholar
    • Export Citation
  • Kochkov, D., and Coauthors, 2023: Neural general circulation models. arXiv, 2311.07222v2, https://doi.org/10.48550/arXiv.2311.07222.

  • Kumar, A., M. Chen, L. Zhang, W. Wang, Y. Xue, C. Wen, L. Marx, and B. Huang, 2012: An analysis of the nonstationarity in the bias of sea surface temperature forecasts for the NCEP Climate Forecast System (CFS) version 2. Mon. Wea. Rev., 140, 30033016, https://doi.org/10.1175/MWR-D-11-00335.1.

    • Search Google Scholar
    • Export Citation
  • Langousis, A., and V. Kaleris, 2014: Statistical framework to simulate daily rainfall series conditional on upper-air predictor variables. Water Resour. Res., 50, 39073932, https://doi.org/10.1002/2013WR014936.

    • Search Google Scholar
    • Export Citation
  • Langousis, A., A. Mamalakis, R. Deidda, and M. Marrocu, 2016: Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results. Water Resour. Res., 52, 471494, https://doi.org/10.1002/2015WR017556.

    • Search Google Scholar
    • Export Citation
  • Lanzante, J. R., K. W. Dixon, M. J. Nath, C. E. Whitlock, and D. Adams-Smith, 2018: Some pitfalls in statistical downscaling of future climate. Bull. Amer. Meteor. Soc., 99, 791803, https://doi.org/10.1175/BAMS-D-17-0046.1.

    • Search Google Scholar
    • Export Citation
  • LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436444, https://doi.org/10.1038/nature14539.

  • Legasa, M. N., S. Thao, M. Vrac, and R. Manzanas, 2023: Assessing three perfect prognosis methods for statistical downscaling of climate change precipitation scenarios. Geophys. Res. Lett., 50, e2022GL102525, https://doi.org/10.1029/2022GL102525.

    • Search Google Scholar
    • Export Citation
  • Leinonen, J., D. Nerini, and A. Berne, 2021: Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. IEEE Trans. Geosci. Remote Sens., 59, 72117223, https://doi.org/10.1109/TGRS.2020.3032790.

    • Search Google Scholar
    • Export Citation
  • Leinonen, J., U. Hamann, D. Nerini, U. Germann, and G. Franch, 2023: Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification. arXiv, 2304.12891v1, https://doi.org/10.48550/arXiv.2304.12891.

  • Lempitsky, V., A. Vedaldi, and D. Ulyanov, 2018: Deep image prior. 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, Institute of Electrical and Electronics Engineers, 9446–9454, https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00984.

  • Linardatos, P., V. Papastefanopoulos, and S. Kotsiantis, 2020: Explainable AI: A review of machine learning interpretability methods. Entropy, 23, 18, https://doi.org/10.3390/e23010018.

    • Search Google Scholar
    • Export Citation
  • Liu, C., and Coauthors, 2017: Continental-scale convection-permitting modeling of the current and future climate of North America. Climate Dyn., 49, 7195, https://doi.org/10.1007/s00382-016-3327-9.

    • Search Google Scholar
    • Export Citation
  • Liu, G., R. Zhang, R. Hang, L. Ge, C. Shi, and Q. Liu, 2023: Statistical downscaling of temperature distributions in southwest China by using terrain-guided attention network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 16, 16781690, https://doi.org/10.1109/JSTARS.2023.3239109.

    • Search Google Scholar
    • Export Citation
  • Liu, J., Y. Sun, K. Ren, Y. Zhao, K. Deng, and L. Wang, 2022: A spatial downscaling approach for WindSat satellite sea surface wind based on generative adversarial networks and dual learning scheme. Remote Sens., 14, 769, https://doi.org/10.3390/rs14030769.

    • Search Google Scholar
    • Export Citation
  • Liu, Y., A. R. Ganguly, and J. Dy, 2020: Climate downscaling using YNet: A deep convolutional network with skip connections and fusion. Proc. 26th ACM SIGKDD Int. Conf. on Knowledge Discovery & Data Mining, Online, Association for Computing Machinery, 31453153, https://doi.org/10.1145/3394486.3403366.

  • Liu, Y., K. Duffy, J. G. Dy, and A. R. Ganguly, 2023: Explainable deep learning for insights in El Niño and river flows. Nat. Commun., 14, 339, https://doi.org/10.1038/s41467-023-35968-5.

    • Search Google Scholar
    • Export Citation
  • Lloyd, E. A., M. Bukovsky, and L. O. Mearns, 2021: An analysis of the disagreement about added value by regional climate models. Synthese, 198, 11 64511 672, https://doi.org/10.1007/s11229-020-02821-x.

    • Search Google Scholar
    • Export Citation
  • Lopez-Gomez, I., A. McGovern, S. Agrawal, and J. Hickey, 2023: Global extreme heat forecasting using neural weather models. Artif. Intell. Earth Syst., 2, e220035, https://doi.org/10.1175/AIES-D-22-0035.1.

    • Search Google Scholar
    • Export Citation
  • Maher, N., and Coauthors, 2019: The Max Planck Institute Grand Ensemble: Enabling the exploration of climate system variability. J. Adv. Model. Earth Syst., 11, 20502069, https://doi.org/10.1029/2019MS001639.

    • Search Google Scholar
    • Export Citation
  • Maher, N., S. Milinski, and R. Ludwig, 2021: Large ensemble climate model simulations: Introduction, overview, and future prospects for utilising multiple types of large ensemble. Earth Syst. Dyn., 12, 401418, https://doi.org/10.5194/esd-12-401-2021.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., J.-Y. Yu, J. T. Randerson, A. AghaKouchak, and E. Foufoula-Georgiou, 2018: A new interhemispheric teleconnection increases predictability of winter precipitation in southwestern US. Nat. Commun., 9, 2332, https://doi.org/10.1038/s41467-018-04722-7.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., E. A. Barnes, and I. Ebert-Uphoff, 2022a: Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience. Artif. Intell. Earth Syst., 1, e220012, https://doi.org/10.1175/AIES-D-22-0012.1.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., I. Ebert-Uphoff, and E. A. Barnes, 2022b: Neural network attribution methods for problems in geoscience: A novel synthetic benchmark dataset. Environ. Data Sci., 1, e8, https://doi.org/10.1017/eds.2022.7.

    • Search Google Scholar
    • Export Citation
  • Mamalakis, A., E. A. Barnes, and I. Ebert-Uphoff, 2023: Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience. Artif. Intell. Earth Syst., 2, e220058, https://doi.org/10.1175/AIES-D-22-0058.1.

    • Search Google Scholar
    • Export Citation
  • Manzanas, R., L. Fiwa, C. Vanya, H. Kanamaru, and J. M. Gutiérrez, 2020: Statistical downscaling or bias adjustment? A case study involving implausible climate change projections of precipitation in Malawi. Climatic Change, 162, 14371453, https://doi.org/10.1007/s10584-020-02867-3.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., 2016: Bias correcting climate change simulations—A critical review. Curr. Climate Change Rep., 2, 211220, https://doi.org/10.1007/s40641-016-0050-x.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., and Coauthors, 2010: Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user. Rev. Geophys., 48, RG3003, https://doi.org/10.1029/2009RG000314.

    • Search Google Scholar
    • Export Citation
  • Maraun, D., and Coauthors, 2015: VALUE: A framework to validate downscaling approaches for climate change studies. Earth’s Future, 3 (1), 114, https://doi.org/10.1002/2014EF000259.

    • Search Google Scholar
    • Export Citation
  • Mardani, M., and Coauthors, 2023: Generative residual diffusion modeling for km-scale atmospheric downscaling. arXiv, 2309.15214v3, https://doi.org/10.48550/arXiv.2309.15214.

  • Marotzke, J., and Coauthors, 2017: Climate research must sharpen its view. Nat. Climate Change, 7, 8991, https://doi.org/10.1038/nclimate3206.

    • Search Google Scholar
    • Export Citation
  • Materia, S., and Coauthors, 2023: Artificial intelligence for prediction of climate extremes: State of the art, challenges and future perspectives. arXiv, 2310.01944v1, https://doi.org/10.48550/arXiv.2310.01944.

  • McGovern, A., R. Lagerquist, D. J. Gagne II, G. E. Jergensen, K. L. Elmore, C. R. Homeyer, and T. Smith, 2019: Making the black box more transparent: Understanding the physical implications of machine learning. Bull. Amer. Meteor. Soc., 100, 21752199, https://doi.org/10.1175/BAMS-D-18-0195.1.

    • Search Google Scholar
    • Export Citation
  • McGovern, A., R. J. Chase, M. Flora, D. J. Gagne II, R. Lagerquist, C. K. Potvin, N. Snook, and E. Loken, 2023: A review of machine learning for convective weather. Artif. Intell. Earth Syst., 2, e220077, https://doi.org/10.1175/AIES-D-22-0077.1.

    • Search Google Scholar
    • Export Citation
  • Meinshausen, M., and Coauthors, 2020: The shared socio-economic pathway (SSP) greenhouse gas concentrations and their extensions to 2500. Geosci. Model Dev., 13, 35713605, https://doi.org/10.5194/gmd-13-3571-2020.

    • Search Google Scholar
    • Export Citation
  • Miao, Q., B. Pan, H. Wang, K. Hsu, and S. Sorooshian, 2019: Improving monsoon precipitation prediction using combined convolutional and long short term memory neural network. Water, 11, 977, https://doi.org/10.3390/w11050977.

    • Search Google Scholar
    • Export Citation
  • Miralles, O., D. Steinfeld, O. Martius, and A. C. Davison, 2022: Downscaling of historical wind fields over Switzerland using generative adversarial networks. Artif. Intell. Earth Syst., 1, e220018, https://doi.org/10.1175/AIES-D-22-0018.1.

    • Search Google Scholar
    • Export Citation
  • Mirza, M., and S. Osindero, 2014: Conditional generative adversarial nets. arXiv, 1411.1784v1, https://doi.org/10.48550/arXiv.1411.1784.

  • Mishra Sharma, S. C., and A. Mitra, 2022: ResDeepD: A residual super-resolution network for deep downscaling of daily precipitation over India. Environ. Data Sci., 1, e19, https://doi.org/10.1017/eds.2022.23.

    • Search Google Scholar
    • Export Citation
  • Molina, M. J., D. J. Gagne, and A. F. Prein, 2021: A benchmark to test generalization capabilities of deep learning methods to classify severe convective storms in a changing climate. Earth Space Sci., 8, e2020EA001490, https://doi.org/10.1029/2020EA001490.

    • Search Google Scholar
    • Export Citation
  • Molina, M. J., and Coauthors, 2023: A review of recent and emerging machine learning applications for climate variability and weather phenomena. Artif. Intell. Earth Syst., 2, 220086, https://doi.org/10.1175/AIES-D-22-0086.1.

    • Search Google Scholar
    • Export Citation
  • Moss, R. H., and Coauthors, 2010: The next generation of scenarios for climate change research and assessment. Nature, 463, 747756, https://doi.org/10.1038/nature08823.

    • Search Google Scholar
    • Export Citation
  • Nguyen, T., J. Brandstetter, A. Kapoor, J. K. Gupta, and A. Grover, 2023: ClimaX: A foundation model for weather and climate. arXiv, 2301.10343v5, https://doi.org/10.48550/arXiv.2301.10343.

  • Nishant, N., S. Hobeichi, S. Sherwood, G. Abramowitz, Y. Shao, C. Bishop, and A. Pitman, 2023: Comparison of a novel machine learning approach with dynamical downscaling for Australian precipitation. Environ. Res. Lett., 18, 094006, https://doi.org/10.1088/1748-9326/ace463.

    • Search Google Scholar
    • Export Citation
  • Norris, J., G. Chen, and J. D. Neelin, 2019: Thermodynamic versus dynamic controls on extreme precipitation in a warming climate from the Community Earth System Model Large Ensemble. J. Climate, 32, 10251045, https://doi.org/10.1cdel175/JCLI-D-18-0302.1.

    • Search Google Scholar
    • Export Citation
  • Nourani, V., K. Khodkar, A. H. Baghanam, S. A. Kantoush, and I. Demir, 2023: Uncertainty quantification of deep learning–based statistical downscaling of climatic parameters. J. Appl. Meteor. Climatol., 62, 12231242, https://doi.org/10.1175/JAMC-D-23-0057.1.

    • Search Google Scholar
    • Export Citation
  • Oyama, N., N. N. Ishizaki, S. Koide, and H. Yoshida, 2023: Deep generative model super-resolves spatially correlated multiregional climate data. Sci. Rep., 13, 5992, https://doi.org/10.1038/s41598-023-32947-0.

    • Search Google Scholar
    • Export Citation
  • Pan, B., K. Hsu, A. AghaKouchak, and S. Sorooshian, 2019: Improving precipitation estimation using convolutional neural network. Water Resour. Res., 55, 23012321, https://doi.org/10.1029/2018WR024090.

    • Search Google Scholar
    • Export Citation
  • Pan, S. J., and Q. Yang, 2010: A survey on transfer learning. IEEE Trans. Knowl. Data Eng., 22, 13451359, https://doi.org/10.1109/TKDE.2009.191.

    • Search Google Scholar
    • Export Citation
  • Pegion, K., E. J. Becker, and B. P. Kirtman, 2022: Understanding predictability of daily southeast U.S. precipitation using explainable machine learning. Artif. Intell. Earth Syst., 1, e220011, https://doi.org/10.1175/AIES-D-22-0011.1.

    • Search Google Scholar
    • Export Citation
  • Perkins-Kirkpatrick, S. E., E. M. Fischer, O. Angélil, and P. B. Gibson, 2017: The influence of internal climate variability on heatwave frequency trends. Environ. Res. Lett., 12, 044005, https://doi.org/10.1088/1748-9326/aa63fe.

    • Search Google Scholar
    • Export Citation
  • Prein, A. F., and Coauthors, 2015: A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges. Rev. Geophys., 53, 323361, https://doi.org/10.1002/2014RG000475.

    • Search Google Scholar
    • Export Citation
  • Price, I., and S. Rasp, 2022: Increasing the accuracy and resolution of precipitation forecasts using deep generative models. Proc. 25th Int. Conf. on Artificial Intelligence and Statistics, Valencia, Spain, PMLR, 10 555–10 571, https://proceedings.mlr.press/v151/price22a.html.

  • Quesada-Chacón, D., K. Barfus, and C. Bernhofer, 2022: Repeatable high-resolution statistical downscaling through deep learning. Geosci. Model Dev., 15, 73537370, https://doi.org/10.5194/gmd-15-7353-2022.

    • Search Google Scholar
    • Export Citation
  • Rampal, N., P. B. Gibson, A. Sood, S. Stuart, N. C. Fauchereau, C. Brandolino, B. Noll, and T. Meyers, 2022a: High-resolution downscaling with interpretable deep learning: Rainfall extremes over New Zealand. Wea. Climate Extremes, 38, 100525, https://doi.org/10.1016/j.wace.2022.100525.

    • Search Google Scholar
    • Export Citation
  • Rampal, N., T. Shand, A. Wooler, and C. Rautenbach, 2022b: Interpretable deep learning applied to rip current detection and localization. Remote Sens., 14, 6048, https://doi.org/10.3390/rs14236048.

    • Search Google Scholar
    • Export Citation
  • Rasmussen, R., and C. Liu, 2017: High resolution WRF simulations of the current and future climate of North America. National Center for Atmospheric Research Computational and Information Systems Laboratory Research Data Archive, accessed 17 December 2023, https://doi.org/10.5065/D6V40SXP.

  • Rasmussen, R., and Coauthors, 2023: CONUS404: The NCAR–USGS 4-km long-term regional hydroclimate reanalysis over the CONUS. Bull. Amer. Meteor. Soc., 104, E1382E1408, https://doi.org/10.1175/BAMS-D-21-0326.1.

    • Search Google Scholar
    • Export Citation
  • Rasp, S., and N. Thuerey, 2021: Data-driven medium-range weather prediction with a Resnet pretrained on climate simulations: A new model for WeatherBench. J. Adv. Model. Earth Syst., 13, e2020MS002405, https://doi.org/10.1029/2020MS002405.

    • Search Google Scholar
    • Export Citation
  • Rasp, S., M. S. Pritchard, and P. Gentine, 2018: Deep learning to represent subgrid processes in climate models. Proc. Natl. Acad. Sci., 115, 9684–9689, https://doi.org/10.1073/pnas.1810286115.

  • Ravuri, S., and Coauthors, 2021: Skilful precipitation nowcasting using deep generative models of radar. Nature, 597, 672677, https://doi.org/10.1038/s41586-021-03854-z.

    • Search Google Scholar
    • Export Citation
  • Reddy, P. J., R. Matear, J. Taylor, M. Thatcher, and M. Grose, 2023: A precipitation downscaling method using a super-resolution deconvolution neural network with step orography. Environ. Data Sci., 2, e17, https://doi.org/10.1017/eds.2023.18.

    • Search Google Scholar
    • Export Citation
  • Renwick, J. A., A. B. Mullan, and A. Porteous, 2009: Statistical downscaling of New Zealand climate. Wea. Climate, 29, 2444, https://doi.org/10.2307/26169704.

    • Search Google Scholar
    • Export Citation
  • Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

  • Ruder, S., 2017: An overview of multi-task learning in deep neural networks. arXiv, 1706.05098v1, https://doi.org/10.48550/arXiv.1706.05098.

  • Rudin, C., 2019: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1, 206215, https://doi.org/10.1038/s42256-019-0048-x.

    • Search Google Scholar
    • Export Citation
  • Rummukainen, M., 2016: Added value in regional climate modeling. Wiley Interdiscip. Rev.: Climate Change, 7, 145159, https://doi.org/10.1002/wcc.378.

    • Search Google Scholar
    • Export Citation
  • Saha, A., and S. Ravela, 2022: Downscaling extreme rainfall using physical-statistical generative adversarial learning. arXiv, 2212.01446v1, https://doi.org/10.48550/arXiv.2212.01446.

  • Schär, C., and Coauthors, 2020: Kilometer-scale climate models: Prospects and challenges. Bull. Amer. Meteor. Soc., 101, E567E587, https://doi.org/10.1175/BAMS-D-18-0167.1.

    • Search Google Scholar
    • Export Citation
  • Schmidli, J., C. Frei, and P. L. Vidale, 2006: Downscaling from GCM precipitation: A benchmark for dynamical and statistical downscaling methods. Int. J. Climatol., 26, 679689, https://doi.org/10.1002/joc.1287.

    • Search Google Scholar
    • Export Citation
  • Schmith, T., and Coauthors, 2021: Identifying robust bias adjustment methods for European extreme precipitation in a multi-model pseudo-reality setting. Hydrol. Earth Syst. Sci., 25, 273290, https://doi.org/10.5194/hess-25-273-2021.

    • Search Google Scholar
    • Export Citation
  • Sexton, D. M. H., and Coauthors, 2021: A perturbed parameter ensemble of HadGEM3-GC3.05 coupled model projections: Part 1: Selecting the parameter combinations. Climate Dyn., 56, 33953436, https://doi.org/10.1007/s00382-021-05709-9.

    • Search Google Scholar
    • Export Citation
  • Sha, Y., D. J. Gagne II, G. West, and R. Stull, 2020: Deep-learning-based gridded downscaling of surface meteorological variables in complex terrain. Part II: Daily precipitation. J. Appl. Meteor. Climatol., 59, 20752092, https://doi.org/10.1175/JAMC-D-20-0058.1.

    • Search Google Scholar
    • Export Citation
  • Sharifi, E., B. Saghafian, and R. Steinacker, 2019: Downscaling satellite precipitation estimates with multiple linear regression, artificial neural networks, and spline interpolation techniques. J. Geophys. Res. Atmos., 124, 789805, https://doi.org/10.1029/2018JD028795.

    • Search Google Scholar
    • Export Citation
  • Shen, C., and Coauthors, 2023: Differentiable modelling to unify machine learning and physical models for geosciences. Nat. Rev. Earth Environ., 4, 552567, https://doi.org/10.1038/s43017-023-00450-9.

    • Search Google Scholar
    • Export Citation
  • Simpson, I. R., and Coauthors, 2023: The CESM2 single-forcing large ensemble and comparison to CESM1: Implications for experimental design. J. Climate, 36, 56875711, https://doi.org/10.1175/JCLI-D-22-0666.1.

    • Search Google Scholar
    • Export Citation
  • Solman, S., D. Jacob, A. Frigon, C. Teichmann, and M. Rixen, 2021: The future scientific challenges for CORDEX. CORDEX, 11 pp., https://cordex.org/wp-content/uploads/2021/05/The-future-of-CORDEX-MAY-17-2021-1.pdf.

  • Sonnewald, M., and R. Lguensat, 2021: Revealing the impact of global heating on North Atlantic circulation using transparent machine learning. J. Adv. Model. Earth Syst., 13, e2021MS002496, https://doi.org/10.1029/2021MS002496.

    • Search Google Scholar
    • Export Citation
  • Sørland, S. L., C. Schär, D. Lüthi, and E. Kjellström, 2018: Bias patterns and climate change signals in GCM-RCM model chains. Environ. Res. Lett., 13, 074017, https://doi.org/10.1088/1748-9326/aacc77.

    • Search Google Scholar
    • Export Citation
  • Stengel, K., A. Glaws, D. Hettinger, and R. N. King, 2020: Adversarial super-resolution of climatological wind and solar data. Proc. Natl. Acad. Sci. USA, 117, 16 80516 815, https://doi.org/10.1073/pnas.1918964117.

    • Search Google Scholar
    • Export Citation
  • Sun, L., and Y. Lan, 2021: Statistical downscaling of daily temperature and precipitation over China using deep learning neural models: Localization and comparison with other methods. Int. J. Climatol., 41, 11281147, https://doi.org/10.1002/joc.6769.

    • Search Google Scholar
    • Export Citation
  • Tang, J., X. Niu, S. Wang, H. Gao, X. Wang, and J. Wu, 2016: Statistical downscaling and dynamical downscaling of regional climate in China: Present climate evaluations and future climate projections. J. Geophys. Res. Atmos., 121, 21102129, https://doi.org/10.1002/2015JD023977.

    • Search Google Scholar
    • Export Citation
  • Taranu, I. S., S. Somot, A. Alias, J. Boé, and C. Delire, 2023: Mechanisms behind large-scale inconsistencies between regional and global climate model-based projections over Europe. Climate Dyn., 60, 38133838, https://doi.org/10.1007/s00382-022-06540-6.

    • Search Google Scholar
    • Export Citation
  • Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485498, https://doi.org/10.1175/BAMS-D-11-00094.1.

    • Search Google Scholar
    • Export Citation
  • Teutschbein, C., and J. Seibert, 2012: Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods. J. Hydrol., 456–457, 1229, https://doi.org/10.1016/j.jhydrol.2012.05.052.

    • Search Google Scholar
    • Export Citation
  • Teutschbein, C., and J. Seibert, 2013: Is bias correction of regional climate model (RCM) simulations possible for non-stationary conditions? Hydrol. Earth Syst. Sci., 17, 50615077, https://doi.org/10.5194/hess-17-5061-2013.

    • Search Google Scholar
    • Export Citation
  • Themeßl, M. J., A. Gobiet, and G. Heinrich, 2012: Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change, 112, 449468, https://doi.org/10.1007/s10584-011-0224-4.

    • Search Google Scholar
    • Export Citation
  • Toms, B. A., E. A. Barnes, and J. W. Hurrell, 2021: Assessing decadal predictability in an Earth-system model using explainable neural networks. Geophys. Res. Lett., 48, e2021GL093842, https://doi.org/10.1029/2021GL093842.

    • Search Google Scholar
    • Export Citation
  • Ullrich, P. A., and C. M. Zarzycki, 2017: TempestExtremes: A framework for scale-insensitive pointwise feature tracking on unstructured grids. Geosci. Model Dev., 10, 10691090, https://doi.org/10.5194/gmd-10-1069-2017.

    • Search Google Scholar
    • Export Citation
  • Vandal, T., E. Kodra, S. Ganguly, A. Michaelis, R. Nemani, and A. R. Ganguly, 2017: DeepSD: Generating high resolution climate change projections through single image super-resolution. arXiv, 1703.03126v1, https://doi.org/10.48550/arXiv.1703.03126.

  • Vandal, T., E. Kodra, and A. R. Ganguly, 2019: Intercomparison of machine learning methods for statistical downscaling: The case of daily and extreme precipitation. Theor. Appl. Climatol., 137, 557570, https://doi.org/10.1007/s00704-018-2613-3.

    • Search Google Scholar
    • Export Citation
  • van der Meer, M., S. de Roda Husman, and S. Lhermitte, 2023: Deep learning regional climate model emulators: A comparison of two downscaling training frameworks. J. Adv. Model. Earth Syst., 15, e2022MS003593, https://doi.org/10.1029/2022MS003593.

    • Search Google Scholar
    • Export Citation
  • Vosper, E., P. Watson, L. Harris, A. McRae, R. Santos-Rodriguez, L. Aitchison, and D. Mitchell, 2023: Deep learning for downscaling tropical cyclone rainfall to hazard-relevant spatial scales. J. Geophys. Res. Atmos., 128, e2022JD038163, https://doi.org/10.1029/2022JD038163.

    • Search Google Scholar
    • Export Citation
  • Vrac, M., and P. V. Ayar, 2017: Influence of bias correcting predictors on statistical downscaling models. J. Appl. Meteor. Climatol., 56, 526, https://doi.org/10.1175/JAMC-D-16-0079.1.

  • Vrac, M., M. L. Stein, K. Hayhoe, and X.-Z. Liang, 2007: A general method for validating statistical downscaling methods under future climate change. Geophys. Res. Lett., 34, L18701, https://doi.org/10.1029/2007GL030295.

  • Wan, Z. Y., R. Baptista, Y.-f. Chen, J. Anderson, A. Boral, F. Sha, and L. Zepeda-Núñez, 2023: Debias coarsely, sample conditionally: Statistical downscaling through optimal transport and probabilistic diffusion models. arXiv, 2305.15618v2, https://doi.org/10.48550/arXiv.2305.15618.

  • Wang, F., D. Tian, L. Lowe, L. Kalin, and J. Lehrter, 2021: Deep learning for daily precipitation and temperature downscaling. Water Resour. Res., 57, e2020WR029308, https://doi.org/10.1029/2020WR029308.

  • Wang, F., D. Tian, and M. Carroll, 2023: Customized deep learning for precipitation bias correction and downscaling. Geosci. Model Dev., 16, 535–556, https://doi.org/10.5194/gmd-16-535-2023.

  • Wang, J., Z. Liu, I. Foster, W. Chang, R. Kettimuthu, and V. R. Kotamarthi, 2021: Fast and accurate learned multiresolution dynamical downscaling for precipitation. Geosci. Model Dev., 14, 6355–6372, https://doi.org/10.5194/gmd-14-6355-2021.

  • Wang, Y., G. Sivandran, and J. M. Bielicki, 2018: The stationarity of two statistical downscaling methods for precipitation under different choices of cross-validation periods. Int. J. Climatol., 38, e330–e348, https://doi.org/10.1002/joc.5375.

  • Watson-Parris, D., 2021: Machine learning for weather and climate are worlds apart. Philos. Trans. Roy. Soc., A379, 20200098, https://doi.org/10.1098/rsta.2020.0098.

  • Watt-Meyer, O., and Coauthors, 2023: ACE: A fast, skillful learned global atmospheric model for climate prediction. arXiv, 2310.02074v1, https://doi.org/10.48550/arXiv.2310.02074.

  • Weiss, K., T. M. Khoshgoftaar, and D. Wang, 2016: A survey of transfer learning. J. Big Data, 3, 9, https://doi.org/10.1186/s40537-016-0043-6.

  • Wilby, R. L., and T. M. L. Wigley, 1997: Downscaling general circulation model output: A review of methods and limitations. Prog. Phys. Geogr., 21, 530–548, https://doi.org/10.1177/030913339702100403.

  • Wilby, R. L., C. W. Dawson, and E. M. Barrow, 2002: SDSM—A decision support tool for the assessment of regional climate change impacts. Environ. Modell. Software, 17, 145–157, https://doi.org/10.1016/S1364-8152(01)00060-3.

  • Wu, Y., B. Teufel, L. Sushama, S. Belair, and L. Sun, 2021: Deep learning-based super-resolution climate simulator-emulator framework for urban heat studies. Geophys. Res. Lett., 48, e2021GL094737, https://doi.org/10.1029/2021GL094737.

  • Xu, Z., Y. Han, and Z. Yang, 2019: Dynamical downscaling of regional climate: A review of methods and limitations. Sci. China Earth Sci., 62, 365–375, https://doi.org/10.1007/s11430-018-9261-5.

  • Yamazaki, K., D. M. H. Sexton, J. W. Rostron, C. F. McSweeney, J. M. Murphy, and G. R. Harris, 2021: A perturbed parameter ensemble of HadGEM3-GC3.05 coupled model projections: Part 2: Global performance and future changes. Climate Dyn., 56, 3437–3471, https://doi.org/10.1007/s00382-020-05608-5.

  • Yiou, P., 2014: AnaWEGE: A weather generator based on analogues of atmospheric circulation. Geosci. Model Dev., 7, 531–543, https://doi.org/10.5194/gmd-7-531-2014.

  • Yuval, J., and P. A. O’Gorman, 2020: Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. Nat. Commun., 11, 3295, https://doi.org/10.1038/s41467-020-17142-3.

  • Zhang, X., L. Alexander, G. C. Hegerl, P. Jones, A. K. Tank, T. C. Peterson, B. Trewin, and F. W. Zwiers, 2011: Indices for monitoring changes in extremes based on daily temperature and precipitation data. Wiley Interdiscip. Rev.: Climate Change, 2, 851–870, https://doi.org/10.1002/wcc.147.

  • Fig. 1.

    (a) The spatial resolution of precipitation from a typical CMIP6 GCM (∼100 km; left plot) vs the finer resolution typically required for climate impact research (12 km; right plot), here from an RCM, over the New Zealand region. Shaded values are daily precipitation amount (mm day−1), and contours are mean sea level pressure (hPa). (b) Illustration of the concept of uncertainty in climate projections by comparing a small and a large ensemble; the small ensemble (green) is a random subset of the large ensemble (black). (c) An illustration of how the future climate can lie outside the distribution of the present-day climate, using a comparison of specific humidity at 850 hPa (g kg−1), area averaged over the New Zealand region, extending from 150°E to 160°W longitude and 25° to 50°S latitude. The red distribution represents historical ERA5 (1975–2014), and the blue one depicts results for the future climate scenario from the Australian Community Climate and Earth System Simulator (ACCESS-CM2) under SSP5-8.5 (2060–99).

  • Fig. 2.

    An overview of the topics concerning climate downscaling that are discussed in this review.

  • Fig. 3.

    (a) Comparison of PP (left plot) and SR (right plot) approaches for climate downscaling in the New Zealand region. In PP, an ML algorithm is trained to map large-scale circulation fields from reanalyses at the resolution of a typical CMIP6 GCM (∼100 km) onto a high-resolution target. In SR downscaling, an algorithm is trained to map from a coarsened target (∼100 km) onto a high-resolution target. The high-resolution target can either be a dataset of high-resolution observational data [as in (a)] or be a simulated quantity in an RCM [as in (b)], both of which can serve as training data. The SR approaches can also include additional predictor variables. (b) A comparison between the perfect and imperfect training frameworks for RCM emulation. The perfect framework uses coarsened RCM fields as predictor variables for an ML algorithm, whereas the imperfect framework uses the GCM fields directly. Here, the target variables are high-resolution RCM fields (i.e., precipitation).
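
    As a minimal illustrative sketch (not the authors' code), the snippet below shows how SR training pairs can be built by block-averaging a high-resolution target field to GCM-like resolution; the array sizes, the coarsening factor of 8, and the synthetic gamma-distributed rainfall are assumptions for illustration only.

```python
# Sketch of constructing super-resolution (SR) training pairs by coarsening
# a high-resolution target field (cf. Fig. 3a, right). Shapes are illustrative.
import numpy as np

def coarsen(field, factor):
    """Block-average a 2D field by an integer factor (simple conservative coarsening)."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Hypothetical high-resolution daily precipitation fields (time, lat, lon) at ~12 km.
hires = rng.gamma(shape=0.6, scale=5.0, size=(365, 96, 96))

# SR framework: the predictor is simply the coarsened target (~100 km here, factor 8).
coarse = np.stack([coarsen(day, 8) for day in hires])
print(coarse.shape, hires.shape)  # (365, 12, 12) inputs paired with (365, 96, 96) targets
```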

  • Fig. 4.

    An illustration of three ML algorithms used for climate downscaling. (a) First, a random forest, in which a separate model is trained for each grid cell using climate features extracted at that location. (b) Second, a CNN architecture that automatically extracts features from all grid cells within the spatial domain and encodes them as a one-dimensional vector. (c) Third, an end-to-end CNN architecture known as U-Net, whose contractive path reduces the spatial resolution of the input image by a factor of 4.
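
    As a hedged sketch of the per-grid-cell setup in (a), assuming scikit-learn and synthetic data (this is not the authors' implementation), each target grid cell gets its own random forest fit on the same large-scale predictor features:

```python
# Sketch: one random forest per target grid cell (cf. Fig. 4a). Sizes are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days, n_features = 1000, 50                      # flattened coarse circulation predictors
n_cells = 16                                       # number of high-resolution target grid cells
X = rng.normal(size=(n_days, n_features))
y = rng.gamma(0.6, 5.0, size=(n_days, n_cells))    # e.g., daily precipitation per cell

models = []
for cell in range(n_cells):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[:750], y[:750, cell])                 # fit on the first 750 days only
    models.append(rf)

preds = np.column_stack([m.predict(X[750:]) for m in models])
print(preds.shape)                                 # (250, 16): downscaled values for held-out days
```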

  • Fig. 5.

    Two possible deep learning training techniques: (a) a single-task CNN and (b) a multitask CNN. In a single-task CNN, the algorithm is trained to perform one task, such as downscaling rainfall as implemented in Rampal et al. (2022a). In a multitask CNN, the algorithm is trained and optimized to perform two tasks simultaneously, with both tasks sharing the same latent space. In this case, the CNN is trained to act both as an autoencoder (i) and as a regression model that downscales rainfall (ii). The brown squares represent pooling of feature maps with convolution and ReLU, leading to a flattened layer.
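
    A minimal PyTorch sketch of the multitask idea in (b) is given below; it is not the architecture of Rampal et al. (2022a), and the layer sizes, input shapes, and loss weighting (here an unweighted sum) are illustrative assumptions. A shared encoder feeds both a reconstruction decoder and a regression head, so both tasks are optimized through the same latent space:

```python
# Sketch of a multitask CNN: shared encoder, autoencoder head (i), regression head (ii).
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, in_ch=3, hires_pixels=96 * 96):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(                       # task (i): reconstruct the coarse input
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, in_ch, 3, padding=1),
        )
        self.regressor = nn.Sequential(                     # task (ii): downscale rainfall
            nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, hires_pixels),
        )

    def forward(self, x):
        z = self.encoder(x)                                 # shared latent space
        return self.decoder(z), self.regressor(z)

model = MultiTaskCNN()
x = torch.randn(4, 3, 32, 32)                # coarse predictors (batch, variables, lat, lon)
y = torch.relu(torch.randn(4, 96 * 96))      # synthetic flattened high-resolution rainfall target
recon, rain = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.mse_loss(rain, y)
loss.backward()                              # both tasks optimized jointly
```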

  • Fig. 6.

    An illustration of (a) historical and (b) future evaluation strategies for assessing the performance of downscaled simulations from an ML algorithm. Historical validation focuses on two datasets: observations from a period independent of the one used for training, and historical simulations from a GCM.
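
    The snippet below is a minimal sketch of the historical evaluation step in (a): skill is scored against observations from a period withheld from training. The synthetic data and the two metrics shown (mean bias and a 99th-percentile wet-extreme bias) are illustrative assumptions, not a prescribed evaluation protocol.

```python
# Sketch of historical evaluation on a held-out observational period (cf. Fig. 6a).
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(0.6, 5.0, size=(3650, 96, 96))             # ~10 yr of held-out daily observations
downscaled = obs * rng.normal(1.0, 0.15, size=obs.shape)   # stand-in for ML downscaled output

mean_bias = downscaled.mean(axis=0) - obs.mean(axis=0)                 # climatological bias per cell
p99_bias = (np.percentile(downscaled, 99, axis=0)
            - np.percentile(obs, 99, axis=0))                          # wet-extreme bias per cell

print(f"domain-mean bias: {mean_bias.mean():.3f} mm/day")
print(f"domain-mean 99th-percentile bias: {p99_bias.mean():.3f} mm/day")
```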

  • Fig. 7.

    Schematic view of XAI-based diagnostics, adapted from González-Abad et al. (2023a) with permission. Saliency maps, which are interpretations generated using XAI techniques, are computed across all grid points in the predictand space, for each observation (day). Two days (start and end of a given period) are shown in columns, and three grid points representing north, central, and south locations are depicted in the predictand space. The resulting saliency maps are aggregated, yielding two diagnostics that provide insights into different aspects of the ML algorithm [for detailed explanations, refer to González-Abad et al. (2023a)]. Note that the accumulated saliency and saliency dispersion diagnostics are applied over the predictor and predictand domains, respectively.
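
    To make the saliency-map idea concrete, the following PyTorch sketch computes a gradient-based saliency map for a single day and a single predictand grid point; the tiny network, the shapes, and the choice of a plain input gradient (rather than the specific XAI technique of González-Abad et al. 2023a) are assumptions for illustration only.

```python
# Sketch: saliency of one predictand grid point with respect to the predictor fields.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(                                # stand-in downscaling network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 96 * 96),
)

x = torch.randn(1, 3, 32, 32, requires_grad=True)   # one day of coarse predictors
output = net(x).view(96, 96)                        # high-resolution prediction

target_point = output[48, 48]                       # e.g., a "central" predictand grid point
target_point.backward()                             # d(prediction) / d(input fields)

saliency = x.grad.abs().sum(dim=1).squeeze()        # aggregate over predictor variables
print(saliency.shape)                               # (32, 32): saliency over the predictor domain
```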
