Methodology of the Constraint Condition in Dynamical Downscaling for Regional Climate Evaluation: A Review

The dynamical downscaling method with a regional climate model (RCM) is widely used to obtain spatially detailed information about regional climate. However, the RCM result is considerably influenced by the systematic errors inherent to the general circulation model (GCM) that provides the initial and boundary conditions to the RCM. Such systematic errors sometimes lead to meaningless downscaled results. Many modified boundary dynamical downscaling (MBDDS) methods have been proposed to reduce the influence of the systematic errors of a GCM and to extract meaningful signals of regional climate change. This study comprehensively reviews the MBDDS methods. The MBDDS methods partially modify the climate information projected by a GCM and use it as the boundary condition of an RCM. The methods pursue two main objectives: to obtain more reliable projections by correcting the biases in the boundary conditions and to better understand the mechanisms of regional climate change. To ensure a comprehensive understanding of the MBDDS methods, this study interprets the errors included in the downscaled results using mathematical expressions, separating the GCM-originated bias from the RCM's own bias. Based on this analysis, the MBDDS methods are classified according to the following questions: What effect is expected from the bias correction? Which of the climate change components projected by a GCM is considered when assessing the future climate change? The directions and issues that need to be addressed for a better understanding of regional climate change are also discussed.


Introduction
Recently, extreme weather events, including heat waves and heavy rainfall, have been observed throughout the world (e.g., Herring et al., 2018) and have considerably affected human life and daily activities. Event attribution approaches have shown that the occurrence risks of extreme weather events have increased because of global climate change, which can be attributed to the increased amount of greenhouse gases in the atmosphere since the industrial revolution (e.g., Imada et al., 2019; Pall et al., 2011; Stott et al., 2004). Extreme weather events and local climate changes are projected to intensify with any further increase in the amount of greenhouse gases (e.g., Prein et al., 2017; Yamada et al., 2010). Risk mitigation measures and adaptation plans for global climate change remain an urgent issue throughout the world, driving an increasing demand for reliable and spatially detailed future climate information.
The spatial resolution of the general circulation models (GCMs) used to simulate the global climate change has increased with the increasing computational capability (Cubasch et al., 2013). However, the spatial resolution remains insufficient to assess the local-governmental-scale climate change. Although global atmospheric simulations with a horizontal grid spacing of subkilometer to several kilometers have already been achieved using atmospheric global cloud-resolving models (e.g., Miyamoto et al., 2013; Satoh et al., 2019), the simulation periods of these models only range from several days to tens of days. More integrated GCMs, including coupled atmosphere-ocean GCMs (CGCMs) or Earth system models, are necessary to evaluate the century-scale climate change; many of these models are set at a grid spacing of 100 km or more, even in the age of the Coupled Model Intercomparison Project 5 (CMIP5) (Flato et al., 2013). Downscaling techniques have been widely used to fill the scale gap between the global climate projections and the demand for local climate information (Giorgi et al., 2001).

10.1029/2019JD032166
Downscaling methods can be categorized into two types. One type is the statistical downscaling method (Wang et al., 2004; Wilby & Wigley, 1997), and the other type is the dynamical downscaling (DDS) method (Giorgi & Mearns, 1991). The statistical downscaling method deduces fine-scale information using statistical relations between the large- and fine-scale climate information (Wilby et al., 2004). Users must establish the statistical relations beforehand using available data, including past observation data, reanalysis, or GCM present climate data. The DDS method uses a high-resolution regional climate model (RCM) to obtain the fine-scale climate information; the RCM uses the large-scale atmospheric state obtained from a GCM as the boundary condition (Dickinson et al., 1989; Giorgi & Bates, 1989). The DDS method requires a larger computational cost than the statistical downscaling method. However, it has the significant advantage that the downscaled results are easier to interpret physically because the model is based on physical principles. A high-resolution GCM with a stretched grid is another useful DDS approach (e.g., Fox-Rabinovitz et al., 2006, 2008). The stretched grid technique facilitates high-resolution calculation over the region of interest by locally enhancing the resolution rather than refining the grid uniformly (e.g., Côté et al., 1998; Courtier & Geleyn, 1988; Katzfey et al., 2016; McGregor, 2015; Tomita, 2008). In this study, however, we focus on DDS by an RCM. The advantages and disadvantages of downscaling by a GCM in comparison with an RCM will be discussed from the viewpoint of imposing the boundary condition on the target region in the discussion section (section 4.5.1).
Many studies have investigated regional climate change using the DDS technique since the first studies conducted by Dickinson et al. (1989) and Giorgi and Bates (1989), as reviewed in Giorgi and Mearns (1991), Wang et al. (2004), Kitoh et al. (2016), and Giorgi (2019), among others. In the past decade, several DDS projects were conducted, wherein various organizations participated to assess the future regional climate change in each country or region, for example, the Ensembles-Based Predictions of Climate Changes and Their Impacts in Europe (ENSEMBLES) (van der Linden & Mitchell, 2009), the North American Regional Climate Change Assessment Program (NARCCAP) (Mearns et al., 2012), the S5 project led by Japan's Ministry of the Environment (Ishizaki et al., 2012), and the Coordinated Regional Climate Downscaling Experiment (CORDEX) (Giorgi et al., 2009) in the World Meteorological Organization's World Climate Research Programme framework. These projects advanced our understanding of the bias and uncertainty associated with the downscaled climate (e.g., Takayabu et al., 2016). The future regional climate projected using the DDS method suffers from several uncertainty factors (Wilby & Dessai, 2010). A climate projection by a GCM, which provides the boundary conditions to an RCM, involves three main uncertainty factors: the internal variability of the climate system that is independent of global warming, differences in future greenhouse gas scenarios, and systematic errors (model uncertainty) inherent to the GCM (Hawkins & Sutton, 2009). In addition, the systematic errors inherent to an RCM are included in the downscaled results of the RCM (Misra, 2007). Regional scenarios, including land use, economic activity, and population of the target city, also affect the regional climate projections (Adachi et al., 2014; McCarthy et al., 2010).
Thus, the uncertainty with respect to the downscaled climate is a significant problem when assessing the regional issues arising from global climate change (Pielke Sr & Wilby, 2012;Wilby & Dessai, 2010).
In the history of DDS, the influence of the systematic errors of the GCM has always been a significant problem. The GCM systematic errors are mainly caused by the imperfections in their physical schemes and the uncertainty in the upper and lower boundary conditions. The imperfections cause an incorrect interaction between the model components, resulting in an incorrect feedback between multiscale phenomena in the simulated climate. The most conventional DDS method directly uses the simulation result obtained from a GCM as the boundary conditions of an RCM (hereafter, the method is referred to as the direct DDS method). Thus, a direct DDS simulation by an RCM inherits the systematic errors of the GCM (e.g., Caldwell et al., 2009;Rojas & Seth, 2003;Warner et al., 1997;Wu et al., 2005); consequently, the downscaled climate estimated from an inaccurate boundary condition may yield erroneous information. This problem is sometimes referred to as the "garbage in, garbage out" problem (Hall, 2014).
Recently, the GCM performance has been significantly improved through model sophistication and increases in spatial resolution. A new discussion has emerged on whether the errors of a GCM or an RCM are more critical for the downscaled results. This is partially related to the question of how much an RCM simulation is affected by the host GCM providing the boundary conditions. Several studies have reported that the differences in the RCM results were mainly derived from the GCM differences, based on comparison studies conducted using multiple GCMs and RCMs (e.g., Déqué et al., 2007; Inatsu et al., 2015). However, Suzuki-Parker et al. (2018) showed that the contributions of the RCM and GCM uncertainties to the evaluation of changes in precipitation indices are comparable in magnitude; their analyses suggested that the RCM uncertainty is potentially linked to the model configurations, such as physics schemes and model topography. Regarding the relation between the RCM and GCM biases, Sørland et al. (2018) reported that they are neither additive nor independent. In any case, there is no doubt that the systematic errors inherent to the GCMs and RCMs are major causes of the reduction in the reliability of the projection results.
Based on the aforementioned background, various DDS methods, which differ from the conventional DDS method, have been proposed. These proposed methods have two main objectives. The first objective is to eliminate the model biases with respect to the boundary conditions provided by a GCM as much as possible. The second objective is to understand the mechanism of regional climate response to global warming in a target region. The RCM simulations can be used to obtain not only high-resolution information but also the climate response to a specific constraint. By imposing the constraint on an RCM via the boundary conditions, the processes become easy to understand. In this study, these methods are collectively referred to as the modified boundary dynamical downscaling (MBDDS) methods.
This study focuses on the MBDDS methods and comprehensively reviews each of them to properly understand their concepts and aims. Recently, Xu et al. (2019) summarized some MBDDS methods from the viewpoint of bias correction. This study extends the scope of the MBDDS methods beyond Xu et al. (2019) and reorganizes the methods based on the following two questions. First, what assumptions are made and what improvements are intended in the bias correction procedure of each method? Second, which part of the future climate change projected by a GCM is considered reliable as input for an RCM? The requirements for regional climate projection research have gradually changed with time through the development of computers and research progress. Given the recent rapid progress in this field, it is important to reorganize the existing improved downscaling methods and clarify their differences with respect to the two aforementioned questions. This attempt will promote further development of the DDS method and broaden its understanding. This paper is structured as follows. Section 2 introduces the basic concepts used to organize the MBDDS methods and the mathematical notations used in this study. Section 3 introduces the published MBDDS methods using the concepts derived in section 2. In section 4, the influences of the model biases included in the downscaled results are interpreted, and the MBDDS methods are organized in terms of the two aspects of bias correction and the reliability of future projections in GCMs. In addition, the nonlinear effects that cannot be avoided when using the MBDDS methods are discussed, and several practical configurations for an experiment are specifically addressed. Finally, section 5 provides a summary of this study, addressing issues related to the current MBDDS methods and demonstrating future prospects in this field of study.

Conceptual Idea and Mathematical Description
In this section, several terms used in this study are defined to facilitate conceptual thinking about downscaling. First, the models referred to as GCMs and RCMs are clarified. There are various types of global models (Cubasch et al., 2013), including atmospheric GCMs (AGCMs), CGCMs, and Earth system models; GCM is used to represent all of them. RCMs are also of various types, similar to GCMs (Giorgi & Gao, 2018; Peng et al., 2012). Although Giorgi and Gao (2018) argued for the importance of air-sea interaction, the majority of previous studies on regional climate have used an atmospheric model under a given ocean surface condition. Thus, the atmospheric RCM is referred to as an RCM in this study.
Next, the initial and boundary conditions are considered, which are required by the RCMs during simulation. In this study, we refer to them as the constraint conditions because the results of a parent model control the large-scale state, even in RCM simulations. Let a constraint to the RCM be denoted by Ψ, which has spatial dimensions x, y, z, and a time dimension t. The resolution of Ψ depends on the parent data, that is, the GCM results or the reanalysis data. Ψ is a set containing multiple variables as elements. The typical GCM and reanalysis data include temperature (T), geopotential (Φ), eastward and northward wind velocities (U, V), and relative humidity (RH). The specific humidity (Q_v) is sometimes used instead of RH. In this study, a variable belonging to Ψ is denoted as Ψ[α], where α labels the variable. The time average, \overline{Ψ}, is defined as the climatology component, whereas the deviation from it, Ψ'[t], is defined as the perturbation component:

Ψ[t] = \overline{Ψ} + Ψ'[t].

Nishizawa et al. (2018) proposed an interpretation of the downscaling procedure using a conceptual phase-space diagram for the climatology and perturbation components. According to this proposal, a constraint to an RCM can be established by combining the two components from different data sets. For example, if two data sets, Ψ_A and Ψ_B, are available, then

Ψ* = \overline{Ψ_A} + Ψ'_B[t],    (1)

where Ψ* is a constraint condition. Most of the MBDDS methods are primarily based on this concept.
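The component decomposition and the recombination in equation (1) can be sketched numerically. The following is a minimal NumPy illustration with synthetic data; all array names, shapes, and values are illustrative assumptions, not part of any published data set.

```python
import numpy as np

# Synthetic 6-hourly temperature fields from two hypothetical data sets
# (e.g., reanalysis "A" and a GCM "B"); shape: (time, z, y, x).
rng = np.random.default_rng(0)
psi_a = 280.0 + rng.normal(0.0, 5.0, size=(120, 4, 8, 8))
psi_b = 278.0 + rng.normal(0.0, 7.0, size=(120, 4, 8, 8))

def decompose(psi):
    """Split a field into its climatology (time mean) and perturbation."""
    clim = psi.mean(axis=0, keepdims=True)
    return clim, psi - clim

clim_a, _ = decompose(psi_a)
_, pert_b = decompose(psi_b)

# Equation (1): climatology of A combined with the perturbation of B.
psi_star = clim_a + pert_b

# By construction, the time mean of the constraint equals A's climatology.
assert np.allclose(psi_star.mean(axis=0), clim_a[0])
```

In practice, the decomposition is computed per variable and per calendar period (e.g., monthly) rather than over the whole record; this sketch uses a single time mean for brevity.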
Then, consider an output result, D, from an RCM. Nishizawa et al. (2018) considered it as a function of the two components of the large-scale atmospheric state; that is,

D = 𝒟(\overline{Ψ}, Ψ'[t]),    (2)

where 𝒟 represents the downscale operator, which denotes that downscaling has been conducted. The output from the downscale operator includes GCM- and sub-GCM-scale information, which can be resolved by a high-resolution RCM. Because the downscaling simulation includes feedbacks from the sub-GCM-scale variation to large-scale phenomena, the large-scale state in the downscaled results is slightly modulated from that in the constraint. In section 4.1, more details on the downscale operator and its implications are provided.

Modified Boundary DDS Methods
In this section, the MBDDS methods are reviewed based on their characteristics and objectives and are ordered roughly chronologically. The reviewed MBDDS methods are presented in Table 1. The review in this section will form the basis for the discussion in the subsequent section.

Direct DDS Method
Before introducing the MBDDS methods, the conventional downscaling method, that is, the direct DDS method, is presented to ensure easy comparison with the MBDDS methods. The RCM constraints are constructed from the GCM results without any modification; that is, A = B in equation (1). Let the present and future large-scale atmospheric states calculated by a GCM be denoted as G_p and G_f, respectively. The constraint conditions of the direct DDS method can be simply described as

B_p = \overline{G_p} + G'_p[t],    (3)
B_f = \overline{G_f} + G'_f[t],    (4)

where B_p and B_f represent the constraint conditions for the present and future climate experiments, respectively.

Surrogate Climate Change Method
The surrogate climate change method (hereafter, the SCC method), which was proposed by Schär et al. (1996), is the simplest method to examine the influence of global warming on the regional climate. This method considers only the thermodynamic change, which is an effect of an increase in temperature associated with the increasing amount of greenhouse gases.
The SCC method uses the reanalysis data instead of the present climate data of a GCM as the reference state for the constraint condition:

B_p = \overline{A} + A'[t],    (5)

where A represents the reanalysis data.

Table 1. Summary of the Reviewed Downscaling Methods

Direct DDS: Direct dynamical downscaling method. The most conventional DDS method, which uses reanalysis data or the simulation result from a model (usually a GCM) directly as the constraints for an RCM. Dickinson et al. (1989) and Giorgi and Bates (1989).

SCC: Surrogate climate change method. The simplest method for evaluating the regional climate responses to thermodynamic changes (temperature warming associated with increasing greenhouse gases) due to global warming. Schär et al. (1996).

PGW: Pseudo global warming method. The method for evaluating regional climate responses to thermodynamic and dynamic changes in a large-scale atmospheric mean state (the climatology component) due to global climate change. Kimura and Kitoh (2007) and Sato et al. (2007).

MBC: Mean bias correction method. The method for precisely estimating regional climate with bias-corrected GCM outputs; the concept of BC is to eliminate biases in the climatology component. Misra and Kanamitsu (2004) and Holland et al. (2010).

MVBC: Mean and variance bias correction method. The method for precisely estimating regional climate with bias-corrected GCM outputs; the concept of BC is to correct the magnitude of the perturbation as well as to eliminate the biases in the climatology component. Xu and Yang (2012).

QQC: Quantile-quantile correction method. The method for precisely estimating regional climate with bias-corrected GCM outputs; the concept of BC is to correct the cumulative distribution functions (CDFs) for each variable used in the constraint. Colette et al. (2012).

NBC: Nesting bias correction method. The method for precisely estimating regional climate with bias-corrected GCM outputs; the concept of BC is to correct the low-frequency variability with monthly to yearly time scales. Rocheta et al. (2017).

SFS: Sequential factor separation method. The method for understanding the mechanisms of regional climate change due to global climate change; the method separates large-scale atmospheric changes into three factors and evaluates their implications for regional climate. Kröner et al. (2017) and Schär and Kröner (2017).

—: The method for understanding the mechanisms of regional climate change due to global climate change; the method quantifies the implications of the changes in the climatology and perturbation components as well as that of the nonlinearity between the two component changes. Adachi et al. (2017).

Note. The details of each method are introduced in section 3. "BC" is the abbreviation of bias correction.
The SCC method assumes, as the large-scale climate change, that the temperature increases uniformly in space by a certain amount and that the water vapor increases in accordance with the increase in temperature. The temperature increment is defined as the change in the mean value, horizontally and temporally averaged at a certain height level, z_0:

ΔT = \overline{G_f[T]}|_{z_0} − \overline{G_p[T]}|_{z_0},    (6)

where the overbar here denotes the horizontal and temporal average. Schär et al. (1996) set the representative height, z_0, to 850 hPa. Using equation (6), the temperature constraint after the climate change can be described as

B_f[T] = A[T] + ΔT.    (7)
To ensure dynamic balance, the geopotential is calculated so that the hydrostatic balance equation and the equation of state are satisfied. Although there are several ways to set up the constraint condition of the atmospheric moisture content for the future climate experiment, this method assumes that the RH does not change in association with the climate change. Other methods for setting up the moisture constraint in future climate experiments are discussed in section 4.5.2. Finally, the constraint to an RCM after the climate change can be described as

B_f = {B_f[T], A[Φ] + ΔΦ, A[U], A[V], A[RH]},    (8)

along with equation (7), where ΔΦ is the change in geopotential due to ΔT. Wu and Lynch (2000) used a method similar to the SCC method to examine the sensitivity of the terrestrial carbon exchanges to the thermodynamic climate change. Their method replaced only the air temperature and specific humidity in the reanalysis data with those from the future climate data simulated by a GCM with increasing CO2.
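The SCC construction described above can be sketched as follows. This is a minimal NumPy example on synthetic pressure-level temperatures; the level set, lapse rate, and noise amplitudes are illustrative assumptions, and the geopotential is rebuilt with a simple dry hydrostatic integration.

```python
import numpy as np

RD = 287.0  # gas constant for dry air [J kg-1 K-1]

# Synthetic temperatures on pressure levels [hPa], ordered bottom to top.
plev = np.array([1000.0, 925.0, 850.0, 700.0, 500.0])
rng = np.random.default_rng(1)
shape = (plev.size, 4, 4)
t_a  = 288.0 - 0.05 * (1000.0 - plev)[:, None, None] + rng.normal(0, 1, shape)
t_gp = t_a + rng.normal(0, 1, shape)           # GCM, present climate
t_gf = t_gp + 3.0 + rng.normal(0, 0.2, shape)  # GCM, future climate

# Equation (6): spatially uniform warming diagnosed at z0 = 850 hPa.
k850 = int(np.argwhere(plev == 850.0)[0, 0])
delta_t = float((t_gf[k850] - t_gp[k850]).mean())

# Equation (7): warmed temperature constraint (RH is kept unchanged).
t_future = t_a + delta_t

def hydrostatic_geopotential(t, plev, phi_sfc=0.0):
    """Integrate dPhi = -Rd * T * d(ln p) upward from the lowest level."""
    phi = np.empty_like(t)
    phi[0] = phi_sfc
    for k in range(1, plev.size):
        t_mid = 0.5 * (t[k] + t[k - 1])
        phi[k] = phi[k - 1] + RD * t_mid * np.log(plev[k - 1] / plev[k])
    return phi

# Equation (8): geopotential rebuilt so the warmed state stays balanced.
phi_future = hydrostatic_geopotential(t_future, plev)
```

In a real application, ΔT would be diagnosed from GCM output as in Schär et al. (1996), and the virtual temperature (rather than the dry temperature used here for simplicity) would typically enter the hydrostatic integration.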

Pseudo Global Warming Method
The pseudo global warming method (hereafter, the PGW method) was proposed by Kimura and Kitoh (2007) and Sato et al. (2007). Lynn et al. (2009) independently proposed the same method and called it mean signal nesting. The PGW method can be interpreted as an extension of the SCC method. The SCC method modifies the constraint by only ensuring a constant increase in temperature, whereas the PGW method considers all the future changes with respect to the climatology component.
Similar to the SCC method, the reanalysis data are used as the constraint condition for the present climate simulation, that is, equation (5). The constraint for the future climate experiment can be described as

B_f = \overline{A} + (\overline{G_f} − \overline{G_p}) + A'[t].    (9)

From equation (9), it is obvious that the PGW method does not consider any changes in the perturbation component. Thus, the regional climate response to the modulations of large-scale disturbances, including tropical cyclones and midlatitude low-pressure systems, cannot be discussed. However, using the same perturbation component in the two climate experiments has some advantages because the assessment of the regional climate responses is not affected by changes in internal variability. Therefore, the method is applicable not only to long-term climate simulations (e.g., Kawase et al., 2009; Sato et al., 2007) but also to short-term event simulations. For example, Hara et al. (2008) and Rasmussen et al. (2011) investigated the impact of global warming on snow cover by applying the PGW method to a couple of years with different characteristics of snow amount. The PGW method can also be used to assess the damage when past disaster-class events occur under warmer and more humid climate conditions (e.g., Lynn et al., 2009; Takemi et al., 2016).
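The PGW constraint in equation (9) can be sketched with synthetic NumPy arrays; the monthly handling and all numbers below are illustrative assumptions (operational implementations often interpolate the monthly climate-change increments to the model time step).

```python
import numpy as np

rng = np.random.default_rng(2)
nt, ny, nx = 124, 6, 6
# Synthetic 6-hourly reanalysis series and monthly GCM climatologies.
reanalysis       = 285.0 + rng.normal(0.0, 4.0, (nt, ny, nx))
gcm_present_clim = 284.0 + rng.normal(0.0, 1.0, (12, ny, nx))
gcm_future_clim  = gcm_present_clim + 2.5   # GCM climate-change signal

def pgw_constraint(a, month, gp_clim, gf_clim):
    """Equation (9): add the GCM climatology change (future minus present)
    to the reanalysis; the perturbation component of A is untouched."""
    delta = gf_clim[month - 1] - gp_clim[month - 1]
    return a + delta

b_f = pgw_constraint(reanalysis, month=7, gp_clim=gcm_present_clim,
                     gf_clim=gcm_future_clim)
```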
Several studies have been conducted to validate the PGW method by comparing it with the direct DDS method. Kawase et al. (2008) applied the PGW method to past precipitation changes in China and proved that this method could appropriately reproduce the spatial pattern of the past precipitation changes in the target region. Adachi et al. (2012) performed a similar verification for a past temperature change in the Tokyo metropolitan area in Japan. Yoshikane et al. (2012) compared the future climate projections performed using the PGW method with those calculated using the direct DDS method by considering the latter as the actual future climate. They concluded that the effect of omitting the perturbation component changes was small.

Mean Bias Correction Method
The mean bias correction method (hereafter, the MBC method) uses the GCM outputs to assess the present and future climates. Whereas the SCC and PGW methods evaluate the regional climate response using the reanalysis data as the reference state, the MBC method uses the GCM output after correcting the biases due to the systematic errors of the GCM. Bias correction is applied only to the climatology component of the GCM output. The original idea, the so-called anomaly nesting (AN) method, was introduced by Misra and Kanamitsu (2004) to improve the reproducibility of the present climate simulated using the direct DDS method. From the late 1990s to the early 2000s, many researchers noted that the downscaled results obtained by the direct DDS method using a GCM simulation were strongly affected by the significant biases in the GCM output and could not always improve the fine-scale structures beyond the GCM grid scale (e.g., Noguer et al., 1998; Risbey & Stone, 1996). These criticisms led researchers to propose the AN method, as discussed in Misra and Kanamitsu (2004).
This method defines the climatology bias as the difference between the climatology components of the GCM and reanalysis data during a reference period. The detected bias is used to correct the constraint condition for an RCM. The constraint for the present climate simulation is described as

B_p = G_p[t] − (\overline{G_p}|_{RPeriod} − \overline{A}|_{RPeriod}),    (10)

where the subscript RPeriod represents the reference period, which can be selected independently of the target period. This means that simulations can be conducted even if reanalysis data are unavailable during the target period. If the reference period is the same as the target period, equation (10) can be simply written as B_p = \overline{A} + G'_p[t]; that is, the climatology component of the GCM is completely replaced by that of the reanalysis data. Misra and Kanamitsu (2004) applied this method to a simulation in South America and the surrounding oceans and investigated its efficacy in improving the reproducibility of summer seasonal simulations. By comparing the downscaled results obtained using the AN method and the direct DDS method, the AN method was shown to improve the reproducibility of the precipitation characteristics, intraseasonal variability, and synoptic variability. To investigate the effects of correcting each variable on the simulated climate, Misra (2007) compared an experiment that applied bias correction to all the prognostic variables with experiments that applied the correction to only one or several prognostic variables. They concluded that the sensitivity of the correction for each variable depends on the size and location of the domain, the model used, and the season simulated and that it is difficult to specify the most sensitive variables before the simulation. Holland et al. (2010) independently proposed the MBC method to assess the future climate with fewer influences from the GCM biases. The MBC method comprises a set of present and future climate experiments that employ the AN method.
The MBC method applies the same bias correction in the future climate experiment as in the present climate experiment, as shown in equation (10). This is equivalent to assuming that the climatology bias in the GCM does not change between the present and future climates. The constraint for the future climate experiment can be given as

B_f = G_f[t] − (\overline{G_p}|_{RPeriod} − \overline{A}|_{RPeriod}).    (11)

Bruyère et al. (2014) verified the effectiveness of this correction method with respect to the number of typhoons occurring over the western Pacific. The number of typhoons generated in the present climate experiment using CGCMs was less than the actually observed number because of the climatology biases of low sea surface temperature (SST) and strong wind shear in the upper atmosphere, which suppressed typhoon genesis. The environment must be suitable for typhoon generation to reproduce the appropriate number of typhoons in the RCM. They proved that the reproducibility of typhoon generation improved considerably by correcting the bias in the climatology component using this method. Using the constraint conditions for the future climate (equation (11)), Holland et al. (2010) and Done et al. (2015) evaluated the changes in future typhoon activity in the western Pacific.
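The MBC corrections in equations (10) and (11) reduce to subtracting one reference-period climatology difference from both GCM runs. A minimal NumPy sketch with synthetic data (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
nt, ny, nx = 240, 6, 6
# Synthetic reference-period reanalysis and a cold-biased GCM run.
a_ref = 285.0 + rng.normal(0.0, 4.0, (nt, ny, nx))
g_p   = 282.0 + rng.normal(0.0, 4.0, (nt, ny, nx))   # present climate
g_f   = g_p + 2.0                                    # future climate

# Climatology bias diagnosed over the reference period.
bias = g_p.mean(axis=0) - a_ref.mean(axis=0)

# Equations (10) and (11): the same (assumed stationary) bias is
# subtracted in both the present and future constraints.
b_p = g_p - bias
b_f = g_f - bias
```

Note that the perturbation component of the GCM, including its projected changes, passes through unmodified; only the time-mean state is shifted.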

Mean and Variance Bias Correction Method
Xu and Yang (2012) extended the MBC method and proposed the mean and variance bias correction method (hereafter, the MVBC method), which corrects not only the climatology component but also the perturbation component based on the reanalysis data. The aforementioned SCC and PGW methods do not consider the future changes of the perturbation component. The MBC method considers the changes in the perturbation component without any correction. However, the perturbation can change in association with the climate changes and may contain significant bias; the MVBC method addresses these problems.
The bias correction of the climatology component in the MVBC method is identical to that in the MBC method. The perturbation component is corrected so that its standard deviation in the present climate matches that of the reanalysis. The constraint conditions in this method can be given as

B_p = \overline{A}|_{RPeriod} + (σ_A|_{RPeriod} / σ_{G_p}|_{RPeriod}) G'_p[t],
B_f = \overline{G_f} − \overline{G_p}|_{RPeriod} + \overline{A}|_{RPeriod} + (σ_A|_{RPeriod} / σ_{G_p}|_{RPeriod}) G'_f[t],    (12)

where σ_A|_{RPeriod} and σ_{G_p}|_{RPeriod} are the standard deviations of the reanalysis data and the present climate data from the GCM, respectively. As shown by equation (12), the MVBC method corrects only the magnitude of the perturbation component and does not correct its frequency and timing. Furthermore, the bias in the magnitude of the perturbation component is assumed not to change in the future climate because the same perturbation bias coefficient, σ_A|_{RPeriod} / σ_{G_p}|_{RPeriod}, is adopted in the present and future climate experiments. Xu and Yang (2012) conducted experiments using the direct DDS, MBC, and MVBC methods for the past climate change in North America to verify the effectiveness of the MVBC method. They demonstrated that the MVBC method improved the downscaled atmospheric dynamic field and the precipitation climatology compared with the direct DDS method. They also reported that the frequency distributions of temperature and precipitation improved in extreme ranges, for example, the distribution of the daily maximum temperature during summer.
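The MVBC construction in equation (12) can be sketched as follows; this is a minimal NumPy example with synthetic data in which the GCM run is both cold-biased and over-dispersive (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
nt, ny, nx = 240, 6, 6
a_ref = 285.0 + rng.normal(0.0, 4.0, (nt, ny, nx))   # reanalysis
g_p   = 282.0 + rng.normal(0.0, 6.0, (nt, ny, nx))   # biased GCM, present
g_f   = g_p + 2.0                                    # biased GCM, future

mean_a,  sd_a  = a_ref.mean(axis=0), a_ref.std(axis=0)
mean_gp, sd_gp = g_p.mean(axis=0),   g_p.std(axis=0)
scale = sd_a / sd_gp   # perturbation-magnitude correction factor

# Equation (12): correct the climatology and rescale the perturbation;
# the same scale factor is reused in the future climate experiment.
b_p = mean_a + scale * (g_p - mean_gp)
mean_gf = g_f.mean(axis=0)
b_f = (mean_gf - mean_gp + mean_a) + scale * (g_f - mean_gf)
```

Because each variable is rescaled independently, this sketch shares the dynamic-consistency caveat discussed below.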
One concern associated with the MVBC method is that the dynamic consistency of the constraints cannot be ensured because the coefficients required to correct the perturbation component are calculated individually for each variable. In addition, if the number of samples used to calculate the coefficients is very small, the coefficients may be affected by differences in internal variability and daily fluctuation. In Xu and Yang (2012), the climatology component and the standard deviations were calculated every six hours based on 6-hourly data sets for 30 years (this information is available in Xu & Yang, 2015). These drawbacks may be mitigated by calculating the mean and standard deviations using data over a longer period.
Another concern associated with the MVBC method is the change in climate trends between the present and future climates; the changes in climate trends are included in the perturbation component (Xu et al., 2019). If the changes in climate trends are considerably large, applying bias correction to the future perturbation  component means the correction of not only the synoptic-scale fluctuations but also climate trends. In this case, it is useful to divide a large-scale atmospheric state into three categories, that is, the climatology component as the time average, its trend, and others (i.e., fluctuations), and subsequently conduct magnitude correction of the fluctuations. Such a method has been proposed as a bias correction method for the SST condition used in AGCM Mizuta et al., 2008). Colette et al. (2012) proposed the quantile-quantile correction method (hereafter, the QQC method) based on the quantile matching method (Déqué, 2007;Wood et al., 2004) along with the cumulative distribution function transform (Michelangeli et al., 2009), which was first proposed as a statistical downscaling method with bias correction. The bias corrections performed using statistical techniques are usually applied to the model output after completing the model calculations. However, the QQC method is applied in advance to the GCM output for preparing the bias-corrected constraint to an RCM. This method has the benefits of the dynamical and statistical downscaling methods; the RCM output is expected to simultaneously have less bias and maintain physical consistency between the variables. Although there is no restriction that the input and output variables must be the same in statistical downscaling, the correction process of the QQC method considers them to be the same variables.

Quantile-Quantile Correction Method
In the present climate experiment, the GCM results are corrected using the reanalysis data and the quantile matching method. Initially, this method constructs cumulative distribution functions (CDFs) for the model result and the reanalysis data. Subsequently, the GCM data are corrected to ensure that their CDF matches that of the reanalysis data. An unbiased value, A[ψ], is determined under the assumption that F_Ψ[ψ] is the CDF for a variable ψ belonging to Ψ. Once the quantile of the model value is obtained, the value of the reanalysis data corresponding to that quantile is specified (Figure 1). This correction is applied individually to the three-dimensional atmospheric variables. The constraint for the RCM in the present climate simulation can thus be given as A[ψ] = F_R^{-1}[F_{G_p}[ψ]], where ψ ∈ {T, U, V, RH}, F_R and F_{G_p} are the CDFs of the reanalysis data and the present climate GCM data, respectively, and F_Ψ^{-1}[ ] is the inverse function of the CDF F_Ψ[ ]. The correction is performed for each grid of the GCM. All the variables, except temperature and geopotential, are corrected on a monthly basis, whereas the temperature is corrected on a 6-hourly basis to express the diurnal cycle. Further, the geopotential is recalculated using the hydrostatic balance based on the corrected temperature.
In the future climate experiment, the GCM results are corrected similarly to the present climate experiment. However, the major difference is that an unbiased CDF is not available for the future climate because the actual future climate is unknown. To overcome this problem, the CDF-transform method (CDF-t) was proposed (Michelangeli et al., 2009). CDF-t estimates an unbiased CDF for the future climate as follows. It is assumed that there exists some transformation T that directly converts the CDF of the present climate data of a GCM into the CDF of the reanalysis data; that is, F_R = T(F_{G_p}). If this relation remains valid and the transformation T does not change in the future climate, the unbiased CDF for the future climate can be obtained as F_{R_f} = T(F_{G_f}) (equation (14)). Using equation (14), the constraint for the future climate is obtained by quantile mapping the future GCM data onto F_{R_f}. A well-known disadvantage of posterior bias correction using statistical methods is the physical inconsistency of the corrected variables. The corrected constraint conditions face the same problem. There is a concern that the correction ignores the spatiotemporal relations between variables, potentially adding spurious fluctuations to the corrected data. Colette et al. (2012) recognized this problem but expected an RCM simulation to eliminate the spurious fluctuations. White and Toumi (2013) also discussed this issue by comparing the QQC method with the MBC method. They argued that although the influences of the unbalanced variables may be mitigated by some relaxation schemes, these may not be sufficient to eliminate the spurious variability associated with the QQC method.
Their results, based on simulations conducted over a river basin in South Africa, showed that both methods improved the reproducibility of the yearly average precipitation but that the QQC method overestimated the variability of the monthly precipitation; the maximum yearly precipitation fell outside the range of natural variability.
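The two correction steps of the QQC approach, present-climate quantile matching and a CDF-t-style future correction, can be sketched with empirical CDFs. This is a minimal sketch on synthetic data; the function names and the linear-interpolation details are our own choices, not those of Colette et al. (2012) or Michelangeli et al. (2009).

```python
import numpy as np

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x (linear interpolation)."""
    s = np.sort(sample)
    return np.interp(x, s, np.linspace(0.0, 1.0, s.size))

def inv_ecdf(sample, q):
    """Empirical quantile function (inverse CDF) of `sample` at probability q."""
    s = np.sort(sample)
    return np.interp(q, np.linspace(0.0, 1.0, s.size), s)

def quantile_map_present(x, model_p, obs):
    """Present-climate correction: A[psi] = F_obs^{-1}(F_model(psi))."""
    return inv_ecdf(obs, ecdf(model_p, x))

def cdf_t_future(x, obs, model_p, model_f):
    """Future-climate correction in the spirit of CDF-t: the future 'observed'
    CDF is estimated by applying the present model->obs transform to the future
    model CDF, and x is quantile-mapped onto that estimated CDF. Algebraically
    this gives x_corr = F_mF^{-1}(F_mP(F_oP^{-1}(F_mF(x))))."""
    q = ecdf(model_f, x)  # quantile of x in the future model distribution
    return inv_ecdf(model_f, ecdf(model_p, inv_ecdf(obs, q)))
```

For a model that is uniformly 2 units too warm in the present climate and warms by a further 1 unit in the future, `cdf_t_future` removes the 2-unit bias while retaining the 1-unit change signal.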

Nesting Bias Correction Method
The nesting bias correction method (hereafter, the NBC method) was initially proposed by Sharma (2009, 2011) as a postprocessing statistical bias correction method. Its objective was to improve the low-frequency variability of the precipitation simulated by a GCM from the viewpoint of water resource assessment. Rocheta et al. (2017) applied this method to the large-scale atmospheric variables used as the constraint conditions of an RCM. They expected that the low-frequency variability of the downscaled output, such as precipitation, would be improved by correcting the low-frequency variability of the constraint conditions.
The constraint conditions for the present climate experiment in this method are constructed as follows. First, the monthly average of the model for month m is standardized using the model's mean and standard deviation for that month, yielding ⟨g⟩_m. The corrected monthly average of the model, ⟨G_p[ψ]⟩_m, is then obtained by rescaling ⟨g⟩_m based on the mean and standard deviation of the reanalysis data for month m. The time series of corrected yearly averages can also be calculated via the same procedure based on the monthly averages, except that the corrected monthly averages are used; that is, the time series of yearly averages calculated from the corrected monthly averages is treated as the uncorrected yearly average, and the same procedure is then applied to it.
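The nesting idea can be sketched for two time scales (monthly and yearly). The published NBC method also adjusts lag-one autocorrelation at each scale, which this sketch omits, and the array layout and function names are illustrative assumptions.

```python
import numpy as np

def rescale(series, model_mean, model_std, ref_mean, ref_std):
    """Standardize with the model statistics, then rescale with the
    reference (reanalysis) statistics."""
    return (series - model_mean) / model_std * ref_std + ref_mean

def nested_bias_correct(monthly, ref_monthly):
    """Two-level nesting sketch (monthly -> yearly): correct the monthly means
    first, then correct the yearly means computed from the corrected monthly
    series, and shift the monthly values so they are consistent with the
    corrected yearly means. Both inputs have shape (n_years, 12)."""
    # Step 1: monthly-scale correction, calendar month by calendar month
    corr = rescale(monthly,
                   monthly.mean(axis=0), monthly.std(axis=0),
                   ref_monthly.mean(axis=0), ref_monthly.std(axis=0))
    # Step 2: yearly-scale correction of the yearly means of the corrected series
    yearly = corr.mean(axis=1)
    ref_yearly = ref_monthly.mean(axis=1)
    corr_yearly = rescale(yearly, yearly.mean(), yearly.std(),
                          ref_yearly.mean(), ref_yearly.std())
    # Step 3: shift each year's monthly values to match its corrected yearly mean
    return corr + (corr_yearly - yearly)[:, None]
```

After the correction, the monthly climatology matches the reference, and the interannual (low-frequency) variability of the yearly means matches the reference as well.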
Although Rocheta et al. (2017) only demonstrated the present climate experiment using the NBC method, the constraint conditions for a future climate experiment can be obtained in a manner similar to that for the present climate experiment, except for the rescaling based on the mean and standard deviations of the reanalysis data. The bias-corrected mean and standard deviations based on the GCM future climate can be used as one way of rescaling. Rocheta et al. (2017) compared the NBC, MBC, and MVBC methods with respect to the precipitation data in Australia from 1980 to 2010. They intended to improve the RCM output, especially the low-frequency variability of precipitation, by correcting the low-frequency variability bias of the RCM constraints. However, the obtained results differed from the expectations; the low-frequency variability of the constraint conditions does not play a dominant role in the low-frequency variability of precipitation. The bias correction of the climatology component, that is, the MBC method, significantly improved the results, whereas the improvement obtained using the NBC method was limited. Rocheta et al. (2017) demonstrated that more complicated techniques do not necessarily result in more skillful simulations.

Sequential Factor Separation Method
Kröner et al. (2017) and Schär and Kröner (2017) proposed the sequential factor separation method (hereafter, the SFS method), which extends the SCC and PGW methods. The SFS method distinguishes three impacts of the changes in the large-scale atmospheric state on the regional climate. Here, bias correction is not essential because the objective is to understand the sensitivity to the climate factors. The total regional climate change Δ is obtained as the difference between the future and present direct DDS experiments (equation (18)). Further, the total change Δ can be expressed as the sum of the changes due to the following three factors: Δ = TD + LR + CO (equation (19)),
where TD is the large-scale thermodynamic change effect, LR is the lapse-rate change effect, and CO is the effect of the changes in large-scale circulation and other remaining effects. In practice, TD extracts the response of the regional climate in which the temperature increases uniformly in the target region, similar to that in the SCC method. LR attempts to explain the impact of the changes in vertical stability; the global warming experiments conducted based on the GCMs indicate that the temperature increase differs between the troposphere and the stratosphere (Bony et al., 2006).
Let us consider the experimental series and their constraints to estimate the three effects. The temperatures in the present and future climates projected by a GCM can be decomposed into three components (equation (20)): T(x, y, z, t) = T̄(z_0) + [T̄(z) − T̄(z_0)] + T*(x, y, z, t), where the overbar denotes the average with respect to time and the horizontal directions. The first and second terms on the RHS of equation (20) represent the temperature averaged with respect to time and the horizontal directions at a certain level (z_0) and the vertical variation of temperature, respectively. The third term represents their residual, including the horizontal and temporal variations of temperature. By taking the difference between the two decompositions in equation (20), the temperature change can be described as the sum of the changes in these three components (equation (21)). The experimental set to estimate TD is the same as that in the SCC method, except for the reference state; the present climate data of a GCM are used as the reference state of the TD experiment, that is, equation (3). The temperature constraint in the TD experiment is similar to equation (7): the change in the first component, ΔT̄(z_0), is added to the present climate temperature. As shown in Schär et al. (1996), the constraints on the remaining variables have to be determined to ensure dynamic consistency; they can be obtained using the same procedure as in the SCC method, such as equation (8). Then, TD can be obtained as the difference between the downscaled results of the TD experiment and the reference experiment (equation (23)). LR is estimated using the constraint condition B_TD as the reference state. The constraint condition of temperature in the LR experiment is obtained by further adding the change in the second component, Δ[T̄(z) − T̄(z_0)]. The constraint conditions for the other variables are obtained in a manner similar to the TD experiment. Further, it is assumed that the RH does not differ from the reference state. Thus, LR can be obtained as the difference between the downscaled results of the LR and TD experiments (equation (25)), and CO can be estimated using equations (19), (23), and (25). Kröner et al. (2017) and Schär and Kröner (2017) presented an issue associated with the SFS method: the sensitivity estimated via each experiment depends on the reference state because the changed components interact with the remaining components during a downscaling simulation.
Therefore, they prepared another set of experiments in which they sequentially subtracted each factor from the future direct DDS experiment. In this sequence, the reference state for each experiment differs from that in the aforementioned experimental set. Figure 2 presents the two series of experiments. The left sequence indicates the path from the present climate experiment to the future climate experiment via the TD and LR experiments, whereas the right sequence indicates the return path from the future climate to the present climate via experiments that subtract the TD and LR effects. The two series of sensitivity values determined under the two references are therefore not identical. Schär and Kröner (2017) and Kröner et al. (2017) proposed averaging the two values to estimate TD, LR, and CO. If the direction of the interaction effects is opposite in the two experimental series, the interaction effect is expected to be canceled to some extent. This aspect of the SFS method is explained in detail in section 4.4.

[Figure 3. Schematic of the factor separation for climate component (FSCC) method. The triangles and circles indicate the constraint conditions and the corresponding downscaling outputs for each experiment, respectively. Δ is the regional climate change estimated as the difference between the present and future direct DDS experiments. ΔP and ΔC represent the contributions of changes with respect to the perturbation and climatology components, respectively. The expected climate change is defined as the sum of ΔP and ΔC. Δcp is the difference between the actual and expected future climates (based on Figure 1 of Adachi et al., 2017).]
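The three-term temperature decomposition and the stepwise TD/LR constraints can be sketched as follows. The array layout `T[t, z, y, x]` and the names `sfs_constraints`, `B_td`, and `B_lr` are illustrative assumptions, not the notation of Kröner et al. (2017).

```python
import numpy as np

def decompose(T, z0):
    """Decompose a temperature field T[t, z, y, x] into the three SFS terms:
    (1) the time/horizontal mean at reference level z0 (a scalar),
    (2) the vertical profile deviation from that level (a function of z),
    (3) the residual containing horizontal and temporal variations."""
    profile = T.mean(axis=(0, 2, 3))          # time/horizontal mean per level
    mean_z0 = profile[z0]                     # term 1
    vert = profile - mean_z0                  # term 2 (zero at z0 by construction)
    resid = T - profile[None, :, None, None]  # term 3
    return mean_z0, vert, resid

def sfs_constraints(T_p, T_f, z0):
    """Temperature constraints: the TD constraint adds only the change of
    term 1 (uniform warming) to the present field; the LR constraint then
    adds the change of term 2 (lapse-rate change) as well."""
    m_p, v_p, _ = decompose(T_p, z0)
    m_f, v_f, _ = decompose(T_f, z0)
    B_td = T_p + (m_f - m_p)
    B_lr = B_td + (v_f - v_p)[None, :, None, None]
    return B_td, B_lr
```

For a synthetic future field that adds uniform warming plus a lapse-rate change, the TD constraint recovers only the uniform part, while the LR constraint recovers the level-dependent part on top of it.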
Recently, Brogli et al. (2019) extended the SFS method and attempted to decompose the CO of the SFS method into three parts, that is, the meridional anomaly of the lapse rate, the three-dimensional temperature distribution anomaly from the zonal mean at each vertical level, and the perturbation component.

Factor Separation for Climate Component Method
Adachi et al. (2017) proposed a method to quantify the sensitivities of the changes in the climatology and perturbation components as well as the nonlinear effect of the changes in these two components. This method is called the factor separation for climate component method (hereafter, the FSCC method) in this study. Its mathematical basis was proved in Stein and Alpert (1993). The method comprises two direct DDS experiments for the present and future climates and two pseudo climate experiments called the pseudo climatology change downscaling (Pseudo-Clim-DS) experiment and the pseudo perturbation change downscaling (Pseudo-Perturb-DS) experiment. Figure 3 presents the outline of the procedure. The total difference Δ between the present and future regional climates can be obtained from equation (18). Using the present climate direct DDS as the reference, the Pseudo-Clim-DS experiment, which has the same concept as the PGW experiment, is used to estimate the effects of the changes in the large-scale climatology component, and the Pseudo-Perturb-DS experiment estimates the effects of the changes in the perturbation component.

10.1029/2019JD032166

The constraints for these experiments can be given as B_PCC, which combines the future climatology component with the present perturbation component, and B_PPC, which combines the present climatology component with the future perturbation component, where B_PCC and B_PPC are the constraints for the Pseudo-Clim-DS and Pseudo-Perturb-DS experiments, respectively. The impacts of the changes in the climatology and perturbation components on the regional climate, ΔC and ΔP, are obtained as the differences between the downscaled results of the corresponding pseudo experiments and the present climate direct DDS experiment. The nonlinear contribution of the two changes can then be determined by the relation Δcp = Δ − ΔC − ΔP. Prior to Adachi et al. (2017), Nishizawa et al. (2016, 2018) compared the contributions of the changes in the climatology and perturbation components to the future precipitation change in western Japan. They estimated the contribution of the perturbation change using equations (4) and (26) instead of equation (29).
Their results showed that the contribution of the changes in the perturbation component was more significant than that of the changes in the climatology component; further, the decrease in precipitation accompanying the changes in the perturbation component was mainly explained by the decrease in the number of typhoons passing through the target region. This result suggests that the changes in the perturbation component cannot always be ignored when considering future regional climate change. However, this result contradicts that of Yoshikane et al. (2012), in which a future direct DDS experiment and a Pseudo-Clim-DS experiment were directly compared; they concluded that the effects of the perturbation component are limited.
To further investigate this issue, Adachi et al. (2017) estimated the nonlinear effect Δcp between the two changes as well as each sensitivity (ΔC and ΔP) in the same region and period as Nishizawa et al. (2018) using the FSCC method. They indicated that the changes in the perturbation component still significantly influenced the future climate and that the influence of nonlinearity cannot be ignored. The discrepancies between the results of Nishizawa et al. (2018) and Adachi et al. (2017) and those of Yoshikane et al. (2012) are speculated to be related to differences in the actual meaning of the perturbation component and its projected changes; these vary depending on the parent model used and the evaluation target, such as the region, season, and phenomena. The FSCC method should be applied to multiple GCM projections to reevaluate the impact of the perturbation changes.
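On synthetic time series, the construction of the two pseudo constraints and the FSCC bookkeeping of ΔC, ΔP, and Δcp can be sketched as follows. The toy "downscaling" here is the identity (the point is only the arithmetic of the separation), and the function names are illustrative.

```python
import numpy as np

def split(psi):
    """Split a large-scale field [time, grid] into its climatology
    (time mean) and perturbation (anomaly) components."""
    clim = psi.mean(axis=0)
    return clim, psi - clim

def fscc_constraints(G_p, G_f):
    """Constraints of the two pseudo experiments: Pseudo-Clim-DS combines the
    future climatology with the present perturbation, and Pseudo-Perturb-DS
    combines the present climatology with the future perturbation."""
    clim_p, pert_p = split(G_p)
    clim_f, pert_f = split(G_f)
    return clim_f + pert_p, clim_p + pert_f   # B_PCC, B_PPC

def fscc_separation(d_p, d_f, d_pcc, d_ppc):
    """Factor separation of the downscaled outputs: the total change, the
    contributions of the climatology (dC) and perturbation (dP) changes,
    and their nonlinear interaction (dcp = delta - dC - dP)."""
    delta = d_f - d_p
    dC = d_pcc - d_p
    dP = d_ppc - d_p
    return delta, dC, dP, delta - dC - dP
```

By construction, the total change decomposes exactly into the two pure contributions plus the interaction term, mirroring the relation between Δ, ΔC, ΔP, and Δcp in Figure 3.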

Use of Multiple GCM Projections With the MBDDS Methods
All the MBDDS methods use constraints constructed from the large-scale atmospheric state obtained from GCMs. Because each GCM has its own model bias and physical response to external forcings, the climates simulated by the GCMs differ from each other (e.g., Levy et al., 2013). Although the GCM bias is expected to be reduced by the bias correction in the MBDDS methods, the different responses to external forcings are considered to remain. The issue of how to select the future projections of GCMs is crucial because it substantially affects the downscaled results. One solution to this problem is to provide probabilistic projections using as many climate projections as possible (e.g., Giorgi et al., 2009; Mizuta et al., 2017). The variation in the projected climate indicated by the probabilistic projection provides additional information related to reliability. However, this approach requires many downscaling experiments and considerable computational costs. As alternatives that require relatively low computational costs, the following approaches have been proposed for the MBDDS methods.
The first approach is to replace the future change in the climatology component projected by a single GCM with the multimodel mean projected by multiple GCMs (e.g., Liu et al., 2017). This approach is useful for the SCC, PGW, and MBC methods. Kawase et al. (2009) applied this approach to the PGW method to evaluate the future changes in precipitation associated with the Baiu front, which is a stationary rain front in East Asia. The Baiu front brings heavy rainfall to China and Japan from May to July, resulting in abundant water resources for crop cultivation, even though the heavy rainfall sometimes causes severe disasters. Kawase et al. (2009) selected seven GCMs that presented a relatively good performance in the present climate simulation for the target area and season. They subsequently conducted the PGW experiment using the multimodel mean of the climatology changes from the seven GCMs (equation (32)), where the angle bracket indicates the multimodel mean. They compared the result of the PGW experiment using the multimodel mean with the average of seven PGW experiments for the individual GCM projections and showed that the two results were similar. This means that a single PGW experiment using the multimodel mean is sufficient for estimating the averaged state of multiple PGW experiments, resulting in reduced computational costs. The efficiency of this approach when assessing the changes in surface air temperature was demonstrated by Adachi et al. (2012). Dai et al. (2017) also applied the multimodel mean approach to the MBC method (equations (10) and (11)); they used the multimodel mean to obtain the future change of the climatology component, whereas the perturbation component was obtained from a single GCM.
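The multimodel-mean construction of a PGW-type constraint can be sketched as follows; the function name and the list-of-arrays interface are illustrative assumptions.

```python
import numpy as np

def pgw_constraint_multimodel(reanalysis_clim, gcm_present_clims, gcm_future_clims):
    """PGW-type constraint using a multimodel mean climatology change: the mean
    of the per-model (future minus present) climatology changes is added to
    the reanalysis climatology. All fields are assumed to share one grid."""
    deltas = [f - p for p, f in zip(gcm_present_clims, gcm_future_clims)]
    return reanalysis_clim + np.mean(deltas, axis=0)
```

A single downscaling run driven by this constraint approximates the average of separate PGW runs for each model, which is the computational saving reported by Kawase et al. (2009).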
As described above, even though the MBDDS methods using multiple GCMs are useful, the most efficient way to select and average GCM projections has not yet been established. The equal-weighted average of multiple projections is not always the best estimate (e.g., Shiogama et al., 2011). Model democracy can be an efficient approach; however, assessing the plausibility of the projected climates remains a major research subject (e.g., Flato et al., 2013; Gleckler et al., 2008; Knutti et al., 2010; Knutti, 2010; Stephenson et al., 2012). Although this issue will not be addressed any further in this study, we need to stay abreast of new ideas for evaluating and reconciling different projection results.
The second approach attempts to grasp the possible range of projections using as few downscaling experiments as possible by devising a methodology for selecting GCMs (Wakazuki & Rasmussen, 2015). Although the first approach using the multimodel mean of the climatology component has the advantage of low computational costs, it cannot be used to estimate the range of projections. Wakazuki and Rasmussen (2015) proposed the incremental DDS and analysis system (InDDAS) to overcome this problem. The InDDAS extracts climate change modes with respect to the climatology component from multiple GCM projections. First, they statistically analyzed the climatology anomalies of multiple GCM projections relative to their multimodel mean via singular value decomposition. Then, the analyzed positive and negative climatology change modes were added to the multimodel mean; the resulting climatology change patterns were considered to be typical future changes in the climatology component and used in equation (32). They argued that the InDDAS enabled the approximate estimation of the probabilistic climate change using only a few downscaling simulations. However, they also reported that this method underestimated the variation in mean precipitation when the analyzed area was small.
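An InDDAS-like extraction of climatology-change modes can be sketched with a singular value decomposition of the intermodel anomalies. Scaling the retained modes to a one-standard-deviation amplitude is our own illustrative choice, not necessarily the scaling used by Wakazuki and Rasmussen (2015).

```python
import numpy as np

def climatology_change_modes(changes, n_modes=1):
    """Extract dominant climatology-change modes from multiple GCM projections.
    `changes` has shape (n_models, n_grid): each row is one model's projected
    climatology change flattened onto a common grid. The anomalies about the
    multimodel mean are decomposed by SVD, and the leading modes are returned
    as 'plus' and 'minus' patterns around the multimodel mean."""
    mean = changes.mean(axis=0)
    anom = changes - mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    # one-standard-deviation amplitude for each retained mode
    modes = (s[:n_modes, None] / np.sqrt(changes.shape[0])) * vt[:n_modes]
    return mean, [(mean + m, mean - m) for m in modes]
```

Each returned pair is a candidate "typical" positive or negative climatology change pattern that can be fed into a PGW-type constraint, so only a few downscaling runs are needed to span the intermodel range.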
Further, we must consider the influence of internal variability on the downscaled results because the MBDDS methods aim to estimate the climate change signals and not the effects of internal variability in most cases. To avoid the influences of internal variability, the periods used to calculate the climatology component, the bias correction (i.e., RPeriod), and the downscaling simulation should be sufficiently long. First, we consider reducing the influences of internal variability on the climatology component. To achieve this, Yoshimura and Kanamitsu (2013) proposed using an ensemble mean of simulations with different initial conditions instead of a single simulation result. This concept is technically similar to the first approach presented in equations (32) and (33), except that an ensemble mean from a single GCM is used instead of a multimodel average. Next, we consider the simulation period required to reduce the influences of internal variability. When assessing changes in the climatological mean and extreme events, it is generally recommended to perform simulations over a period of approximately 30 years (e.g., Yoshikane et al., 2013), which requires a high computational cost. To overcome this problem, techniques such as the statistical-DDS method (e.g., Frey-Buness et al., 1995; Fuentes & Heimann, 2000; Heimann & Sept, 2000; Hoffmann et al., 2018; Reyers et al., 2015) and sampling downscaling (SmDS) (Kuno & Inatsu, 2014) have been proposed. These techniques attempt to obtain a result equivalent to that of a long-term downscaling experiment from a short-term downscaling experiment; DDS is applied only to some samples, such as typical years or weather patterns, from the target period. Although most studies have applied these techniques to the direct DDS method, they can also be applied to the MBDDS methods.

Discussion
This section discusses the following three questions with respect to the MBDDS methods to comprehensively understand them. (i) In each method, what effect is expected on the downscaled result due to bias correction? (section 4.2) (ii) Which part of the GCM output is considered reliable when constructing constraints for the future climate experiment in each method? (section 4.3) (iii) In a sensitivity experiment dealing with the changes in multiple climate factors, how should the interactions associated with these changes be interpreted? (section 4.4).

Errors Derived From the GCM and RCM in Downscaled Outputs
Before discussing the MBDDS methods, we interpret the influences of the model biases included in the downscaled results. There are two types of biases in the RCM output, that is, the bias included in the constraint conditions and the bias that can be attributed to the systematic errors inherent to the RCM. The main cause of the former bias is the systematic errors inherent to a GCM (or the parent model), which provides the constraint.
First, consider the constraint condition on an RCM, which represents a large-scale atmospheric state denoted by Ψ. This state can be divided into two parts, the climatology component Ψ^[t] and the perturbation component Ψ′[t], as Ψ = Ψ^[t] + Ψ′[t]. If the true state is introduced as Ψ_0, the GCM biases in the climatology and perturbation components of Ψ are described as ΔΨ^[t] = Ψ^[t] − Ψ_0^[t] and ΔΨ′[t] = Ψ′[t] − Ψ′[t]_0. Although the reanalysis data do not perfectly express the true state, they are assumed to represent it here.
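The two bias terms (the climatology and perturbation biases relative to reanalysis) can be estimated from data as follows. Summarizing the perturbation bias by the difference in anomaly standard deviation is a simplification for illustration; the perturbation bias is in general time dependent.

```python
import numpy as np

def component_biases(gcm, reanalysis):
    """Biases of the climatology and perturbation components of a GCM field
    relative to reanalysis (taken as the true state). Inputs are time series
    on a common grid, shape (n_time, ...). The climatology bias is the
    difference of time means; the perturbation bias is summarized here by
    the difference in the standard deviation of the anomalies."""
    clim_bias = gcm.mean(axis=0) - reanalysis.mean(axis=0)
    pert_gcm = gcm - gcm.mean(axis=0)
    pert_ref = reanalysis - reanalysis.mean(axis=0)
    pert_bias = pert_gcm.std(axis=0) - pert_ref.std(axis=0)
    return clim_bias, pert_bias
```

For a GCM series that is offset by a constant and whose fluctuations are too strong, the first term isolates the offset and the second isolates the excess variability, mirroring the split that the MBC and MVBC methods correct.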
Next, we consider the RCM output. Nishizawa et al. (2018) showed that the downscaled result can be expressed as the sum of the large-scale climatology component, the large-scale perturbation component, and the sub-GCM grid-scale variations. Based on this, the downscale operator in equation (2) can be written as D[Ψ] = C[Ψ^[t]] + P[Ψ′[t]] + A[Ψ^[t], Ψ′[t]] (equation (36)), where the operators C and P express the alteration via downscaling of the large-scale climatology and perturbation components, respectively. The operator A expresses the generation of the sub-GCM grid-scale variations, that is, the so-called added value that is spatially refined by an RCM. Thus, C[Ψ^[t]] and P[Ψ′[t]] are the climatology and perturbation components of the downscaled output, respectively.
Here, the sensitivities are expressed as partial derivatives of the downscaled result with respect to the true state for convenience. Although these expressions are conceptual and not partial differentiation in a formal mathematical sense, they are instrumental in understanding the biases. Equation (36) can then be expanded around the true state, with residual terms of order n + m = 2 (equation (37)). By introducing an ideal RCM without any model bias, whose operators are indicated with a subscript 0, and inputting the true state (Ψ_0^[t], Ψ′[t]_0), the true downscaled state D_true can be described as D_true = C_0[Ψ_0^[t]] + P_0[Ψ′[t]_0] + A_0[Ψ_0^[t], Ψ′[t]_0] (equation (38)). This is a special case in which the first and second terms on the RHS of equation (36) equal Ψ_0^[t] and Ψ′[t]_0, respectively. This indicates that the climatology and perturbation components in the RCM output are the same as those in the constraint condition when neither the constraint conditions to the RCM nor the RCM itself has a bias. The third term on the RHS of equation (38) indicates the sub-GCM-scale fluctuations without bias. Thus, using equation (38), equation (37) can finally be written as D[Ψ] = D_true + E_rcm + E_gcm, where E_rcm can be interpreted as the biases derived from the RCM because it is 0 when an RCM without a systematic bias is used under the constraint condition of the true climate. On the other hand, E_gcm can be interpreted as the biases contained in the RCM output caused by the GCM bias because it becomes 0 when ΔΨ^[t] = 0 and ΔΨ′[t] = 0.

Expected Effect of Bias Correction
Based on the above derivation, it is possible to understand roughly what to expect when using the bias correction methods. Figure 4 presents the relation between the large-scale atmospheric state (the constraint to an RCM) and the downscaled results for each MBDDS method in the phase space of the climatology and perturbation components. This conceptual diagram was originally introduced by Nishizawa et al. (2018). They considered that the statistical features of the climatology and perturbation components roughly have a one-to-one relation because they are not independent of each other. The solid and long dashed lines in Figure 4 represent the relation between two components for a GCM and the true climate (nature), respectively; they can be referred to as a state curve. The state curves of the model and nature differ because a model cannot perfectly represent all the complex relations in nature. An RCM also has its own state curve, which is different from those of a GCM and nature (the short dashed line in Figure 4); it would be exposed when the influences of a constraint on the RCM output are negligible or small. This situation can be imagined if the calculation domain is not limited but expanded to the whole globe.
[Figure 4. Schematic of the bias correction methods in a climatology-perturbation phase space. The horizontal and vertical axes represent the climatology and perturbation components with a large spatial scale, respectively. The solid, short dashed, and long dashed lines illustrate the state curves corresponding to the stable relation between the climatology and perturbation components in a GCM, an RCM, and nature (reanalysis data), respectively. The white and black crosses illustrate the large-scale true state and the large-scale state simulated by a GCM, respectively. The white circle represents the constraint condition driving the downscaling simulations. The red areas show the expected position of the RCM output projected on the phase space.]

First, consider the direct DDS method (Figure 4a). If the RCM were not affected by the constraints, the RCM results would be located on the RCM state curve. In actuality, however, the RCM is strongly restricted by the constraint conditions. Therefore, the actual position of the RCM results is expected to be distributed around the constraint; simultaneously, it moves slightly from the constraint toward the RCM state curve, as shown by the red circle in Figure 4a. In the direct DDS experiments, the RCM output differs from the true state denoted by the white cross because of the biases E_gcm and E_rcm. The RCM also generates sub-GCM grid-scale variations, expressed by the operator A; this sub-GCM component may differ considerably from the true value because the constraint condition is far from the true state.
Second, consider a case in which the climatology and perturbation components are replaced with those from the reanalysis data; this is used in the SCC and PGW methods (Figure 4b). In this case, E_gcm becomes 0 because ΔΨ^[t] = 0 and ΔΨ′[t] = 0. Even in this case, the bias from the RCM itself, E_rcm, remains; the downscaled results move slightly from the constraint conditions toward the RCM state curve. If E_rcm is sufficiently small, the RCM output would be located around the true state. Thus, this situation is considerably more reliable than the first case (the direct DDS method).
Third, instead of completely replacing the GCM results with the reanalysis data, reducing the biases in the climatology and perturbation components was considered, as shown in Figure 4c. The MVBC, QQC, and NBC methods fall into this category. This case is similar to the second case, except that small errors are present in the constraints (ΔΨ t and ΔΨ ′[t] ). When compared with the first case (the direct DDS method), E gcm is significantly reduced, whereas E rcm remains. Thus, a similar effect to that in the second case can be expected in the third case. From the viewpoint of bias correction in the present climate simulation, the second case seems to be better than the third case. However, the same correction procedures cannot be applied to the future climate. The third case presents an advantage that the bias correction procedure can be consistently applied to the present and future climate experiments.
Fourth, consider the methods that correct only the climatology component (Figure 4d), that is, the MBC method. The bias-corrected constraint has the climatology component of the reanalysis data and the perturbation component of the GCM. Although the RCM simulation is restricted by the given constraint conditions, as in the other downscaling methods, this fourth correction has a different concept for the expected effect of bias correction. In the other methods, the RCM output is expected to be located around the constraint conditions, whereas this method intends to correct not only the climatology component but also the perturbation component of the RCM output by modifying only the climatology component of the constraint condition. This works if there is a one-to-one relation between the climatology and perturbation components along the state curve of the RCM. Thus, this method expects that the perturbation bias ΔΨ′[t] will be corrected by ensuring that the climatology bias ΔΨ^[t] becomes approximately 0. Hence, the downscaled result can be located closer to the true state than the constraint conditions. Note that the simulated perturbation component based on the corrected climatology component depends on the RCM's state curve; if E_rcm is large, the downscaled results can become worse than the state given as the constraint conditions. One question arises here: How important is it to provide the constraint of the perturbation component through the lateral boundary of an RCM? The third group of methods, that is, the MVBC, QQC, and NBC methods, is based on the idea that the perturbation component in the constraint is important for the downscaled climates, whereas the fourth method is not, because the RCM itself generates perturbations according to the constraint of the corrected climatology component. The answer to this question likely depends on the experimental design of the downscaling simulation.
The latter idea would be suitable for the situation in which the influence of the perturbation constraints is sufficiently weak, for example, the simulations performed with a large computational domain or in tropical regions.
The SFS and FSCC methods are sensitivity experiments performed to understand the physical mechanism of regional climate change. Because they do not intend to provide an actual accurate projection, they do not correct any bias associated with the constraints. However, the concept of these methods may be used along with bias correction, like the SCC and PGW methods.

Configurations for Future Climate Experiments
This section discusses the parts of the GCM output used when constructing the constraints for the future climate experiment in each method. This is related to the question of which parts of the GCM output are considered reliable and meaningful; in this sense, it is also related to the bias correction method employed for the constraint conditions. According to Adachi et al. (2017, 2019), the changes in the climatology and perturbation components of the large-scale state can be divided into thermodynamic and dynamic changes based on the interpretation of their physical meanings (Table 2). Conceptually, the future climate change can be expressed using all or a combination of these four components, even though it is difficult to distinguish the thermodynamic and dynamic changes completely. The SCC method considers only the thermodynamic change in the climatology component (equations (7) and (8) and Figure 5a), whereas the PGW method uses both the thermodynamic and dynamic changes (equation (9) and Figure 5b). One of the reasons why these methods consider only the climatology component is that a large bias was recognized in the GCM simulations, even under the present climate condition. Therefore, it was necessary to extract only meaningful signals from the GCM results (Hall, 2014). The climatology change, especially the thermodynamic change, presents a robust response to climate change caused by the increase in greenhouse gases, and the spatial uniformity of the response is relatively high compared with the changes in the perturbation component.
The MBC, MVBC, QQC, and NBC methods consider the changes in the climatology and perturbation components (Figures 5c-5f). These methods apply the bias correction procedures constructed for the present climate constraint to the future climate constraint. Thus, these methods consider that the changes in both the components projected by a GCM are meaningful even though bias corrections are necessary either only for climatology or for both climatology and perturbation.
The main objective of SFS and FSCC methods is to understand the large-scale climate factors that impact the regional climate (Figures 5g and 5h). Thus, these methods consider all the climate change components; the SFS method further divides them into several climate change factors. These methods comprise several sensitivity experiments to assess the contribution of each factor.

Influence of the Nonlinear Effect in Sensitivity Experiments
The estimated regional climate response to a changing factor depends on the remaining experimental settings because of potentially strong nonlinearity between the changing factor and the other factors (Stein & Alpert, 1993). Although this nonlinearity has been recognized, it has not been sufficiently discussed for most of the MBDDS methods because the nonlinear effect is considered to be relatively small compared with the responses of interest.
Several papers dealing with nonlinearity have been published. Stein and Alpert (1993) proposed a factor separation method that separates the pure contribution of each factor from the nonlinear effect between the factors when multiple factors affecting the atmospheric phenomena change. For example, consider the effect of two factors. The total impact obtained by changing these factors can be expressed as $\hat{f}_1 + \hat{f}_2 + \hat{f}_{12}$, where $\hat{f}_1$ and $\hat{f}_2$ represent the direct impacts of changing factors 1 and 2, respectively, and $\hat{f}_{12}$ represents the impact of the nonlinearity between the changes of these two factors. The three contributions can be obtained from the outputs of four experiments: $f_0$ does not consider the changes of factors 1 and 2; $f_1$ and $f_2$ consider the change of factor 1 or factor 2, respectively; and $f_{12}$ considers the changes of both factors. The three contributions can be described as

$$\hat{f}_1 = f_1 - f_0, \quad \hat{f}_2 = f_2 - f_0, \quad \hat{f}_{12} = f_{12} - (f_1 + f_2) + f_0.$$

Adachi et al. (2017) applied the factor separation method to assess the contributions to regional climate change of the changes in the large-scale climatology and perturbation components as well as their nonlinear effect, as described in section 3.9. They considered the large-scale climatology component as factor 1 and the large-scale perturbation component as factor 2. To obtain the three contributions $\hat{f}_1$, $\hat{f}_2$, and $\hat{f}_{12}$, the direct DDS experiments for the present and future climates, the Pseudo-Clim-DS experiment, and the Pseudo-Perturb-DS experiment were conducted. They investigated the summer precipitation change in western Japan using this method and showed that the influence of the nonlinearity between factors cannot be ignored. The nonlinear effect was particularly strong for heavy precipitation, where it was suggested to suppress the increase in the heavy precipitation frequency caused by the change in the climatology component. Thus, it is important to advance our understanding of the nonlinear effects. Adachi et al. (2019) proposed an application of this method for analyzing the influence of the uncertainty in GCM projections on the downscaled projections. The method allows one not only to estimate the magnitude of the spread of regional climate projections but also to evaluate the factors causing this spread; it also enables a physical interpretation of the regional climate change.
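As a minimal illustration of this two-factor bookkeeping (the function name and the response values are hypothetical, not data from any cited study), the pure contributions and the interaction term follow directly from the four experiment outputs:

```python
def factor_separation_2(f0, f1, f2, f12):
    """Stein-Alpert (1993) factor separation for two factors.

    f0: neither factor changed; f1 / f2: only factor 1 / 2 changed;
    f12: both factors changed. Returns the pure contributions and
    the nonlinear interaction term.
    """
    hat_f1 = f1 - f0                 # pure contribution of factor 1
    hat_f2 = f2 - f0                 # pure contribution of factor 2
    hat_f12 = f12 - (f1 + f2) + f0   # nonlinear interaction
    return hat_f1, hat_f2, hat_f12

# Hypothetical domain-mean precipitation (mm/day) from four experiments
h1, h2, h12 = factor_separation_2(f0=5.0, f1=6.0, f2=5.5, f12=7.0)

# By construction, the three terms sum exactly to the total change f12 - f0
assert abs((h1 + h2 + h12) - (7.0 - 5.0)) < 1e-12
```

The decomposition is exact: the three terms always close the budget of the total change, which makes the interaction term a residual with a clear definition rather than an unattributed remainder.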
In the factor separation method, the number of experiments increases exponentially with the number of considered factors. For example, consider a case in which three factors are considered. To separate the pure effects of each factor and all the interactions, the following eight experiments are required: one experiment ($f_0$) without changing any factor; three experiments ($f_1$, $f_2$, $f_3$) that change only one factor; three experiments ($f_{12}$, $f_{13}$, $f_{23}$) that change two factors; and one experiment ($f_{123}$) that changes all three factors. Generally, when there are n factors, $2^n$ experiments are required. Schär and Kröner (2017) presented three problems when many factors are considered with this method. First, the larger the number of considered factors, the larger the computational cost. Second, it is difficult to interpret the results of many interaction terms. Third, the uncertainty of the evaluation may increase because of noise contamination when the evaluated contribution is obtained by combining many experiments; for example, when n = 3, the nonlinear effect among all three factors is obtained as $\hat{f}_{123} = f_{123} - (f_{12} + f_{13} + f_{23}) + (f_1 + f_2 + f_3) - f_0$. The aforementioned problems motivated the proposal of the SFS method (section 3.8).
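The triple-interaction term can be written directly from the eight experiment outputs; the values below are purely illustrative and the function name is an assumption of this sketch:

```python
def hat_f123(f0, f1, f2, f3, f12, f13, f23, f123):
    """Nonlinear effect among three factors (Stein-Alpert bookkeeping):
    the inclusion-exclusion combination of all 2^3 = 8 experiments."""
    return f123 - (f12 + f13 + f23) + (f1 + f2 + f3) - f0

# Hypothetical outputs of the eight experiments
triple = hat_f123(f0=5.0, f1=6.0, f2=5.5, f3=5.2,
                  f12=7.0, f13=6.4, f23=5.9, f123=7.8)
print(round(triple, 6))  # a small but nonzero three-way interaction
```

Because the term is assembled from eight noisy simulations, its sampling uncertainty grows with the number of combined experiments, which is exactly the third problem raised by Schär and Kröner (2017).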
In this method, the influences of the changes in factors 1, 2, and 3, denoted by $\tilde{F}_1$, $\tilde{F}_2$, and $\tilde{F}_3$, respectively, are defined as follows:

$$\tilde{F}_1 = \tfrac{1}{2}\left[(f_1 - f_0) + (f_{123} - f_{23})\right], \quad \tilde{F}_2 = \tfrac{1}{2}\left[(f_{12} - f_1) + (f_{23} - f_3)\right], \quad \tilde{F}_3 = \tfrac{1}{2}\left[(f_{123} - f_{12}) + (f_3 - f_0)\right]. \tag{43}$$

In this method, two types of experimental series are conducted. The first is a series of experiments that starts from the $f_0$ experiment and proceeds to the $f_{123}$ experiment by sequentially adding each factor. The second starts from the $f_{123}$ experiment and proceeds to the $f_0$ experiment by sequentially subtracting each factor (Figure 2). This requires only six experiments. Generally, this method requires only 2n experiments for n factors, so it has a lower computational cost than the factor separation method at large n. A conceptual diagram of the experimental series of Kröner et al. (2017) is presented in Figure 6. The four experiments used for the first terms on the RHS of equation (43) are indicated by the blue lower triangle in Figure 6, whereas those used for the second terms are indicated by the blue upper triangle. Finally, the influence of the change of each factor is defined as the average of the two evaluations. The values $\tilde{F}_1$, $\tilde{F}_2$, and $\tilde{F}_3$ obtained in this manner are not pure sensitivities, as evaluated by the FSCC method; therefore, they are somewhat contaminated by the nonlinear effects.
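A minimal sketch of this two-series averaging for three factors (the averaging formulas are a reconstruction of the scheme described here, and all numerical values are hypothetical):

```python
def sfs_contributions(f0, f1, f12, f123, f23, f3):
    """Sequential factor separation with 2n = 6 experiments for n = 3:
    an 'adding' series (f0 -> f1 -> f12 -> f123) and a 'subtracting'
    series (f123 -> f23 -> f3 -> f0). Each factor's influence is the
    average of its two evaluations; interactions remain folded in.
    """
    F1 = 0.5 * ((f1 - f0) + (f123 - f23))
    F2 = 0.5 * ((f12 - f1) + (f23 - f3))
    F3 = 0.5 * ((f123 - f12) + (f3 - f0))
    return F1, F2, F3

# Hypothetical experiment outputs
F1, F2, F3 = sfs_contributions(f0=5.0, f1=6.0, f12=7.0,
                               f123=7.8, f23=5.9, f3=5.2)

# Unlike pure sensitivities, the three terms sum exactly to the
# total change f123 - f0, with the interactions absorbed into them
assert abs((F1 + F2 + F3) - (7.8 - 5.0)) < 1e-12
```

The closing identity holds by construction: summing the three averaged differences telescopes to $f_{123} - f_0$, which is why the nonlinear effects cannot appear as a separate term in this method.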
Many sensitivity experiments for factor analysis intend to evaluate the contribution of each factor. In this case, the direct contribution of each factor is the signal, whereas the nonlinear effects between the changing factors are sometimes considered to be noise. The SFS method is effective when there is no interest in evaluating the nonlinear effect and when its contribution is assumed to be small. However, it is also important to evaluate whether the magnitude of the nonlinear effects is negligible compared with the pure contributions of each factor and to understand the meaning of the nonlinear effect itself. For this purpose, the factor separation method is also useful.

Inconsistency Between the Constraint and Downscaled Results
Because a GCM and an RCM have different state curves (Figure 4), as discussed in section 4.2, the large-scale atmospheric state in the downscaled result deviates from the constraint as the integration period increases (Jones et al., 1995). Even if their state curves are similar, an inappropriate constraint at the lateral boundary may result in an inconsistency between the boundary condition (constraint) and the RCM result (Denis et al., 2002). This is known as the mathematically ill-posed lateral boundary problem (Davies, 1976).
One of the countermeasures to this problem is the nudging technique applied to the inner area of the calculation domain of an RCM. The nudging technique effectively transfers the large-scale state information provided by a parent model (GCM) to the inner area of the nested model (RCM). There are two main types of nudging: spectral nudging (Kida et al., 1991; von Storch et al., 2000) and grid nudging (Stauffer & Seaman, 1990). They have different effects on the RCM results (e.g., Liu et al., 2012), although they are not described in detail in this study. The strength of the nudging coefficient is selected according to the experimental purpose and settings, including the size of the computational domain (Schaaf et al., 2017), the length of the integration time (Sasaki et al., 1995), and the downscaling method.
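As a conceptual sketch (not any specific RCM's implementation; the coefficient and time step below are illustrative assumptions), grid nudging adds a Newtonian relaxation tendency that pulls the model state toward the driving large-scale state, with the coefficient controlling how tightly the RCM is tied to the constraint:

```python
def nudge_step(x, x_large, dt, k):
    """One explicit Euler step of the nudging (Newtonian relaxation)
    tendency dx/dt = k * (x_large - x), added to the model physics.
    x: model state; x_large: driving large-scale state;
    dt: time step (s); k: nudging coefficient (1/s)."""
    return x + dt * k * (x_large - x)

# Illustrative values: with dt = 60 s and k = 1e-4 s^-1, the state
# relaxes toward the driving field on a timescale of ~1/k (a few hours)
x = 0.0
for _ in range(1000):
    x = nudge_step(x, x_large=1.0, dt=60.0, k=1e-4)
assert x > 0.99  # close to the large-scale value after ~17 model hours
```

In practice the relaxation is applied per variable and often only to selected vertical levels (grid nudging) or selected large wavenumbers (spectral nudging), which is what distinguishes the two types discussed above.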
In the studies that use the MBDDS methods, there are cases in which nudging is adopted (e.g., Kanamaru & Kanamitsu, 2007; Misra, 2007) and cases in which it is not (e.g., Hara et al., 2008; Holland et al., 2010; Pontoppidan et al., 2018). Some studies have been positive about the effectiveness of nudging, whereas others have not necessarily recommended it. Xu and Yang (2015) proposed applying spectral nudging in the MVBC method to effectively impose the information of the constraint conditions (Figure 4c). Based on the results of a downscaling simulation over North America using this method, they showed that the climatological averages of the 2-m surface air temperature and of the atmospheric temperature, geopotential height, and wind vectors are improved, whereas the reproducibility of precipitation does not necessarily improve compared with the direct DDS methods. Therefore, they recommended using a weak nudging coefficient or not using the nudging technique if the only purpose of the downscaling is to evaluate precipitation. In contrast, Pontoppidan et al. (2018) did not employ nudging with the MBC method to ensure that the RCM responses to the constraints remain free in the inner area of the calculation domain. As shown in Figure 4d, the MBC method expects that the representation of the large-scale perturbation component in an RCM improves by correcting only the climatology component of the constraint. The concept of the MBC method differs from that of the MVBC method even though they are mathematically similar (equations (10) and (12)).
When there is a large bias in the constraint of an RCM, nudging may worsen the simulated results in some cases (Pielke Sr & Wilby, 2012). This means that the reliability of the constraint conditions should be sufficiently considered when applying nudging. In addition, the objective of an experiment and the intent of the downscaling method used are crucial. In some cases in which strong nudging may not be appropriate, including the aforementioned case of the MBC method, the reinitialized simulation technique is another way to impose the information of the constraints (Lucas-Picher et al., 2013; Qian et al., 2003). The reinitialized simulation covers the integration period with a sequence of short-term simulations; that is, it updates the initial conditions at the beginning of each short-term simulation. Adachi et al. (2017) adopted reinitialization for their FSCC experiment.
For either type of simulation, that is, the reinitialized simulation or the continuous simulation, it is necessary to devise a methodology for preparing constraints for a long-term target period. There are several definitions of the climatological components used in the MBDDS methods. The most basic definition is the monthly mean averaged over approximately 30 years for the month concerned; however, this may be inappropriate for a long-term target period spanning several seasons. In such a case, a time-variant climatology is applicable (e.g., Adachi et al., 2017; Nishizawa et al., 2018). The time-variant climatology on each day is determined by linearly interpolating two monthly means averaged over 30 years; it includes information about the seasonal change of the climatology component. If the target period extends further, such as over several decades, changes in climate trends will also have to be considered.
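A minimal sketch of such a time-variant climatology (the anchoring of each 30-year monthly mean at mid-month is an assumption of this illustration; values are hypothetical):

```python
def time_variant_climatology(clim_prev, clim_next, frac):
    """Linearly interpolate between two 30-year monthly climatological
    means. frac in [0, 1] is the fraction of the interval elapsed
    between the two anchor dates (e.g., mid-January to mid-February).
    """
    return (1.0 - frac) * clim_prev + frac * clim_next

# Hypothetical monthly-mean temperatures (K) for mid-January and mid-February
t_jan, t_feb = 280.0, 282.0

# Value near the end of January, about halfway between the two anchors
assert time_variant_climatology(t_jan, t_feb, 0.5) == 281.0
```

The same interpolation applied day by day yields a constraint that follows the seasonal cycle of the climatology component rather than jumping at month boundaries.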
Another effective way to avoid the ill-posed lateral boundary problem is to use a stretched GCM for the downscaling simulation (Fox-Rabinovitz et al., 2006). The ill-posed boundary problem can be avoided because the stretched GCM uses the same equations and numerical schemes outside and inside the target region. Although the problem of systematic model biases still exists in the stretched GCM, it can be overcome to some extent using an AGCM with bias-corrected SST (e.g., Katzfey et al., 2009; Nguyen et al., 2012). The stretched grid method can be used for a modified boundary condition problem by applying the nudging technique to the region outside the target. However, the constraint imposed on the target region is ambiguous because the target and nontarget areas interact with each other to some extent. The same could be said when applying the nudging technique using two-way nesting instead of one-way nesting with an RCM.
The MBDDS methods using an RCM consequently allow ill-posedness between the constraint and the RCM results in exchange for imposing clear constraints. It should be noted that all of the solutions to the ill-posed lateral boundary issue are fundamentally empirical. Sufficient attention is required when analyzing the calculation results.

Treatment of the Moisture Content in Experimental Conditions
The constraints with respect to the atmospheric water content are usually given by the values of RH or specific humidity. There are two main methodologies to prepare these constraints for the MBDDS method. The first method is to correct the specific humidity in a similar manner as that of the remaining atmospheric variables (e.g., Xu & Yang, 2015). The second method is to assume that the RH does not change owing to climate change (e.g., Kawase et al., 2009;Schär et al., 1996).
The first method is simple; however, unintended situations, such as oversaturation, may occur because the specific humidity and temperature are modified independently of the Clausius-Clapeyron relation. One may consider using RH instead of specific humidity; however, the same problem will occur because the prepared RH may exceed 100%. This problem can be avoided using a quantity called modified RH proposed by Wakazuki (2013). Note that because RH is a function of pressure, temperature, and specific humidity, correcting RH while ignoring this relation makes the physical meaning of the correction ambiguous. The physical implications of the correction are important for understanding the downscaled results.
The second method assumes the same RH regardless of climate change; this method is widely used in the MBDDS methods that do not consider perturbation changes, such as the SCC method, the PGW method, and the Pseudo-Clim-DS experiment of the FSCC method. There are several reasons for adopting this method. First, the analyses of the past climate change suggest that there is no obvious trend of near-surface RH (Dai, 2006;Soden et al., 2005). Second, in case of the SCC and PGW methods, the objective is to investigate the implications of the thermodynamic changes associated with the temperature change. The assumption that the RH does not change implies that the absolute amount of water vapor in the atmosphere changes with the temperature change according to the Clausius-Clapeyron relation. Third, in case of the methods in which only the climatology component changes are considered, it can be interpreted that the spatiotemporal variation of the RH depends on the perturbation component (Adachi et al., 2017). By clarifying the concept of the experimental setting with respect to the RH, physical interpretations of the downscaled results become easier.
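The constant-RH assumption can be illustrated with a short sketch. The Tetens-type formula below is a standard empirical approximation for saturation vapor pressure, and scaling specific humidity directly by the saturation-pressure ratio is a simplification that ignores the small pressure dependence; the function names are assumptions of this example:

```python
import math

def saturation_vapor_pressure(t_c):
    """Saturation vapor pressure over water (hPa) from a Tetens-type
    empirical formula; t_c is temperature in degrees Celsius."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def humidity_at_constant_rh(q_present, t_present_c, t_future_c):
    """Scale specific humidity so that RH is unchanged under warming:
    with fixed RH, vapor pressure follows e = RH * e_sat(T), so q
    scales (approximately) with e_sat via the Clausius-Clapeyron
    behaviour of the saturation curve."""
    return q_present * (saturation_vapor_pressure(t_future_c)
                        / saturation_vapor_pressure(t_present_c))

# A 2 K warming at 20 degC increases the water vapor content by
# roughly 13% (about 6-7% per kelvin of warming) under constant RH
ratio = humidity_at_constant_rh(1.0, 20.0, 22.0)
assert 1.10 < ratio < 1.16
```

This makes the second method's physical content explicit: the absolute water vapor amount in the constraint grows with the imposed warming at the Clausius-Clapeyron rate, which is exactly the thermodynamic change the SCC and PGW methods aim to investigate.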

Summary and Future Research
The DDS method is an effective method for understanding the impacts of a global-scale future climate change on the regional climate and examining adaptation measures. An RCM inherits the systematic errors of the GCM because the regional climate is evaluated within the constraints of the initial and boundary conditions provided by the GCM. Many MBDDS methods have been proposed to obtain meaningful results by reducing or avoiding the influence of these errors (Table 1).
In addition to the systematic errors of GCMs, RCMs have their own systematic errors, complicating the interpretation of the downscaled regional climate projections. To interpret these errors in the RCM results, the errors were conceptually formulated by separating the GCM-originated bias from the RCM's own bias (equation (39) with equations (40) and (41)). Based on this formulation and the concept of bias corrections to the constraints, the MBDDS methods, including the direct DDS method, were categorized into four types (Figure 4). Furthermore, the climate change components considered in each MBDDS method when assessing the future climate change were discussed based on the classification of the climate change components of the large-scale atmospheric state denoted in Table 2 (Figure 5). The variety of experimental settings related to nudging and the constraint of atmospheric moisture in the MBDDS methods were also discussed.
The main conclusions of this study are summarized below. The MBDDS methods have one or both of the following aims: to estimate the regional climate projection as accurately as possible and to better understand the mechanisms of regional climate change. The former requires bias correction of the constraint conditions provided by a GCM. The biases in the GCM output are defined by comparing the present climate results obtained from a GCM with the reanalysis data. There are several levels of bias correction, including replacing all the large-scale atmospheric components with the reanalysis data (the SCC and PGW methods), correcting only the climatology component (the MBC method), and modifying both the climatology and perturbation components (the MVBC, QQC, and NBC methods). At the first and third levels, the large-scale state in the downscaled results is expected to be distributed around the corrected constraint conditions, whereas at the second level, both the climatology and perturbation components in the downscaled results are expected to be corrected even though bias correction is applied only to the climatology component of the constraint. When selecting the correction method, it is necessary to pay attention to the following two points. The first point is that the perturbation and climatology components are not completely independent; the relation between these two components depends on a model because models have different state curves. The second point concerns which spatiotemporal-scale phenomena in the RCM results are inherited from a GCM and which are generated by the RCM itself. The answer to the second point will vary depending on the climatic characteristics of the target area, the size of the computational domain, the application of the nudging technique, and so on (Wang et al., 2004). For example, let us consider teleconnection.
To improve the teleconnection pattern represented in an RCM, the calculation domain has to be sufficiently large to include all the phenomena related to teleconnection, most of which are planetary-scale phenomena. When a domain size is of the order of less than thousands of kilometers, the RCM only receives a portion of the teleconnection signal via the constraint conditions.
Even if there is a bias in the GCM simulations, the influence of the bias can be expected to be minimized by taking the difference between the downscaled results of two climates (e.g., present and future climates). In addition, the MBDDS methods correct the biases in constraints by assuming that the GCM bias defined in the present climate remains unchanged in future climates. These ideas are implicitly based on the assumption that the large-scale climatological features do not drastically change before and after climate change. However, this assumption is not always guaranteed; the bias correction by the MBDDS methods is not applicable to situations that deviate considerably from it. For example, consider a case in which the position of the jets in the present climate experiment projected by a GCM is shifted far from the observations (e.g., Zappa et al., 2013). In this case, the bias correction does not work well when the position of the jets is projected to move to another position in the future climate. A downscaling assessment of the implications of the future change of the jets for the local climate may then reach an incorrect conclusion. We should consider such limitations when employing the MBDDS methods.
One of the issues that should be addressed in future studies is understanding the nonlinear effects induced by changing two or more climate factors within the MBDDS experiments. Although the SCC and PGW methods are effective for understanding the basic regional climate responses to global warming, they may overestimate these responses. Adachi et al. (2017) demonstrated that the PGW method possibly overestimated the regional climate responses to the changes in the climatology component, especially for heavy precipitation, because this overestimation is suppressed by the nonlinear effects, which the PGW method does not represent. Thus, an appropriate understanding of the nonlinear effects in RCM sensitivity analyses is important for improving the DDS technique.
Another future research objective is to advance our understanding of regional climate change by accumulating physical interpretations of it. This can be realized by employing methods, including the SFS and FSCC methods, that decompose large-scale future changes into several physical processes and estimate the impact of each process on the regional climate. From a practical viewpoint of providing actionable information about future climate changes, the estimation of regional climate change along a storyline has been proposed by Shepherd et al. (2018). They pointed out a limitation of conventional probabilistic projections, namely that they remain ambiguous owing to uncertainty, and argued that event-oriented climate projections are important for risk assessment when considering adaptation strategies to future climate change.
In this respect, the MBDDS methods are useful as process studies. Attempts such as detection and attribution (Pall et al., 2011; Stott et al., 2010) and event attribution (Gutmann et al., 2018; Stott et al., 2004) are also helpful for this purpose.
DDS studies have received massive attention in the previous decade; therefore, significant progress has been made in understanding the strengths and weaknesses of the DDS technique. Along with the improvement of the computer capability, there has been an increase in the demand for spatially detailed climate data with