Volume 7, Issue 7 p. 734-747
Research Article
Open Access

Robust Adaptation to Multiscale Climate Variability

James Doss-Gollin (corresponding author), Department of Earth and Environmental Engineering, Columbia University, New York, NY, USA; Columbia Water Center, Columbia University, New York, NY, USA

David J. Farnham, Department of Earth and Environmental Engineering, Columbia University, New York, NY, USA; Columbia Water Center, Columbia University, New York, NY, USA

Scott Steinschneider, Department of Biological and Environmental Engineering, Cornell University, Ithaca, NY, USA

Upmanu Lall, Department of Earth and Environmental Engineering, Columbia University, New York, NY, USA; Columbia Water Center, Columbia University, New York, NY, USA

Correspondence to: J. Doss-Gollin, [email protected]

First published: 07 June 2019
Citations: 20

Abstract

The assessment and implementation of structural or financial instruments for climate risk mitigation requires projections of future climate risk over the operational life of each proposed instrument. A point often neglected in the climate adaptation literature is that the physical sources of predictability differ between projects with long and short planning periods: While historical and paleo climate records emphasize low-frequency modes of variability, anthropogenic climate change is expected to alter their occurrence at longer time scales. In this paper we present a set of stylized experiments to assess the uncertainties and biases involved in estimating future climate risk over a finite future period, given a limited observational record. These experiments consider both quasi-periodic and secular change for the underlying risk, as well as statistical models for estimating this risk from an N-year historical record. The uncertainty of IPCC-like future scenarios is considered through an equivalent sample size N. The relative importance of estimating short- or long-term risk depends on the investment life M. Shorter design lives are preferred for situations where interannual to decadal variability can be successfully identified and predicted, highlighting the importance of sequential investment strategies for adaptation.

Key Points

  • Quasi-periodic and secular climate signals, with different identifiability and predictability, control future uncertainty and risk
  • Adaptation strategies must consider how uncertainties in risk projections influence success of decision pathways
  • Stylized experiments reveal how bias and variance of climate risk projections influence risk mitigation over a finite planning period

1 Introduction

Recent climate extremes such as floods, droughts, hurricanes, tornadoes, hailstorms, and heat waves have caused death and destruction, motivating investments in climate adaptation for the public and private sectors. Further, rapid and continuing changes to global climate hazard and exposure underscore the need for adaptation strategies. For example, population growth and urbanization have driven rapid increases in global exposure to events such as floods (Jongman et al., 2012) and tropical cyclones (Peduzzi et al., 2012). At the same time, anthropogenic modification of global and local climate processes affects the frequency, intensity, and location of extreme events (IPCC, 2012; Milly et al., 2008; Shaw et al., 2016). Even if future mitigation efforts are successful, existing levels of atmospheric CO2 and ocean heat content necessitate the development of novel adaptation strategies.

This need has motivated a multitude of approaches for estimating the probability distribution of future climate risk and for choosing between different risk mitigation instruments based on these estimates (see, e.g., Merz et al., 2014). A typical goal is to create systems which are robust in the sense that they perform well over a wide range of plausible futures (Borgomeo et al., 2018; Lempert & Collins, 2007) and which fail along noncatastrophic modes (Brown, 2010). Although climate risk has traditionally been managed with centrally planned structural instruments (e.g., a levee), the high price (Papakonstantinou et al., 2016), environmental costs (Dugan et al., 2010), and vulnerability to biased climate projections (Lempert & Collins, 2007) have recently dampened enthusiasm. Rather, actors such as New York City have turned to a combination of structural (e.g., stormwater barrier), operational (e.g., improved evacuation routes), and financial (e.g., a catastrophe bond) instruments for reducing vulnerability and increasing resilience to climate extremes (City of New York, 2013). These instruments are not typically implemented in isolation or statically. Instead, investment decisions made at each point in time affect the viability, costs, and benefits of future decisions, causing the system to trace a “pathway” through time (Haasnoot et al., 2013, 2015; Walker et al., 2013).

Despite recent insights, important questions remain. How should a portfolio of risk mitigation instruments be optimized? How should one choose between permanent and transient instruments? Under what conditions is a permanent, large infrastructure investment required, and what information is needed to recognize this threshold? In this paper we focus more narrowly on the temporal structure of climate risk and how the uncertainty associated with its estimation influences the answers to these questions. We continue this section with three specific observations about climate risk which, while seemingly obvious, have important and subtle implications that we examine throughout the remainder of the paper (sections 2-4).

1.1 Planning Decisions Are Made With Finite Horizons

Public or private sector investments in climate adaptation require not only the design of each potential instrument but also the selection between instruments with vastly different operational planning periods. This project planning period, which we define as being M years, describes the nominal economic or physical life-span of the structure or contract. Typical planning periods may vary from M=1 year or less for a financial contract to M=100 years or longer for a structural instrument, as illustrated in Table 1. The planning period can also be interpreted as the finite period over which cost-benefit analysis is conducted when assessing the project.

Table 1. Six Real-World Risk Mitigation Instruments and the Associated Project Planning Period (M)
Location | Description | M | Reference
Iowa River | Purchase options for inundation of downstream agricultural lands to allow higher release flows from the flood control reservoir | 1 | Spence and Brown (2016)
New York City | Catastrophe bond for protection against storm surge caused by named storms and earthquakes | 3 |
County of Santa Barbara, California | Emergency improvements to portions of the Santa Maria Levee to reduce risk of levee failure | 5 | USACE (2007)
Iowa River | Raise levees by 6 feet | 30 | Spence and Brown (2016)
Dallas, Texas | Evacuation of Rockefeller Boulevard | 50 | USACE (2014)
Central California | Tulare Lake storage and floodwater protection project | 100 | GEI Consultants, Inc. (2017)

Typical climate risk management policies do not use a single risk mitigation instrument but rather build a portfolio of several instruments. Each has its own operational period, which may or may not match the planning horizon of the portfolio as a whole. This means that even if the portfolio has a long planning period, that is, if long-term plans are a priority, this goal may be best accomplished through a series of flexible and adaptive instruments with short individual planning periods. For example, the optimal policy for New York City to manage uncertain hurricane risk in the 21st century might be to keep areas devastated by Hurricane Sandy zoned for low-impact development for the next 10 years. This would reduce future risk over all climate scenarios while postponing major investments until large uncertainties as to the magnitude of future sea level rise are resolved. The costs and benefits of each individual instrument will be assessed over its individual, finite planning period, but decisions about the portfolio structure are evaluated over the longer planning horizon.

The availability of precise climate information in the near future may significantly alter the choice between a large, long-duration instrument and a sequence of smaller, short duration instruments that can be executed quickly. For example, if above-average climate risk is projected over the next few years, a more costly project might be justified. However, in the plausible case of a long construction period for the large, permanent instrument, a financial risk mitigation instrument might be needed in the immediate term to cover potential losses before the large project is completed. Conversely, if the near-term risk is projected to be low, then deferral of the large, potentially expensive instrument may be warranted. These cases highlight how the precision of short- and long-term climate risk projections plays directly into climate adaptation.

1.2 Climate Risk Varies on Many Scales

Climate risk is governed by a variety of physical processes which occur on scales ranging from local and transient to global and permanent. Of these processes, anthropogenic climate change has received the most attention in the climate adaptation literature, and its influence on some river floods, droughts, hurricanes, urban flooding, and many other climate hazards has been the subject of substantial investigation (e.g., Coumou & Rahmstorf, 2012; Milly et al., 2008; O'Gorman & Schneider, 2009; Trenberth et al., 2003). Human activities can also affect climate risk through modification of local land or river systems (see Merz et al., 2014) and through changes in exposure to extremes (Di Baldassarre et al., 2018; Jongman et al., 2012). In combination, these effects highlight that the past may not be an adequate representation of future climate risk (termed “nonstationarity” by Milly et al., 2008).

Secular change is not the only mechanism which can cause historical records to provide a biased view of future risk. The Hurst phenomenon is a well-known mathematical relationship which describes the long memory of processes found in geophysics, physics, biology, medicine, traffic, network dynamics, and finance (O'Connell et al., 2016). The extensive observations of such behavior in hydrologic and climatic time series emphasize the need to consider such processes as underlying any discussion of climate change or nonstationarity (Koutsoyiannis, 2003; Markonis & Koutsoyiannis, 2013; Palmer, 1993). The Hurst phenomenon has also been connected to low-frequency quasiperiodic phenomena, especially where fractal scaling is expected. For example, wavelet methods have been used to estimate the Hurst exponent (Chamoli et al., 2007; Simonsen et al., 1998) and to design simulation algorithms that reproduce self-similarity, long-range dependence, and quasiperiodic regimes (Bullmore et al., 2001; Feng et al., 2005; Geweke & Porter-Hudak, 1983; Kwon et al., 2007). The Hurst phenomenon also provides a link between catchment hydrology and global climate dynamics (Blöschl & Montanari, 2010; Montanari, 2003). The Hurst exponent is directly related to the fractal dimension of a process, and there is a rich multidisciplinary literature as to the process-level and statistical justification of long memory and fractal processes in hydrology (Beran, 1994; Mandelbrot, 1985; Mandelbrot & Wallis, 1969). These processes have also been used to describe multiscale dynamics of the climate (Lovejoy, 2013; Lovejoy & Schertzer, 2012, 2013; Selvam, 2017), including El Niño-Southern Oscillation (ENSO; Maruyama, 2018; Živković & Rypdal, 2013) and the Pacific Decadal Oscillation (Mantua et al., 1997).

External forcing from structured climate signals (“teleconnections”; Ångström, 1935) and catchment dynamics are both useful in explaining the low-frequency variability (LFV) observed in natural hydroclimate time series. We illustrate such LFV in Figure 1, which shows a 500-year drought reconstruction from the Living Blended Drought Analysis (LBDA; Cook et al., 2010), a 100-year record of annual maximum streamflow on the American River at Folsom, and the global wavelet power spectrum for both (Roesch & Schmidbauer, 2016; Torrence & Compo, 1998). Peaks for the American River time series are apparent at 2.3 and 15 years and in the LBDA time series at approximately 8, 20, and 64 years. This is illustrated by the blue line in Figure 1a, which shows a 20-year moving average of the LBDA time series. A detailed analysis of these time series is beyond the scope of this paper, but we note that the high amplitude and long time periods of the quasiperiodic oscillations they exhibit are consistent with analyses of LFV in other hydroclimate systems (Hodgkins et al., 2017; Kiem et al., 2002; Swierczynski et al., 2012; Woollings et al., 2014). The key implication is that the observations (Jain & Lall, 2001), trends (Bhattacharya et al., 1983), and frequencies (Newman et al., 2016) observed in the past are often poor predictors of future behavior.
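To make the identification of such quasiperiodic peaks concrete, the short Python sketch below estimates the dominant periods of a synthetic annual series with a simple Fourier periodogram. This is only a rough stand-in for the wavelet analysis performed with the WaveletComp package (Roesch & Schmidbauer, 2016); the series, oscillation periods, and noise level are illustrative assumptions, not the LBDA or American River data.

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
years = np.arange(500)
# Hypothetical series: ~20-year and ~64-year oscillations plus white noise
x = (np.sin(2 * np.pi * years / 20.0)
     + 0.5 * np.sin(2 * np.pi * years / 64.0)
     + rng.normal(scale=0.8, size=years.size))

freqs, power = signal.periodogram(x, fs=1.0)   # fs = 1 sample per year
mask = freqs > 0
top = np.argsort(power[mask])[::-1][:3]        # three most energetic frequencies
print("dominant periods (years):", np.round(1.0 / freqs[mask][top], 1))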

Figure 1. Hydroclimate time series vary on many time scales. (a) A 500-year reconstruction of summer rainfall over Arizona from the Living Blended Drought Analysis. Lower values indicate more severe drought. A 20-year running mean is also shown in blue. (b) A 100-year record of annual-maximum streamflow for the American River at Folsom. Daily streamflow values were divided by the catchment area to yield a normalized flow in units of millimeters per day. (c) The global wavelet power spectrum of the Living Blended Drought Analysis time series (a). Blue (red) dots indicate frequencies which are significant at α=0.10 (0.05) compared to white noise. (d) Global wavelet power spectrum, like (c), for the American River data.

1.3 The Dominant Processes Depend on the Planning Period

Evaluating a particular risk mitigation instrument involves projecting climate risk over the M-year planning period. Consequently, the physical mechanisms which impart predictability on the system differ between projects with long and short planning periods. As illustrated in Figure 2a, the lifetime risk of a permanent structure with a 100-year planning period depends on the magnitude and extent of future human activities, with very large associated uncertainty. Even in the idealized and unrealistic case of a perfect climate model, these uncertainties will be large. By contrast, this perfect climate model may usefully inform estimates of climate hazard over a 3-year insurance contract with much less associated uncertainty.

Figure 2. A stylized illustration of (a) irreducible and (b) estimation uncertainty. (a) Irreducible uncertainty cannot be resolved with better models or data and is dominated in the short term by chaotic behavior of the climate and in the long term by the uncertainty in future anthropogenic climate change. (b) Informational uncertainty limits the potential to identify different climate signals. The blue line shows an idealized climate signal, and the black line shows observations, which are scattered stochastically around the signal line. The green shading shows the true range within which observations will occur 95% of the time, while the gray shading shows the 95% confidence interval as estimated with a linear trend model. IID = independent and identically distributed.

Of course, scientists are not equipped with perfect models. Since different physical processes control climate risk at different time scales, successful integration of climate projections into decision frameworks depends on identifying, and subsequently predicting, these processes. A key question is whether the limited information in an N-year observational record permits the identification and projection of cyclical climate variability and secular change and what the resulting bias and uncertainty portend for risk mitigation instruments with a planning period ranging from a few years to several decades. As shown in Figure 2b, the combination of LFV, stochastic variability, and secular change in a limited record can lead to large uncertainty in estimated future risk. Although Figure 2 focuses on physical processes, similar conclusions would also be valid for the socioeconomic processes which drive exposure to floods and other hydroclimate hazards.

2 Methods

We consider a set of stylized experiments to assess how well one can identify and predict risk associated with cyclical and secular climate signals for the M-year planning period and the probability of overdesign or underdesign of a climate adaptation strategy based on these projections. We consider different temporal structures for the underlying risk which encompass quasiperiodic, regime-like, and secular change, as well as simple statistical models for estimating this risk from an N-year historical record. The relative importance of estimating the short- or long-term risk associated with these extremes depends on the design life M, but the potential to understand and predict these different types of variability depends on the informational uncertainty in the N-year historical record. Though we illustrate our findings with a simple flood risk example, the conclusions drawn apply to other hydroclimate hazards and, in particular, those typically characterized through a time series of annual maxima or minima.

We consider three scenarios for climate risk, which we define by the structure of the underlying climate signal: (i) secular change only, (ii) LFV only, and (iii) LFV plus secular change. For each scenario, and for its identification from the N-year historical record, the bias and variance of the estimated flood risk over the M-year design life relative to the “true model” are computed. We repeat the simulations J=1,000 times for each combination of experiment parameters to obtain estimates of the expected bias and variance for each scenario given M and N (section 2.3).

We caution the reader that the models for sampling climate risk (section 2.1) and for statistically projecting future risk (section 2.2) were chosen for their intuitive interpretation, rather than their general validity (see Held, 2005, for a thoughtful discussion of the value of simple models). We do not, in general, endorse these models for practical use but instead argue that the conclusions drawn from these simple models may be straightforwardly applied to more complex and realistic models. This discussion continues in section 4.

2.1 Sampling Climate Risk

The first step is to sample climate risk by generating synthetic streamflow sequences. To do this, we model annual-maximum flood peaks with a log-normal distribution, conditional on a location parameter which varies in time:
log Q(t) ∼ N(μ(t), σ(t)²)    (1)
We further assume a constant coefficient of variation of the log streamflow
σ(t) = c_v μ(t)    (2)
and apply a lower threshold on the standard deviation
σ(t) ≥ σ_min    (3)
This formulation describes all scenarios for future climate considered in this paper within a single equation. To add climate variability to the system, the only component which needs to change is the dependence of μ(t) on time, which we parameterize as
μ(t) = μ₀ + β x(t) + γ t    (4)
where x(t) represents a climate time series which itself exhibits LFV but not secular change. This parameterization is analogous to the “climate-informed” approach described in several studies for estimating climate risk (Delgado et al., 2014; Farnham et al., 2018; Merz et al., 2014). Following equation 4, when β≠0, there will be LFV, and when γ≠0, there will be secular change. The values of all parameters used for sampling climate risk are listed in Supporting Information S1 for each of the three scenarios considered.
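As a concrete illustration, the Python sketch below draws a synthetic annual-maximum streamflow sequence from equations (1)-(4) as written above. The parameter values and the placeholder climate index are illustrative assumptions only; they are not the values listed in Supporting Information S1, and the actual x(t) used in the experiments is the synthetic NINO3 series described next.

import numpy as np

def sample_flood_sequence(x, mu0=6.0, beta=0.5, gamma=0.01, cv=0.1,
                          sigma_min=0.01, rng=None):
    """Draw one synthetic annual-maximum streamflow sequence."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(x.size)
    mu = mu0 + beta * x + gamma * t           # equation (4): LFV term plus secular trend
    sigma = np.maximum(cv * mu, sigma_min)    # equations (2) and (3)
    log_q = rng.normal(loc=mu, scale=sigma)   # equation (1)
    return np.exp(log_q)

# Example: LFV only (gamma = 0) versus LFV plus secular change (gamma > 0)
climate_index = np.sin(2 * np.pi * np.arange(150) / 5.0)   # placeholder for x(t)
q_lfv_only = sample_flood_sequence(climate_index, gamma=0.0)
q_lfv_trend = sample_flood_sequence(climate_index, gamma=0.01)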

We represent the climate state variable x(t) through an index for ENSO, which has been shown to impact flood risk around the world (Ropelewski & Halpert, 1987; Ward et al., 2014) and has characteristic variability on time scales of 3 to 7 years (Sarachik & Cane, 2009) as well as a “staircase” of lower-frequency scales (Jin et al., 1994). We model ENSO variability by taking a 20,000-year integration of the Cane-Zebiak model (Zebiak & Cane, 1987) to produce a monthly NINO3 index (Ramesh et al., 2016). To create an annual time series, we average the October–December values of the NINO3 index for each year. Figure S1 shows a wavelet spectrum and time series plot of the resulting annual time series. In Supporting Information S1, we consider an alternative parameterization of μ(t), which considers a Markovian state transition rather than an explicit ENSO model, and note a general agreement of results.
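The construction of the annual index from a monthly series can be sketched as follows; the monthly values generated here are random placeholders for the Cane-Zebiak NINO3 output, and the variable names are hypothetical.

import numpy as np
import pandas as pd

# Placeholder monthly NINO3 series (100 illustrative years)
months = pd.date_range("1901-01-01", periods=1200, freq="MS")
nino3_monthly = pd.Series(np.random.default_rng(1).standard_normal(months.size),
                          index=months)

# October-December mean for each year gives the annual climate index x(t)
ond = nino3_monthly[nino3_monthly.index.month.isin([10, 11, 12])]
nino3_annual = ond.groupby(ond.index.year).mean()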

2.2 Projecting Climate Risk Over the Future M Years

Once a synthetic streamflow sequence has been generated, we evaluate the identifiability and predictability of the dominant climate modes by fitting statistical models to the sequence and creating probabilistic projections of the future. We use three well-studied statistical methods for estimating future flood risk, each of which parameterizes time in a different way. One is purely stationary, another captures LFV, and the third captures secular change. We choose these models for their interpretability and simplicity, rather than because of a belief that they are generally valid. For each synthetic flood sequence to be analyzed, the first N years are treated as observations. Once a statistical model is fit to these observations, K=1,000 sequences of annual-maximum streamflow over the future M-year period are generated from the fitted model using Monte Carlo simulation.

In the first case we fit a stationary model to the observed flood record, following classical assumptions of independent and identically distributed sequences. In this model annual-maximum streamflows are taken to follow a log-normal distribution with constant mean and variance. We refer to this model as “LN2 Stationary.” The parameters of the model are fit in a Bayesian framework to fully represent the posterior uncertainty, using the stan probabilistic computing package (Carpenter et al., 2017) with weakly informative priors (Gelman et al., 2017; Simpson et al., 2017). The full model, including priors, is given in equation (S1).
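The projection step for this stationary model can be sketched as follows. As a simplification, the Bayesian Stan fit is replaced here by a bootstrap approximation of parameter uncertainty, so the sketch mimics, rather than reproduces, the posterior spread; the function name and defaults are assumptions for illustration.

import numpy as np

def project_stationary(q_obs, M, K=1000, rng=None):
    """Simulate K future M-year sequences from a stationary log-normal fit."""
    rng = np.random.default_rng() if rng is None else rng
    log_q = np.log(q_obs)
    n = log_q.size
    futures = np.empty((K, M))
    for k in range(K):
        # Resample the N-year record to approximate parameter uncertainty,
        # then draw an M-year future from the refitted log-normal distribution.
        resample = rng.choice(log_q, size=n, replace=True)
        mu_hat, sigma_hat = resample.mean(), resample.std(ddof=1)
        futures[k] = np.exp(rng.normal(mu_hat, sigma_hat, size=M))
    return futures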

Next, we modify this stationary model to incorporate secular change. Many studies have done this by regressing certain parameters of the model on time (see Salas et al., 2018, for a comprehensive review). We consider an extension of the stationary log-normal model by adding a time trend on the scale parameter and maintaining a constant coefficient of variation, as given in equation (S2). We refer to this model as “LN2 Linear Trend.” This model gives a lower bound on total informational uncertainty because it correctly represents the trend's known form, whereas in real-world analyses the form of the trend is unknown.
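A corresponding sketch for the trend model follows. It replaces the Bayesian fit of equation (S2) with an ordinary least-squares regression of log streamflow on time, with uncertainty in the fitted trend again approximated by resampling the observed years; this is an illustrative stand-in, not the authors' implementation.

import numpy as np

def project_trend(q_obs, M, K=1000, rng=None):
    """Simulate K future M-year sequences from a log-normal model with a linear time trend."""
    rng = np.random.default_rng() if rng is None else rng
    log_q = np.log(q_obs)
    n = log_q.size
    t_obs = np.arange(n)
    t_fut = np.arange(n, n + M)
    futures = np.empty((K, M))
    for k in range(K):
        idx = rng.integers(0, n, size=n)     # bootstrap the observed years
        slope, intercept = np.polyfit(t_obs[idx], log_q[idx], deg=1)
        sigma = np.std(log_q[idx] - (intercept + slope * t_obs[idx]), ddof=2)
        futures[k] = np.exp(intercept + slope * t_fut + rng.normal(0.0, sigma, size=M))
    return futures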

Finally, we explicitly model LFV using a hidden Markov model (HMM). An HMM is a latent variable model in which the system being modeled is assumed to follow a Markov process with unobserved (i.e., hidden) states S(t) (Rabiner & Juang, 1986). The (unobserved) states evolve following a first-order Markov process, and the observed variable (e.g., streamflow) depends only on the underlying state. HMMs have been widely used for modeling streamflow sequences (Bracken et al., 2016) and ENSO (Rojo Hernandez et al., 2017). For simplicity, we fit a two-state HMM to each streamflow sequence, using the Baum-Welch algorithm and assuming that the data follow a log-normal distribution conditional only on the unobserved state variables. This algorithm simultaneously estimates the transition matrix of the Markov process and the conditional parameters of each distribution. Future floods are then estimated by simulating future states from the estimated transition matrix and then drawing Q(t) conditional on the simulated state.
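The forward-simulation step can be sketched as follows, assuming the transition matrix and the state-conditional log-normal parameters have already been estimated (the paper obtains them with the Baum-Welch algorithm); the numerical values shown are illustrative assumptions only.

import numpy as np

def simulate_hmm_future(P, mus, sigmas, s0, M, K=1000, rng=None):
    """Simulate K future M-year flood sequences from a fitted two-state HMM.

    P      : 2x2 transition matrix of the hidden Markov chain (rows sum to 1)
    mus    : log-normal location parameter for each hidden state
    sigmas : log-normal scale parameter for each hidden state
    s0     : hidden state inferred for the last observed year
    """
    rng = np.random.default_rng() if rng is None else rng
    futures = np.empty((K, M))
    for k in range(K):
        s = s0
        for t in range(M):
            s = rng.choice(2, p=P[s])                              # evolve the hidden state
            futures[k, t] = np.exp(rng.normal(mus[s], sigmas[s]))  # flow depends only on the state
    return futures

# Illustrative values: a persistent "wet" state and a persistent "dry" state
P = np.array([[0.9, 0.1], [0.2, 0.8]])
futures = simulate_hmm_future(P, mus=[6.4, 5.8], sigmas=[0.3, 0.3], s0=0, M=50)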

2.3 Evaluating Fitting Models

Both estimation bias and estimation uncertainty affect the utility of a climate risk projection. An instrument whose design was based on projections with overestimated variance or positive bias will be overdesigned, either causing the risk manager to avoid the investment, given its higher cost, or leading to unnecessary diversion of funds from other instruments. Similarly, an instrument designed based on underestimated variance or negative bias may be underdesigned and thus fail to protect the public.

We evaluate both the estimation bias and estimation uncertainty. For a given choice of M, N, and generating model, we compare the synthetic streamflow sequence's N-year “historical record” and the K=1,000 posterior simulations of future flows. The estimated expected number of floods per year is computed by counting, for each of the K posterior simulations, the number of exceedances of the flood design threshold and dividing by M to obtain exceedances per year. We then compute the variance of these K estimates. We further calculate the bias by averaging the estimate across the K samples and comparing this average to the corresponding exceedance rate over the M-year “future period” of the synthetic streamflow sequence. Since the “observed” number of flood exceedances from the generating model is inherently noisy for an M-year period, we average the bias and variance across J=1,000 different streamflow sequences to compute expected values of both. These sequences are generated with the same underlying parameters, but the specific synthetic NINO3 sequence (or set of Markov states) may differ between the J sequences.
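For a single synthetic sequence, this evaluation can be sketched as follows; the array names are hypothetical, and the expected bias and variance reported in section 3 are obtained by averaging these quantities over the J=1,000 generated sequences.

import numpy as np

def bias_and_variance(futures, q_future, threshold):
    """Bias and variance of the projected flood exceedance rate for one sequence.

    futures   : (K, M) array of simulated future annual-maximum streamflows
    q_future  : length-M "true" future drawn from the generating model
    threshold : flood design threshold
    """
    M = q_future.size
    rate_hat = (futures > threshold).sum(axis=1) / M   # exceedances per year, one per simulation
    rate_true = (q_future > threshold).sum() / M       # "observed" exceedances per year
    bias = rate_hat.mean() - rate_true
    variance = rate_hat.var(ddof=1)
    return bias, variance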

2.4 Experiment Design

Figure 3 describes the experimental design. We assess estimation bias and variance for three scenarios of future climate. First, we consider an idealized scenario where only secular change is present in the system and LFV is fully damped (“secular change only”). Next, we consider the “preindustrial” case where there is no secular change but LFV modulates climate risk in time (“low-frequency variability only”). Finally, we consider a more realistic (though still idealized) case with both LFV and secular change (“low-frequency variability plus secular change”). Model parameters for each scenario are given in Text S1.

Figure 3. Flow chart describing experiment design. Parameters are shown in red. N denotes the informational uncertainty (length of historical record) and M the amount of extrapolation (project design life). Calculated quantities are shown in white. Quantities used for analysis are shown in blue.

Computation was carried out in the python programming language, making particular use of the matplotlib, numpy, pandas, pomegranate, scipy, and xarray libraries for scientific computing (Hoyer & Hamman, 2017; Hunter, 2007; Jones et al., 2001; McKinney, 2010; Schreiber, 2017; van der Walt et al., 2011). Wavelet analysis was conducted using the WaveletComp package (Roesch & Schmidbauer, 2016) in the R programming language. Bayesian models were written in the stan probabilistic programming language (Carpenter et al., 2017) using the No U-Turn Sampler (Betancourt, 2017; Hoffman & Gelman, 2011). The codes used to generate the figures and text of this paper are available in a live repository at https://github.com/jdossgollin/2018-robust-adaptation-cyclical-risk and a permanent archive at https://doi.org/10.5281/zenodo.1294280.

3 Results

The three scenarios considered for future climate are illustrated in Figure 4, which shows a single synthetic streamflow sequence generated with N=50 and M=100. We also show projected future climate risk from each of the three estimating models described in section 2.2. This figure highlights that even where projections of average streamflow are unbiased, a projection spread that is too large will cause the threshold exceedance probability to be overestimated. In the remainder of this section we present a more systematic analysis of each of these three cases.

Figure 4. An illustration of the estimation procedure. A single streamflow sequence with N=50 and M=100 is shown for each of the three cases (secular only, LFV only, and secular plus LFV) considered. The blue line shows the observed sequence. The gray shading indicates the 50% and 95% confidence intervals using each of the three fitting methods discussed (rows). The horizontal black line indicates the flood threshold. LFV = low-frequency variability.

3.1 Secular Change Only

In the idealized case where only secular change exists, accurate climate predictions need either to use a long record to identify and model this trend or to ignore the trend and predict only a few years ahead. This is shown in Figure 5, which depicts the estimation bias and variance for each of the three estimation models for many combinations of M and N.

Figure 5. Expected estimation bias and variance for sequences generated with secular change only (no low-frequency variability). Sequences were fit to each of three statistical models (columns) for different N and M (x and y axes, respectively). Top row shows estimation bias, and bottom row shows log standard deviation of estimates. Note the uneven spacing of the x and y axes.

The log-normal trend model tends to overestimate risk (positive bias), except when N is large, because the model assigns substantial probability to the trend being larger than it actually is. The variance of these estimates is also large. This again highlights the difficulty of fitting complex models for estimating risk when informational uncertainty is large. By contrast, the stationary log-normal model and HMM, which do not account for secular change, show relatively low variance of their estimates and exhibit low bias for short M. As N grows, these (mis-specified) models can only represent the trend by setting the scale parameter very large, leading to high estimation variance and (as M grows) also a large bias. This principle has prompted some to consider only the most recent years of the data, deliberately shortening N (i.e., Müller et al., 2014). However, these results also highlight that the increase in variance as N is reduced may quickly outpace the utility of any bias reductions.

If the analyst knows a priori that secular change is present in a time series, and if M is long, then a complex model which represents the processes causing this change is required. Here the log-normal linear trend model has the advantage of being correctly specified (both the generating and fitting processes assume a log-normal distribution conditional on a linear time trend), which is generally not the case in the real world (Montanari & Koutsoyiannis, 2014; Serinaldi & Kilsby, 2015). As a result, in real-world settings a longer N may be required to identify trends whose exact form is not known. Alternatively, if M is small then it may be reasonable to use a stationary estimate, since the bias will be small and the variance substantially lower.

3.2 LFV Only

We next turn to the idealized case where LFV is present but there is no secular change in the system. Figure 6 highlights that identification of nonexistent trends from limited data may lead to gross overestimation of true risk through an increase in the variance of the estimated risk. As expected, the stationary log-normal model performs well overall, with low bias and low variance. The HMM actually outperforms the stationary model, with slightly lower variance than the stationary model, because it better captures the multimodal distribution that emerges from dependence on the ENSO index, which exhibits several regimes (see Figure S1). By contrast, the linear trend model performs poorly for low N and high M because a positive probability is assigned to the existence of a positive trend.

Figure 6. As Figure 5 but for sequences generated with zero secular change and strong low-frequency variability.

Of particular relevance to the analysis of real-world data sets is the ratio of the project planning period M to the characteristic periods of variability of the LFV. If this period is much shorter than M, then a stationary assumption may provide reasonable estimates, and fewer observations may be required (shorter N). As shown in Figure S1, the ENSO time series is most active in the 3- to 6-year band. In the real world, however, many hydroclimate time series vary at multidecadal and longer frequencies. In this case, as illustrated in Figure 2, the characteristic periods may be as large as or larger than M, particularly if multidecadal modes such as the Pacific Decadal Oscillation or Atlantic Multidecadal Oscillation are involved, and the LFV must therefore be estimated explicitly. This in turn requires a longer observational record N in order to identify and predict these different signals.

3.3 LFV and Secular Change

In the final and most realistic case, where both LFV and secular change are present, stationary models perform well for short M, while for long M, the trend must be identified from a long record and modeled explicitly.

Consistent with the conceptual illustration of Figure 2, the results of Figure 7 highlight that the relative importance of secular change and LFV depends on M. When M is long, climate risk is dominated by secular change and it becomes essential to model this risk explicitly with a more complex model (i.e., the linear trend model). Alternatively, when M is short LFV dominates and the increased variance associated with estimating a trend is not worth the modest reduction in bias. As before, when the informational uncertainty is large (small N), the identifiability and predictability of the trend are limited.

Figure 7. As Figure 5 but for sequences generated with both low-frequency variability and secular change.

4 Discussion

Evaluating and implementing investments for climate risk mitigation involves making projections of climate risk, which generally exhibits both LFV and secular trends, over the M-year project life of the instrument. The success of this prediction will depend on the identifiability of different signals from limited information, the time scales of LFV relative to the project life of the instrument, and the degree of intrinsic uncertainty in the system. In this paper we took a synthetic data approach to explore the implications of varying M and N in stylized scenarios that represent important features of real-world hydroclimate systems.

Figures 5 and 7 show that for projects where M is sufficiently short, intrinsic uncertainty is low and cyclical climate variability is dominant over the project planning period (Hodgkins et al., 2017; Jain & Lall, 2001). However, one's ability to identify and predict this variability depends on having a model of sufficient complexity to represent the processes that cause LFV and on sufficient data to fit that model. In this case, the project may be in the “potential predictability zone” of Figure 8. If sufficient information is not available, however, then simple models which represent fewer processes may be preferred (the “rough guess zone”).

Figure 8. The importance of predicting different signals, and the identifiability and predictability of the signals, depends on the degree of informational uncertainty (N) and the project planning period (M).

For projects with longer M, our results highlight the importance of identifying and predicting secular change. As illustrated schematically in Figure 2, large uncertainties (e.g., as to future CO2 concentrations and local climate impacts) lead to large intrinsic uncertainty in projections of future climate risk. As the physical mechanisms cascade from global (e.g., global mean surface temperature) to regional (e.g., storm track position; Barnes & Screen, 2015) and local (e.g., annual-maximum streamflows) scales, informational uncertainties also compound and increase (Dittes et al., 2017). With sufficient information (large N), this informational uncertainty may be reduced, but these data cannot address intrinsic uncertainty, and this zone is thus named the “intrinsic uncertainty zone.” Finally, if N is limited, then there will be strong potential for misleading estimates and overextrapolation (i.e., a “danger zone” for planning).

These findings were derived conceptually and through idealized computational experiments for simulating and predicting climate risk, but the principles are applicable to more complex, physically based methods. For example, flood frequency analysis may join observations across time and space (Lima et al., 2016; Merz & Blöschl, 2008) or apply model chains based on general circulation models and hydrologic models (see Merz et al., 2014) to increase N. We suggest that the sample size N defined in our experiments may be straightforwardly interpreted as a measure of the total informational uncertainty in the analysis; as N increases, informational uncertainty decreases.

Similarly, real-world climate adaptation plans will typically include multiple instruments which may be placed in different locations and times in a sequential fashion. Even if the planning period of a portfolio is long, the individual instruments within the portfolio may have short planning periods. Since section 3 shows that the bias and variance of climate risk projections tend to increase with M, the total bias and variance associated with sequencing 20 consecutive M=5-year projects will be less than that associated with making a single M=100-year project. This effect will be compounded by the fact that if the first M=5-year project is based on estimates with informational uncertainty N, the second will have N+5, the third N+10, and so on.

The climate adaptation decisions which our analysis can inform are typically framed as economic cost-benefit analyses which discount future cash flows at some annual rate (Powers, 2003; Sodastrom et al., 1999). The application of a positive discount rate, mandated for many public sector projects in the United States (Powers, 2003), further emphasizes the importance of predicting near-term risk. Projects with long planning periods must therefore overcome future discounting, the potential for large bias or variance, and the fact that all estimates are made with informational uncertainty N. By contrast, the informational uncertainties for a sequence of short-term instruments are N, N+M, N+2M, …, potentially yielding improved identifiability and predictability of relevant climate signals.

5 Summary

In this paper we considered how the temporal structure of the climate affects the potential for successful prediction over a finite M-year future period. We began with three premises, or observations, about the nature of climate risk: (i) that different climate risk mitigation instruments have different planned life-spans; (ii) that climate risk varies on many scales; and (iii) that the processes which dominate this risk over the planning period depend on the planning period itself. Although the simulations presented here are neatly divided into secular change, LFV only, and LFV plus secular change, real-world hydroclimate time series exhibit LFV on many time scales and several sources of (not necessarily linear) secular change, adding further informational and intrinsic uncertainties.

Depending on the specific climate mechanisms that impact a particular site and the predictability thereof, the cost and risk associated with a sequence of short-term adaptation projects may be lower than with building a single, permanent structure to prepare for a worst-case scenario far into the future. For most large actors, a portfolio of both large-M and small-M projects will likely be necessary; none of this precludes the need for mitigation of global and local climate change and the execution of vulnerability reduction strategies.

Acknowledgments

The authors thank Celine Mari, Alberto Montanari, and one anonymous reviewer for comments which greatly improved this manuscript. The authors thank Nandini Ramesh of Columbia University for providing the synthetic NINO3 index from a 100,000-year run of the Cane-Zebiak model as described in Ramesh et al. (2016). The authors thank John High of the U.S. Army Corps of Engineers for providing the naturalized daily streamflows at the Folsom Dam. J. D. G. thanks the NSF GRFP program (grant DGE 16-44869: “Understanding & Predicting Climate Drivers of Extreme, Mid-latitude River Floods”) and SERDP program (grant 2516: “Climate Informed Estimation of Hydrologic Extremes for Robust Adaptation to Non-Stationary Climate”) for support. All codes and data used to generate this paper are available in a live repository (https://github.com/jdossgollin/2018-robust-adaptation-cyclical-risk/) and a permanent archive (https://doi.org/10.5281/zenodo.1294280).