# Time of Detection as a Metric for Prioritizing Between Climate Observation Quality, Frequency, and Duration

## Abstract

We advance a simple framework based on “time of detection” for estimating the observational needs of studies assessing climate changes amidst natural variability and apply it to several examples related to ocean acidification. This approach aims to connect the Global Ocean Acidification Observing Network “weather” and “climate” data quality thresholds with a single dynamic threshold appropriate for a range of potential ocean signals and environments. A key implication of the framework is that measurement frequency can be as important as measurement accuracy, particularly in highly variable environments. Pragmatic cost-benefit analyses based on this framework can be performed to quantitatively determine which observing strategy will accomplish a given detection goal soonest and resolve a signal with the greatest confidence and to assess how the trade-offs between measurement frequency and accuracy vary regionally.

## Key Points

- Time of detection can be used to evaluate the effectiveness of climate observing efforts
- Measurement frequency can be as important as measurement accuracy, particularly in highly variable environments
- Climate quality measurements are critical for monitoring some, but not all, climate signals

## 1 Introduction

A growing body of scientific literature focuses on assessing the “time of emergence” of various climate signals or the time at which a climate signal grows to exceed a multiple of the noise in an observed or modeled climate record. The idea has been used both to characterize how rapidly modeled climate signals become meaningful relative to natural variability (e.g., Christian, 2014; Hawkins & Sutton, 2012; Henson et al., 2017; Keller et al., 2014; Rodgers et al., 2015) and to determine the length of a record required for detection of a climate signal despite natural variability and/or observational uncertainty (e.g., Carter et al., 2016; Ilyina et al., 2009; McKinley et al., 2016; Weatherhead et al., 1998). The latter application is referred to here as “time of detection,” and is our focus. There is also currently scientific discussion regarding what level of measurement uncertainty is allowable for climate research and how requirements might differ when researching shorter-timescale natural phenomena (e.g., ocean weather; Newton et al., 2015). We contend that these topics are connected by a simple, practical question: “What is needed to confidently assess whether the Earth system is changing over a given length of time in a given region?” The answer to this question is, in some cases, certainly “better measurements,” but other possible answers include “a longer measurement time series,” “more frequent measurements of the system,” or even “knowledge of how much natural variation we should expect.” In this study, we present a simple approach that combines the time-of-detection concept with measurement uncertainties into a framework that can be used to test the effectiveness of monitoring efforts within the context of the potential answers to this question. The approach accounts for differing timescales of variability and allows for strategies that might be used to reduce the impacts of natural variability in the measurement record.

We first present the framework and then show how it can be applied using a handful of examples that are related to ocean acidification (OA). While we focus on OA applications, we note that this approach could be applied to any climate signal (e.g., warming, eutrophication, and deoxygenation). With a large number of emerging climate signals to observe—and a large and growing number of sensor and measurement platform technologies being used to observe them—our vision is that this approach will provide means to optimize the effectiveness of future observation strategies.

The framework is built around the minimum requirements for signal detection, but we caution that “good enough for detection” is only good enough provided some common but seldom tested simplifications hold (e.g., normally distributed uncertainties and variabilities). It is therefore advisable to aim higher than the minimum benchmarks provided by this framework: the further one exceeds the minimum requirements, the sooner a signal can be detected and the better the magnitude of the signal will be constrained. Nevertheless, the minimum signal detection criterion is a convenient quantitative measure with which different observation strategies can be compared.

## 2 The Framework

The framework centers on a simple detection criterion: the signal (*S*, expressed as a difference from an initially or distantly measured value) must equal or exceed twice the noise (*N*, expressed as a standard deviation; e.g., Keller et al., 2014):

*S* ≥ 2 × *N*   (1)

Below we discuss *S* and *N* and their components.

The *S* and *N* terms are estimated from a consideration of the signals of interest, the variability of the associated systems, and the uncertainties inherent in the measurements and simplifying assumptions. For marine systems, the common component terms for *S* and *N* are measurement uncertainty (*U*_{M}), daily variability (*V*_{D}), subseasonal variability (*V*_{SS}), seasonal variability (*V*_{S}), interannual and longer timescale natural variability (*V*_{I}), long-term change (Δ) or trend multiplied by time (*T* × d*t*), and uncertainty in any assumptions (*U*_{A}) made during data reduction. Depending on the region and question of interest, one must decide whether each of these components is part of the signal or the noise. Once the components are partitioned between *S* and *N*, components of the noise can be combined as the square root of the sum of their squares. A common example in climate science is detecting the presence of a long-term trend in a raw signal such as seawater alkalinity. In this case the signal would equal Δ while the noise would be the combination of all other terms (e.g., Carter et al., 2016; Ilyina et al., 2009) added in quadrature:

*N* = (*U*_{M}^{2} + *U*_{A}^{2} + *V*_{D}^{2} + *V*_{SS}^{2} + *V*_{S}^{2} + *V*_{I}^{2})^{1/2}   (2)

Data reduction choices can reduce or eliminate individual noise terms. Averaging measurements collected across a full annual cycle, for example, has the effect of averaging out the seasonal variability *V*_{S} and allowing it to be neglected. Alternately, a researcher interested in *V*_{S} (e.g., Fassbender, Alin, et al., 2018) could “detrend” monthly measurements by subtracting a linear fit to a multiyear time series, allowing the impacts of *T* and the longer-term portion of *V*_{I} to be neglected (these two terms can be challenging to distinguish from one another in short time series; see work by Weatherhead et al. (1998) for more on this topic). However, in both cases errors in the assumption that these approaches completely removed the impacts of the variabilities/trends must be included (as *U*_{A}) in the noise estimate. For example, *U*_{A} might equal the uncertainty on the annual mean value in the first example above and the uncertainty on the subtracted trend line in the second. The possibility of reducing the variability by averaging portions of a highly temporally resolved time series gives rise to an important insight for observation network design: An observation network providing a large number of moderately accurate measurements may be preferable to one that provides infrequent yet highly accurate measurements, particularly when natural variability is large compared to the measurement uncertainty. The proposed framework allows for quantitative comparison of the merits of the full spectrum of such possibilities using, for example, the *S* to *N* ratio.

Two additional complications that could be (but are not here) included in the framework are evolving variability terms (e.g., a growing *V*_{S}; Fassbender, Rodgers, et al., 2018) and spatial variability. Spatial variability is an additional mode of variability that should be considered for observing platforms that can cross natural seawater property gradients (e.g., floats, drifters, and ship measurements). For this reason, these platforms are often implemented as arrays that partially cancel these biases using a large number of independent measurements within a region. These complications are topics for the future development of this method.

A researcher interested in the largest measurement uncertainty that would still permit detection of a known or expected signal (Δ, or *S* in this case) could solve for this *U*_{M} (Figure 1a) as

*U*_{M} ≤ ((*S*/2)^{2} − *U*_{A}^{2} − *V*_{D}^{2} − *V*_{SS}^{2} − *V*_{S}^{2} − *V*_{I}^{2})^{1/2}   (3)

Similarly, the framework can be rearranged to give the smallest trend (*T*) that could be detected in a time series of length d*t* (here the *S* is expanded to *T* × d*t*):

*T* ≥ 2 × *N*/d*t*   (4)

Because the noise terms combine in quadrature, these expressions also imply a nonlinear increase in the minimum d*t* expected for a given increase in *U*_{M} (supporting information Figure S1).
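
For readers who prefer code to algebra, equations 1–4 reduce to a few lines. The Python sketch below is our illustration (not code from the original analysis); the example numbers are the BATS decadal-sampling values from Table 1.

```python
import math

def combined_noise(*components):
    """Combine independent noise terms in quadrature (equation 2)."""
    return math.sqrt(sum(c ** 2 for c in components))

def detection_time(noise, trend):
    """Minimum record length dt (years) such that S = T * dt >= 2 * N (equation 4)."""
    return 2.0 * noise / trend

def max_measurement_uncertainty(signal, *other_noise):
    """Largest U_M that still allows detection of a known signal S (equation 3).
    Returns None when the signal cannot exceed the other noise terms alone
    (the "x" entries in Table 1)."""
    budget = (signal / 2.0) ** 2 - sum(c ** 2 for c in other_noise)
    return math.sqrt(budget) if budget >= 0 else None

# BATS decadal sampling (Table 1, in uatm): V_D=2, V_SS=8, V_S=31, V_I=4, U_M=2
n = combined_noise(2, 2, 8, 31, 4)   # ~32 uatm of combined noise
dt = detection_time(n, 2.0)          # ~32 years to detect a 2 uatm/yr trend
u = max_measurement_uncertainty(40, 2, 8, 31, 4)  # None: a 40 uatm signal cannot exceed the variability
```

The `None` return reproduces the “x” entry for the BATS decadal row of Table 1: with only one measurement per decade, no measurement accuracy is good enough because the unresolved variability alone exceeds half the signal.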

When rearranged to solve for d*t* (Figure 1b), equation 4 is one formulation for “time of emergence/detection” (e.g., Hawkins & Sutton, 2012; Keller et al., 2014). This framework approach to time-of-emergence estimation differs from the formulation of Weatherhead et al. (1998)—which is also used in climate science (Beaulieu et al., 2013; Henson et al., 2010; Henson et al., 2018; Sutton et al., 2018)—in that it considers timescales of variability separately and explicitly rather than inferring the net impact of all variability collectively from deseasonalized monthly time series data. Our approach has the advantage of being applicable to a broader range of climate records but the disadvantage that it requires independent information about the impacts of all timescales of variability. For our purposes therefore, even data sets or model simulations that show null results of “no emergent trends” are often useful for characterizing regional variability on different timescales.

## 3 Example Applications

We consider example applications related to underway ship (section 3.1), mooring (section 3.2), and repeat hydrographic cruise (section 3.3) data, summarized in Table 1 and explained in the subsequent sections.

| Code | Description | Units | *U*_{M} | *V*_{D} | *V*_{SS} | *V*_{S} | *V*_{I} | *U*_{A} | *N* | *T* (year^{−1}) | d*t* (year) | *S* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Figure 1a, calculating maximum measurement uncertainty for given signal and variability** | | | | | | | | | | | | |
| 1Uni | Unimak | μatm | x | - | - | - | ~ | - | 27 | 2.3 | 20 | 46 |
| 2Bd | BATS decadal | μatm | x | 2 | 8 | 31 | 4 | - | 20 | 2 | 20 | 40 |
| 2Bt | BATS monthly time series | μatm | **17** | 2 | 8 | ~ | 4 | 7 | 20 | 2 | 20 | 40 |
| 2Bf | BATS float array | μatm | **18** | 2 | ~ | ~ | 4 | 7 | 20 | 2 | 20 | 40 |
| 2Bm | BATS mooring | μatm | **18** | ~ | ~ | ~ | 4 | 7 | 20 | 2 | 20 | 40 |
| 2Cd | CCE2 decadal | μatm | x | 15 | 36 | 33 | 9 | - | 20 | 2 | 20 | 40 |
| 2Ct | CCE2 monthly time series | μatm | x | 15 | 36 | ~ | 9 | 7 | 20 | 2 | 20 | 40 |
| 2Cf | CCE2 float array | μatm | **6** | 15 | ~ | ~ | 9 | 8 | 20 | 2 | 20 | 40 |
| 2Cm | CCE2 mooring | μatm | **16** | ~ | ~ | ~ | 9 | 8 | 20 | 2 | 20 | 40 |
| 3C_{anth} | Decadal C_{anth} | μmol kg^{−1} | **4.1** | ~ | ~ | ~ | ~ | 0.55^{a} | 4.1^{a} | 0.41 | 20 | 8.2 |
| **Figure 1b, calculating minimum trend rate or length of trend with a given rate** | | | | | | | | | | | | |
| 1Uni | Unimak | μatm | - | - | - | - | ~ | - | 27 | 2.3 | **23** | 54 |
| 2Bd | BATS decadal | μatm | 2 | 2 | 8 | 31 | 4 | - | 32 | 2 | **32** | 64 |
| 2Bt | BATS monthly time series | μatm | 2 | 2 | 8 | ~ | 4 | 9 | 12 | 2 | **12** | 18 |
| 2Bf | BATS float array | μatm | 11 | 2 | ~ | ~ | 4 | 8 | 14 | 2 | **14** | 24 |
| 2Bm | BATS mooring | μatm | 2 | ~ | ~ | ~ | 4 | 10 | 10 | 2 | **10** | 8 |
| 2Cd | CCE2 decadal | μatm | 2 | 15 | 36 | 33 | 9 | - | 52 | 2 | **52** | 104 |
| 2Ct | CCE2 monthly time series | μatm | 2 | 15 | 36 | ~ | 9 | 5 | 40 | 2 | **40** | 80 |
| 2Cf | CCE2 float array | μatm | 11 | 15 | ~ | ~ | 9 | 7 | 22 | 2 | **22** | 41 |
| 2Cm | CCE2 mooring | μatm | 2 | ~ | ~ | ~ | 9 | 10 | 13 | 2 | **13** | 18 |
| 3C_{anth} | Decadal C_{anth} | μmol kg^{−1} | - | ~ | ~ | ~ | ~ | - | 1.45^{a} | **0.15** | 10 | - |
| **Additional calculations discussed in text** | | | | | | | | | | | | |
| | Unimak float array | μatm | **11** | ~ | ~ | ~ | - | - | 29 | 2.3 | 25 | 58 |
| | Peru float array | μatm | 11 | 15 | ~ | ~ | 66 | 4 | 66 | 2 | **66** | 131 |
| | Peru mooring | μatm | 2 | ~ | ~ | ~ | 63 | 4 | 63 | 2 | **63** | 126 |

*Note*. Each row is one calculation. Assumed quantities are underlined. Calculated quantities are in bold. The “code” refers to dots in Figures 1a and 1b, with the number indicating the example set. Italicized calculations fall beyond the Figure 1 axis limits. “x” = signal does not exceed variability alone, so no additional *U*_{M} noise can be calculated; “-” = not needed for calculation (see text for details); “~” = neglected, assumed small, or dealt with using simplifying assumptions. ^{a}From Carter et al. (2017, Appendix B).

### 3.1 *p*CO_{2} Trend Near Unimak Pass, Alaska

Our first example tests whether frequently repeated surface seawater *p*CO_{2} measurements (81 sets of measurements in ~21 years) through Unimak Pass, a 19-km wide and 55-m deep waterway in the Alaskan Aleutian Islands (Figure 2a), are sufficient to establish whether a long-term change has emerged from the envelope of natural variability and how large of a role measurement uncertainty has played in the ability to detect a change there. The signal in this example is the long-term trend *T*, combined with any portion of interannual variability *V*_{I} that acts on timescales comparable to or longer than the ~20-year data record available in version 6 of the Surface Ocean CO_{2} Atlas data set (see Bakker et al., 2016 for a version 3 description). The noise is all other modes of variability combined with a comparatively small (~2 to 5 μatm) *U*_{M}. The constraints of Unimak Pass minimize spatial variability from ship tracks, but this is nevertheless an unresolved additional potential contribution to the signal and noise (Figure 2a).

We use several approaches to reduce the impacts of variability on this record. First, we average all *p*CO_{2} values from each transect of Unimak Pass, keeping only data from 77 transects with at least 10 independent *p*CO_{2} values. This averaging reduces our statistical degrees of freedom to reflect the fact that multiple measurements along brief transects do not provide independent realizations of *V*_{SS}, *V*_{S}, or *V*_{I}, or arguably of *U*_{M} or *V*_{D} (with an only 2- to 3-hr transit through the region at ~20 knots). A *U*_{A} term is necessary to account for uncertainty regarding our assumption that the averaged values represent the true mean value expected along each transect. This *U*_{A} equals the standard deviations of the individual *p*CO_{2} measurements propagated through the averaging for each transect (Figure 2b, error bars). There remains enough combined variability that the long-term trend *T* of the transect-averaged data (estimated using an uncertainty-weighted regression) is not significant over the ~20-year observational period (*p* = 0.068, trend root-mean-square error [RMSE] = 49 μatm). However, these data still have an unknown influence from *V*_{D}, *V*_{SS}, and *V*_{S}. Next, an adjustment is applied to “deseasonalize” the data (see Bates, 2001; Sutton et al., 2018; Takahashi et al., 2009): the average monthly *p*CO_{2} anomalies relative to the long-term mean measured *p*CO_{2} (Figure 2c) are interpolated to the average month of each transect and subtracted from the transect-mean *p*CO_{2}. The observed trend after deseasonalization is 2.3 μatm/year (*p* = 5 × 10^{−6}; Figure 2d).
The 27-μatm RMSE of this fit is due to the combination of unresolved *V*_{D}, *V*_{SS}, the portion of *V*_{I} acting on <20-year timescales, *U*_{M}, and a *U*_{A} term reflecting both errors in the transect averaging and errors in the deseasonalization adjustment (the latter of which averages ~13 μatm, estimated from the standard uncertainty on the average monthly anomalies used for the adjustment).
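
This style of deseasonalization adjustment can be sketched as follows. The data below are synthetic and the adjustment uses simple month bins (the actual analysis interpolates the anomalies to the average month of each transect), so this is an illustration of the idea rather than the study's processing code.

```python
import numpy as np

def deseasonalize(months, pco2):
    """Subtract each calendar month's average pCO2 anomaly (relative to the
    long-term mean) from the observations, following the general approach of
    Bates (2001), Takahashi et al. (2009), and Sutton et al. (2018)."""
    months = np.asarray(months)
    pco2 = np.asarray(pco2, dtype=float)
    record_mean = pco2.mean()
    monthly_anomaly = {m: pco2[months == m].mean() - record_mean
                       for m in np.unique(months)}
    return pco2 - np.array([monthly_anomaly[m] for m in months])

# Synthetic 20-year monthly record: a 2.3 uatm/yr trend beneath a 40 uatm seasonal cycle
years = np.arange(240) / 12.0
months = np.arange(240) % 12 + 1
obs = 350.0 + 2.3 * years + 40.0 * np.sin(2 * np.pi * years)
adj = deseasonalize(months, obs)
# The linear trend is far clearer in `adj` once the seasonal swing is removed
```

With the seasonal cycle removed, an ordinary linear fit to `adj` recovers the imposed 2.3 μatm/year trend even though that trend is small compared to the seasonal amplitude in `obs`.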

Our approach allows us to calculate that it would be ~23 years from 1995 before the observed rate of change in Unimak Pass would exceed the background of deseasonalized variability (Figure 2d), so the signal should be detectable within the next few annual releases of the Surface Ocean CO_{2} Atlas. This analysis demonstrates that a significant trend can be identified even before the trend can be shown to have fully “emerged” from the envelope of natural variability provided some of the influences of natural variability can be averaged out with frequent measurements. However, since we did not separately estimate *V*_{I} acting on >20-year timescales in this example, we cannot definitively say whether such a trend owes to very long timescale interannual variability.

We can also use equation 3 to solve for the maximum *U*_{M} that would result in trend detection within 25 years (i.e., by 2020). The framework suggests that the signal could still be detected within 25 years had *U*_{M} been as large as ±11 μatm (Table 1) or the approximate uncertainty of surface *p*CO_{2} estimates made from profiling floats equipped with pH sensors (Williams et al., 2017). Climate quality data are therefore not required for *p*CO_{2} trend detection within 25 years in this highly variable Aleutian pass when given a time series long and dense enough to constrain the seasonal cycle.
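
As a rough check of this result, equation 3 can be applied directly, treating the 27-μatm deseasonalized trend RMSE as the only other noise term (a simplification of the full Table 1 budget; the arithmetic here is ours):

```python
import math

# Unimak Pass: observed trend T = 2.3 uatm/yr; target detection within dt = 25 years;
# deseasonalized residual noise (trend RMSE) = 27 uatm
T, dt, residual_noise = 2.3, 25.0, 27.0
signal = T * dt                        # 57.5 uatm of accumulated change
u_m_max = math.sqrt((signal / 2.0) ** 2 - residual_noise ** 2)
# u_m_max is ~10 uatm; with the rounding used in Table 1 (N = 29 uatm),
# the same calculation gives ~11 uatm
```

The small difference from the ±11 μatm quoted above reflects rounding in the tabulated values rather than a different method.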

### 3.2 Measurement Timescales and Platforms

In the next example we use highly temporally resolved data and model output to estimate all timescales of variability in the framework at two disparate locations, thereby allowing an exploration of the interplay between measurement frequency, measurement uncertainty, natural variability, and signal detection in different regions. We consider surface *p*CO_{2} variability at two locations, both of which are equipped with moored autonomous *p*CO_{2} measurement systems (Sutton et al., 2014) sampling at intervals of 3 hr: the Bermuda Atlantic Time Series (BATS, 64°W, 31.5°N; Joyce & Robbins, 1996) in the oligotrophic North Atlantic subtropical gyre and a mooring in the California Current Ecosystem (CCE2, 121.8°W, 34.324°N; plotted in supporting information S2). The CCE2 location experiences intense variability from upwelling and biological productivity on seasonal and subseasonal timescales, whereas the BATS location is an open ocean site with substantially less high-frequency variability. *V*_{D}, *V*_{SS}, and *V*_{S} are estimated as the average moving window standard deviations of sets of eight consecutive three-hourly *p*CO_{2} values, 30 consecutive daily average *p*CO_{2} values, and 12 consecutive monthly average *p*CO_{2} values, respectively. Measurement uncertainty contributes to the observed daily variability, so the ±2-μatm *U*_{M} (Sutton et al., 2014) is subtracted in quadrature from the average moving window standard deviation used for *V*_{D}. We supplement these data records with fully coupled Earth system model output from the HadGEM preindustrial control run (r1i1p1) with fixed atmospheric *p*CO_{2} (Collins et al., 2008; supporting information S3). This affords *V*_{I} estimates at these locations as the standard deviations of annually averaged values (see Table 1).
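
These moving window variability estimates can be computed along the following lines. This is a minimal sketch using white noise in place of the real mooring records, so the resulting numbers are arbitrary; only the windowing and the quadrature subtraction of *U*_{M} mirror the description above.

```python
import numpy as np

def moving_window_std(values, window):
    """Average sample standard deviation over consecutive non-overlapping windows."""
    values = np.asarray(values, dtype=float)
    usable = (len(values) // window) * window
    return values[:usable].reshape(-1, window).std(axis=1, ddof=1).mean()

def remove_in_quadrature(total, part):
    """Strip a known contribution (e.g., U_M) from a variability estimate."""
    return float(np.sqrt(max(total ** 2 - part ** 2, 0.0)))

# Synthetic 5-year, 3-hourly pCO2 record (white noise stands in for real variability)
rng = np.random.default_rng(0)
pco2_3hr = 380.0 + 10.0 * rng.standard_normal(8 * 365 * 5)

v_d = remove_in_quadrature(moving_window_std(pco2_3hr, 8), 2.0)  # 8 x 3 hr = 1 day
daily = pco2_3hr.reshape(-1, 8).mean(axis=1)
v_ss = moving_window_std(daily, 30)                              # 30 daily means
monthly = daily[:1800].reshape(-1, 30).mean(axis=1)
v_s = moving_window_std(monthly, 12)                             # 12 monthly means
```

For a real record, the same windowing would be applied to the observed three-hourly, daily mean, and monthly mean *p*CO_{2} series, yielding the *V*_{D}, *V*_{SS}, and *V*_{S} entries of Table 1.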

A surface ocean *p*CO_{2} increase of ~2 μatm/year (Sutton et al., 2018) would be expected to exceed natural variability and a small *U*_{M} (±2 μatm) after 26 years at BATS and after 42 years at CCE2 using decadal measurements that resolve no timescales of variability. If we could eliminate all impacts of *V*_{S} only, of *V*_{S} and *V*_{SS} only, and of all three of *V*_{S}, *V*_{SS}, and *V*_{D}, then these detection times would decrease from 26 to 9, 5, and 4 years at BATS and from 42 to 30, 18, and 9 years at CCE2, respectively (not shown in Table 1). Example platforms that are capable of resolving these modes of variability are monthly time series measurements (Joyce & Robbins, 1996), 5- to 10-day cycling profiling biogeochemical Argo floats (Johnson & Claustre, 2016), and continuously measuring autonomous moorings (Sutton et al., 2014), respectively. However, realistically these platforms only gradually improve their resolution of the modes of variability as more independent realizations of the variability are recorded. Therefore, for the calculations in Table 1 we assess a *U*_{A} equaling the eliminated modes of variability divided by the square root of the number of realizations of each mode of variability after d*t* years and added in quadrature. For example, for moorings, *U*_{A} equals

*U*_{A} = (*V*_{S}^{2}/d*t* + *V*_{SS}^{2}/(12 × d*t*) + *V*_{D}^{2}/(365 × d*t*))^{1/2}

(i.e., one realization of seasonal variability, ~12 of subseasonal variability, and ~365 of daily variability per year), and we solve equation 4 for d*t* iteratively (an exact solution is achievable but many-termed). These calculations suggest that monthly time series data are nearly as effective as moorings at BATS (12 years to detection with a monthly time series compared to 10 years for a mooring) but would be significantly less effective at CCE2 (40 and 13 years, respectively) due to high daily and subseasonal *p*CO_{2} variability. The moored autonomous systems are effective in both locations due to their high sampling frequency and measurement accuracy.
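
The iterative solution for d*t* can be sketched in a few lines of Python. This is our reading of the description above, assuming one realization of seasonal variability, ~12 of subseasonal variability, and ~365 of daily variability accrue per year; with those assumptions the fixed-point iteration reproduces the mooring detection times in Table 1.

```python
import math

def mooring_detection_time(trend, u_m, v_d, v_ss, v_s, v_i, n_iter=100):
    """Solve dt = 2N/T by fixed-point iteration, where the residual term U_A
    shrinks as realizations of each variability mode accumulate over dt years
    (assumed: 1 seasonal, ~12 subseasonal, ~365 daily realizations per year)."""
    dt = 20.0  # initial guess in years
    for _ in range(n_iter):
        u_a_sq = v_s ** 2 / dt + v_ss ** 2 / (12 * dt) + v_d ** 2 / (365 * dt)
        dt = 2.0 * math.sqrt(u_m ** 2 + v_i ** 2 + u_a_sq) / trend
    return dt

# Mooring values from Table 1 (uatm), with an assumed trend T = 2 uatm/yr
dt_cce2 = mooring_detection_time(2.0, 2.0, 15.0, 36.0, 33.0, 9.0)  # ~13 years
dt_bats = mooring_detection_time(2.0, 2.0, 2.0, 8.0, 31.0, 4.0)    # ~10-11 years
```

The iteration converges quickly because *U*_{A} shrinks as d*t* grows, damping each update toward the fixed point.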

With a ±11-μatm *p*CO_{2} uncertainty (at 400 μatm; Williams et al., 2017), the framework suggests that profiling float observations would require 14 and 22 years to detect a *p*CO_{2} trend at BATS and CCE2, respectively. This is longer than is required for moorings (10 and 13 years, respectively) due primarily to the larger float-based *p*CO_{2} uncertainty at BATS and the inability of floats to resolve daily variability at CCE2. Floats are also less effective than time series measurements at BATS, but the reverse is true at CCE2 where floats can detect a trend sooner by resolving subseasonal variability. While the d*t* required for a profiling float at CCE2 is nearly double the requirement for a mooring, the float d*t* would only be 5% greater at CCE2 had that region exhibited the much larger (±63 μatm) interannual variability seen in the upwelling region offshore Peru in the same HadGEM simulation (Table 1, supporting information S3). The ideal measurement platform therefore depends significantly on the expected magnitudes and timescales of variability. For example, moorings are particularly effective in regions where high-frequency variability dominates, whereas a more cost-effective platform that samples less frequently could be preferable in regions with larger low-frequency variability.

A nearly identical (~13 years) time of detection value is obtained from Weatherhead et al.'s (1998) formulation when applied to deseasonalized monthly mooring data at CCE2 (Sutton et al., 2018, omitting the conservative 40% d*t* increase that they assign to allow for potential additional unresolved interannual variability).

### 3.3 Anthropogenic Carbon Accumulation

Examples so far primarily demonstrate the importance of measurement frequency, but meeting the stringent “climate quality” data requirements for carbonate system measurements (Newton et al., 2015) is critical for some OA monitoring applications. One example of this comes from inferring decadal interior ocean anthropogenic carbon (*C*_{anth}) storage from repeat hydrography (e.g., Carter et al., 2017; Kouketsu et al., 2013; Pardo et al., 2014; Waters et al., 2011; Williams et al., 2015; Woosley et al., 2016). With only one synoptic “snapshot” of ocean chemistry each decade, this observation strategy does not allow any modes of natural variability to be directly resolved through repeated observations. However, researchers have developed means of accounting for the impacts of natural variability on dissolved inorganic carbon (DIC) to resolve the signal of interest. One example approach is the extended multiple linear regression (eMLR) technique (Friis et al., 2005) which uses empirical regressions relating DIC to other measured properties affected by natural variability to estimate and remove the impacts of natural variability on DIC (i.e., using diverse measurements to resolve natural variability instead of frequent measurements). The eMLR method has been shown to be partially effective (Carter et al., 2017; Clement & Gruber, 2018; Plancherel et al., 2013), and the purely methodological error was recently estimated from simulations of model output with known *C*_{anth} to average ±0.55-μmol/kg *C*_{anth} (*U*_{A}) over the 200- to 1,500-m depth range where eMLR is most often applied (Carter et al., 2017). However, this uncertainty increases to 1.45-μmol/kg *C*_{anth} when simulated measurement errors common for “climate quality” repeat hydrographic observations are also included (i.e., *U*_{A} and *U*_{M} combined). These two uncertainties are the only remaining noise terms in the framework since the variability terms are assumed to be accounted for by eMLR.

An example *C*_{anth} signal is the average *C*_{anth} storage over the 200- to 1,500-m depth range of the Pacific Ocean along ~180°E (equaling 4.1-μmol/kg *C*_{anth} from 1995 to 2005 in the model simulation considered by Carter et al., 2017, Appendix B). Measurement uncertainties increase noise from 13% (*U*_{A}) to 35% (*U*_{A} and *U*_{M}) of that signal and increase the time of detection from 2.7 to 7.1 years. The eMLR method is therefore capable of detecting the signal with each decadal repeat, and the simulated DIC measurement uncertainty is the dominant source of uncertainty for the decadal *C*_{anth} estimates. The significant noise contribution from measurement uncertainties both prevents detection in (e.g., deeper) water masses with lower *C*_{anth} accumulation rates and increases the estimate uncertainty (i.e., lowers the *S*/*N* ratio) in water masses with higher accumulation rates. The *N* term could therefore be directly reduced by more accurate measurements or—as measurement uncertainties are mostly statistically independent between synoptic surveys—by more frequent or numerous hydrographic line reoccupations to reduce the DIC trend uncertainty. At the current rate of sampling, even the most stringent climate quality observation uncertainties limit the ability to infer interior ocean decadal *C*_{anth} distribution changes in broad swaths of ocean. Therefore, the highly accurate “climate quality” data produced by repeated hydrographic surveys are indeed needed.
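
The detection times quoted here follow directly from equation 4 (the arithmetic below is ours, using the uncertainties quoted above):

```python
# Decadal C_anth accumulation along ~180E, 200-1500 m (Carter et al., 2017)
T = 0.41          # umol/kg/yr, i.e. 4.1 umol/kg per decade
n_method = 0.55   # eMLR methodological uncertainty alone (U_A)
n_total = 1.45    # U_A combined with climate quality measurement errors (U_M)

dt_method = 2 * n_method / T    # ~2.7 years: detection limited by eMLR alone
dt_total = 2 * n_total / T      # ~7.1 years: detection with measurement errors included
noise_fraction = n_total / 4.1  # ~35% of the decadal signal
```

Because both detection times remain below the 10-year reoccupation interval, the decadal signal is detectable; the comparison nevertheless shows that measurement uncertainty, not methodology, dominates the noise budget.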

## 4 Conclusions

The framework is intended to quantify the trade-offs between different observing approaches with respect to the impacts of measurement uncertainty, frequency, and duration and to assess how the importance of these factors depends on the potential signal magnitude and regional natural variability. However, a full consideration of the best measurement approach for a climate observing challenge must involve practical concerns not covered by the framework. Example additional considerations include measurement costs, platform suitability for the region(s) of interest, sensor availability and viability for the signals of interest, the availability of personnel with appropriate expertise, and potential synergies with existing, planned, and past observing efforts.

The examples we consider reveal lessons for an observing system planning exercise: (1) highly accurate measurements are needed to resolve some, but not all, critical climate signals, and the impacts of improving measurement confidence are quantifiable within the time-of-detection context; (2) emergent climate trends can generally be identified sooner with more frequent measurements, particularly for systems that are highly variable on shorter timescales; (3) records that do not identify emergent climate signals still provide important constraints for future oceanographic research; and (4) the comparative merits of different observing approaches (e.g., repeat hydrography, time series, float arrays, and moorings) depend significantly on the expected magnitudes of variability over a range of timescales, on the uncertainties of the measurements produced, and on the uncertainties of the data reduction strategies applied during data workup. Broadly, repeat hydrography can be ideal when data transformations can limit the impacts of natural variability on climate signals, monthly time series are effective at detecting climate signals in areas with low daily and subseasonal variability, float arrays are effective for a range of environments where broad regional trends and patterns are of interest, and moorings can be ideal when resources can be directed at a single question or region, especially when the region experiences high-frequency natural variability.

## Acknowledgments

B. R. C. and A. J. S. are supported by the National Oceanic and Atmospheric Administration Global Ocean Monitoring and Observing Program (Data Management and Synthesis Grant: N8R3CEA-PDM). N. L. W. is supported by the National Academies of Sciences through the Research Associateship Programs Postdoctoral Fellowship. W. E. is supported by the Tula Foundation. C. H. acknowledges support from the National Science Foundation (OPP-1603116 and OCE-1459834). A. J. F. is supported by the David and Lucile Packard Foundation/MBARI. L. B. and A. J. S. received support from NOAA's Ocean Acidification Program. This research was carried out in part under the auspices of the Cooperative Institute for Marine and Atmospheric Studies under cooperative agreement NA10OAR4320143. This is JISAO contribution 2018-0170 and PMEL contribution 4839. Data are from the NOAA website (https://www.nodc.noaa.gov/ocads/oceans/time_series_moorings.html).