
Observations of the seiche that shook the world

SWOT data and processing

Pixel cloud data from the SWOT mission are obtained through the open-source earthaccess Python client (last accessed on February 3rd, 2025). We utilize the version 2.0 HR pixel cloud data with shortname “SWOT_L2_HR_PIXC_2.0” in the earthaccess API. The SWOT high-rate pixel cloud data are also available at: https://doi.org/10.5067/SWOT-PIXC-2.0. We note that all data utilized in the present manuscript are also included in the replication materials. At the time of access, the level 2.0 data constituted the highest processing level available. The Dickson fjord landslides in September and October of 2023 occurred shortly after the transition of SWOT to the Science phase. The SWOT Science phase is characterized by a 20.86-day repeat orbit with 10-day sub-cycles. The orbit is at an inclination of 77.6°, and is thus non-sun-synchronous, which reduces tidal aliasing. A consequence of the 10-day sub-cycles is that repeat measurements occur in groups, leading to relatively short measurement gaps in some portions of the cycle, and complete gaps in others. Data for the September and October events were obtained by filtering all available passes between September 16–26 and October 10–18, respectively. We select all passes which fall into the bounding box given in Table S1. For the September period, this yields 5 initial passes occurring 0.5 days, 2 days, 3 days, 4 days, and 5 days after the event, respectively. Similarly, for the October period, 2 initial passes are identified at 0.5 days and 6 days post-tsunami. We process the SWOT data in two stages, first through manual inspection and then using a standard preprocessing procedure.
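For illustration, the retrieval can be reproduced with a few earthaccess calls. The following minimal sketch assumes valid NASA Earthdata credentials; the bounding-box coordinates are placeholders standing in for the Table S1 extent, and the exact query is given in the replication materials.

```python
# Minimal sketch of the pixel cloud retrieval via the earthaccess API.
import earthaccess

earthaccess.login()  # reads Earthdata credentials from ~/.netrc or prompts

# September window: all passes between September 16-26, 2023.
results = earthaccess.search_data(
    short_name="SWOT_L2_HR_PIXC_2.0",
    temporal=("2023-09-16", "2023-09-26"),
    bounding_box=(-27.5, 72.6, -26.0, 73.0),  # (W, S, E, N); placeholder extent
)
files = earthaccess.download(results, local_path="./swot_pixc")
```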

Manual inspection

Prior to processing, each pass undergoes manual inspection. Due to limitations in the L2 processing algorithms applied to SWOT data, entire passes can be contaminated to the point where the data become unusable. An example of this contamination is shown in Supplementary Fig. 2. These errors are visually obvious and are manually flagged and removed. This yields a final set of observations for the September event at 0.5 days, 2 days, and 5 days, and only a single observation for the October event, taken 0.5 days post-tsunami.

Data processing

All standard geophysical corrections are applied, including wet and dry troposphere delays and cross-over corrections. Unlike during the Cal/Val SWOT orbit, the cross-over correction for the Science phase is much more accurate due to the significant reduction in time between crossovers. Geoid corrections are applied based on the EGM2008 geoid model. Due to interpolation errors with the default FES2022 tidal corrections, a specialized approach is needed to handle tides; a comprehensive discussion is given in Sections “Tidal Estimation” and “Tidal Validation”. After applying the geophysical corrections, we filter out all measurements with sea-surface heights exceeding ±4 m. This threshold was empirically determined following manual inspection of the dataset employed for tidal estimation. Due to retracking issues, spurious observations exist over the fjord which exceed this 4 m threshold and would otherwise bias estimates. To obtain perceptually uniform sea-surface height maps which can be compared between passes, we set the center of each colormap to the midpoint of the cross-channel slope; given the small inaccuracies in the tidal corrections described in Sections “Tidal Estimation” and “Tidal Validation”, we found this procedure to be more robust.
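As a concrete illustration of the screening step, the following sketch applies the empirical ±4 m threshold to corrected heights (array names are hypothetical; the geophysical corrections are assumed to have been applied already):

```python
import numpy as np

def screen_ssh(ssh: np.ndarray, threshold: float = 4.0) -> np.ndarray:
    """Drop spurious, retracking-contaminated sea-surface heights whose
    magnitude exceeds the empirically determined +/-4 m threshold."""
    return ssh[np.abs(ssh) <= threshold]
```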

Cross-channel slope computation

Due to the presence of noise artifacts in some of the SWOT measurements, we rely on estimates of the cross-channel slope between the points X1 and X2 for our analysis. We note that the slice between these points is perpendicular to both the long axis of the fjord and the Love node shown in Fig. 2. As a consequence of the nonuniform sampling of the pixel cloud data, it is necessary to define a tolerance for how far a point may lie from the defined cross-section and still be included. We experimented with several tolerance values and converged on 333 m on either side of the defined cross-section. An example is shown in Supplementary Fig. 3, with ~8000 measurements shown. This choice provides a representative 666 m swath (≈5% of the fjord length) with which to estimate the variability/uncertainty of the cross-channel slope. A description of the Bayesian linear model employed for this procedure is given in Section “Bayesian Regression”.
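The point selection can be illustrated as follows. This sketch assumes the pixel cloud coordinates have been projected to a local metric system (e.g., UTM); the endpoints x1 and x2 stand in for X1 and X2:

```python
import numpy as np

def points_near_section(xy, x1, x2, tol=333.0):
    """Select pixel-cloud points within `tol` meters of the section X1-X2.

    xy : (N, 2) point coordinates in a metric projection; x1, x2 : (2,)
    section endpoints. Returns a mask and the along-section distances."""
    u = (x2 - x1) / np.linalg.norm(x2 - x1)     # unit vector along the section
    rel = xy - x1
    along = rel @ u                             # distance along X1 -> X2
    perp = rel[:, 0] * u[1] - rel[:, 1] * u[0]  # signed perpendicular distance
    mask = np.abs(perp) <= tol
    return mask, along[mask]
```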

SWOT data for tidal estimation

Pixel cloud data are also used to estimate the M2 tide throughout the Dickson fjord. Due to the severe aliasing produced by the irregular temporal sampling, we make use of all available data. The presence of winter sea ice creates further difficulties, as the inclusion of these data severely degrades the accuracy of tidal estimation. As before, we locate all passes which intersect the study region (Table S1) between October 20th, 2023 and November 1st, 2024. This yields an initial dataset of 241 passes. After manual filtering and removal of passes containing sea ice, 23 high-quality passes remain for tidal estimation. We apply the same geophysical corrections as before. In order to obtain time series which can be fed to our spatially coherent Bayesian harmonic analysis, we bin the nonuniformly sampled pixel cloud data onto a fixed grid with 100 m × 100 m resolution. Experiments were conducted at a range of resolutions, and this value was found to yield a good balance between high resolution and reduced sensitivity to noise. Data points in each bin are averaged. The total number of data per bin is variable; we therefore restrict our analysis to bins which contain at least 23 measurements. This threshold is empirically defined: using fewer than 23 measurements in testing produced spurious tidal estimates in some regions.
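A minimal version of the binning step is sketched below, assuming projected coordinates in meters and heights stacked across all retained passes, with the ≥23-measurement screen applied per bin:

```python
import numpy as np

def bin_pixel_cloud(x, y, ssh, res=100.0, min_count=23):
    """Average nonuniform pixel-cloud heights onto a res x res m grid,
    keeping only bins with at least `min_count` measurements."""
    keys = zip(np.floor(x / res).astype(int), np.floor(y / res).astype(int))
    bins: dict[tuple[int, int], list[float]] = {}
    for key, z in zip(keys, ssh):
        bins.setdefault(key, []).append(z)
    return {k: np.mean(v) for k, v in bins.items() if len(v) >= min_count}
```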

Seismic data

Seismic data are accessed using the International Federation of Digital Seismograph Networks (FDSN) web service client for ObsPy (ref. 23). As both refs. 4 and 5 provide a comprehensive analysis of the long-period seismic signal, we choose to focus on two representative stations: II.ALE (ref. 24) and IU.SFJD (ref. 25). In order to isolate the energy associated with the VLP, data are shown after being bandpassed between 10 and 12 mHz. All additional filtering parameters needed to replicate our results are provided in the “seismic_attribution.ipynb” notebook.
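A sketch of the retrieval and VLP band-pass is given below. It assumes the IRIS FDSN endpoint and the vertical long-period channel LHZ, with an illustrative one-day window around the September event; the exact parameters are those of “seismic_attribution.ipynb”:

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2023-09-16T12:00:00")  # illustrative window start
st = client.get_waveforms(network="II", station="ALE", location="00",
                          channel="LHZ", starttime=t0, endtime=t0 + 86400,
                          attach_response=True)
st.remove_response(output="DISP")   # counts -> displacement
st.detrend("linear")
st.filter("bandpass", freqmin=0.010, freqmax=0.012,
          corners=4, zerophase=True)  # isolate the 10-12 mHz VLP band
```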

Seismic attribution

Seismic attribution is performed using data from the II.ALE seismic station. The station is situated 1322.9 km away from the landslide source, at 82.5033°N 62.3500°W. The station sits directly adjacent to the Love node (160°), which runs perpendicular to the major axis of the fjord (Fig. 1B). As a consequence, this station receives almost exclusively Rayleigh waves, as reflected by the dominant vertical component of the VLP signal shown in Fig. 1D. Referencing the SWOT observations to these waveforms requires accurate identification of the phase velocity and source location of the VLP signal.

Phase velocity

To compute the Rayleigh wave phase velocity, we utilize a heterogeneous Earth model as in ref. 5. To obtain the phase velocity of the 10.88 mHz signal, we interpolate between the 10 mHz and 15 mHz LITHO1.0 velocity models (ref. 14). Integrating over the path between the Dickson fjord and the II.ALE station, we obtain a mean phase velocity of 4.0393 km s−1.

Source location

Due to uncertainties in the computed phase velocity, the source location of the VLP signal is not necessarily given by the center of the Dickson fjord. Indeed, both refs. 4 and 5 identify source locations which are nearby, but away from the fjord itself. Using the same heterogeneous Earth model and a Fast Marching method for beamforming, ref. 5 identifies a source location 92.4 km from the landslide location at 72.2°N 25.1°W. This source location is ~1409.5 km away from the II.ALE seismic station. Combined with the computed phase velocity, this yields a travel time of 348.97 s.
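As a simple consistency check, the travel time follows directly from the beamformed distance and the path-averaged phase velocity quoted above (the small discrepancy with 348.97 s reflects rounding of the quoted values):

$$t=\frac{d}{c}=\frac{1409.5\,{{\rm{km}}}}{4.0393\,{{\rm{km}}}\,{{{\rm{s}}}}^{-1}}\approx 349\,{{\rm{s}}}.$$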

Validation with synthetic observations

To test our hypothesis that the phase velocity and source location obtained using the heterogeneous Earth model are appropriate for seismic attribution, we compare our approach to an independent forward modeling exercise carried out in ref. 5. Using their 3 m HySEA simulation of the seiche as a source time function, synthetic 3-component Green’s functions are computed using the Syngine web service and convolved to approximate the displacement signals at the II.ALE station. In order to align the two signals, the authors identify an ~350 s shift empirically. This value is in agreement with the 348.97 s travel time computed using the heterogeneous Earth model and beamforming. This agreement between two completely independent methods confirms the validity of the 348.97 s Rayleigh wave travel time to II.ALE for seismic attribution.

Seismic uncertainties

While our estimated Rayleigh wave travel time is in good agreement with the empirical estimate of ref. 5, it represents an important source of uncertainty in the estimation of the seiche amplitude. No comprehensive approach exists for the quantification of Rayleigh wave phase velocity or beamforming location uncertainties (ref. 26). As such, we provide a lower bound on our uncertainty using the discrepancy between our estimate of 348.97 s and the empirical estimate of 350 s by ref. 5. Assuming a 92-s period of oscillation, this yields an approximate error of 2.12%. Further uncertainties exist because, near the point source, the phase is governed by a Bessel function rather than the complex exponential utilized here. While the uncertainty this introduces is difficult to estimate, it is important to acknowledge (ref. 27). Additionally, testing of homogeneous Earth models yielded differing estimates of both source location and phase velocity. We found the agreement between these models and the empirically estimated travel time to generally be worse, and thus did not consider them further.

Bayesian regression

Both cross-channel slope and tidal estimation are performed using a Bayesian linear model. Our selection of a Bayesian approach reflects our objective to accurately quantify the uncertainty in both the estimated parameters and the SWOT data themselves. Unlike conventional least-squares estimation and other frequentist variants, which provide point estimates of the parameters, a Bayesian approach considers (and computes with) the probability distributions of the parameters. By representing our parameters of interest as probability distributions, the uncertainty associated with each parameter is made explicit, informing a calibrated measure of uncertainty over the target variables. As will be shown, the selection of appropriate priors yields further advantages, such as natural parameter shrinkage and increased robustness to noise. Shrinkage refers to reducing the magnitude of parameters which do not contribute to the solution; this favors simpler models unless the data justify additional complexity.

Consider the linear model given by \({y}_{i}={w}^{{\mathsf{T}}}{x}_{i}+{\epsilon }_{i}\). Here, yi is the ith SWOT sea-surface height measurement, w is a vector of the estimated weights (one per column of the design matrix X), xi is the ith row of an M × N design matrix X, and ϵi is the residual. Here, M is the total number of measurements, and N is the number of input functions (described below). Linear models of this form are employed for both cross-channel slope and tidal estimation. As will be described, in our Bayesian formulation, the inferred elements of the weight vector w govern the shape of the various distributions used in the model. The parametric form of these distributions, and their justification, is given at length below, alongside the variational inference procedure. The tidal estimation procedure is provided in Section “Tidal Estimation”.

Cross-channel slope estimation

To estimate the cross-channel slope, we perform a standard Bayesian linear regression on the instantaneous sea-surface height cross-sections from each pass. The ith row of the design matrix X is simply xi = [1, di], where the entry 1 corresponds to the bias (intercept) and di is the distance along the line \(\overline{{X}_{1}{X}_{2}}\). The observation yi is the corresponding SWOT measurement at distance di along \(\overline{{X}_{1}{X}_{2}}\).

Bayesian analysis

Here we provide a brief overview of the Bayesian model utilized and the variational inference method. A complete derivation of variational inference can be found in ref. 28, and a more detailed exposition of the Bayesian linear model used here can be found in the appendix of ref. 29. The basis of our analysis is Bayes’ theorem, which provides a framework for updating our prior beliefs p(θ) about the distribution of all parameters (and hyperparameters), based on a set of observations \(Y={[{y}_{0},{y}_{1},\ldots,{y}_{N}]}^{T}\) and the design matrix X. A hyperparameter is itself a parameter that governs the probability distribution of another parameter. For example, the mean and variance of a Gaussian are the hyperparameters which define the distribution over another variable. By extension, a hyper-hyperparameter describes a variable that controls the distribution over a hyperparameter. This yields a posterior distribution which describes our final beliefs over θ and is given by

$$p(\theta | Y)=\frac{p(\theta )p(Y| X,\theta )}{p(Y)},$$

(1)

where θ is the set of parameters and hyperparameters of the model, p(θ) is the prior, p(Y∣X, θ) the (data) likelihood, and p(Y) the marginal likelihood, which acts as a normalizing term in the inference (as it does not depend upon θ). Below, we describe each of these components in turn.

Likelihood

The likelihood p(Y∣X, θ) describes the probability that the observed data occurred, given our model. Here, the model is defined by both the design matrix and our prior assumptions about the parameters θ. We take the likelihood term to be of Gaussian form, equivalent to a least-squares error assumption. In our analysis, we weight the squared residual between observed yi and model prediction by a scalar hyperparameter β, which represents the precision, or inverse (co)variance, of the noise. Discussion and validation of this modeling decision in the context of SWOT-based tidal analysis is provided in ref. 20. This has the effect of weighting how tightly our model should fit the data based on how noisy the data are. This assumption yields a Gaussian likelihood term of the form:

$$p(Y| X,\theta )=p(Y| X,w,\beta )={\left(\frac{\beta }{2\pi }\right)}^{N/2}\exp \left\{-\frac{\beta }{2}{E}_{Y}(w)\right\}$$

(2)

where we see the likelihood depends only upon w and β. The error functional \({E}_{Y}(w)\) is given by

$${E}_{Y}={\sum}_{i=0}^{N}{(\,{y}_{i}-{w}^{{\mathsf{T}}}{x}_{i})}^{2}.$$

(3)

Priors

Central to Bayesian inference is the usage of priors. Priors are distributions over the parameters included in a model, and reflect our initial expectation of the functional forms and values the parameters should have. Here we describe the choice of these priors for both parameters and hyperparameters, and describe how they impact the resultant model. Under the mean-field approximation, we assume the prior over all parameters θ can be factorized as

$$p(\theta )=p(w| \alpha )p(\alpha )p(\beta ),$$

(4)

where α = {αj} is a set of hyperparameters which governs the scale of the multi-variate Gaussian over the weights w. We now treat each term individually.

The model weights p(w∣α) come from a zero-mean Gaussian prior, with precisions (inverse variances) α. This choice serves two purposes. First, a Gaussian is the least informative distribution for a quantity which can be either positive or negative, and is thus unbiased in this respect (ref. 29). This is important as both cross-channel slopes and quadrature tidal amplitudes can be positive or negative. Second, weights will only be significantly non-zero if the data require it. Conventionally, this is referred to as an Automatic Relevance Determination prior, as it induces shrinkage over those model weights which do not significantly aid the model in fitting the data. Under this prior, an individual weight wj has the form

$$p({w}_{j}| {\alpha }_{j})={\left(\frac{{\alpha }_{j}}{2\pi }\right)}^{1/2}\exp \left\{-\frac{{\alpha }_{j}}{2}{w}_{j}^{2}\right\}.$$

(5)

The set of weight precisions α, which govern the scale of the weights w, are drawn from a Gamma distribution, which models the distribution over non-negative precisions. In addition to non-negativity, this choice is made for several reasons. First, a Gamma hyperprior is conjugate with the Gaussian prior over the weights, which will be useful when performing inference. A conjugate prior for a given likelihood function is a prior that results in a posterior distribution of the same family as the prior. A Gamma prior also implicitly encourages smaller weight magnitudes, leading to natural shrinkage and promoting sparsity in the inferred weights. Uninformative hyper-hyperparameters a0 = 10−2 and b0 = 10−4, which set the shape and scale of the Gamma, are selected to yield a vague prior over each αj, defined as

$$p({\alpha }_{j})=\Gamma ({\alpha }_{j};{a}_{0},{b}_{0}).$$

(6)

A vague or uninformative prior simply means the assumed distribution is broad. This imposes minimal assumptions regarding the parameter values, whilst still providing natural parameter shrinkage (ref. 30). This is preferable to a uniform prior, which implicitly restricts parameter values and can lead to improper posteriors.

The scalar noise precision β (inverse variance) of the residual ϵ is also modeled as a hyperparameter within the model. Given the least-squares assumption of a Gaussian residual, we once again adopt a Gamma prior over β. The values of the shape and scale parameters are identical to those used in Equation (6), but are defined by the hyper-hyperparameters c0 = 10−2 and d0 = 10−4 such that

$$p(\beta )=\Gamma (\beta ;{c}_{0},{d}_{0}).$$

(7)

Initialization

Models are initialized using a maximum-likelihood (ML) solution such that

$${w}_{ML}={({X}^{{\mathsf{T}}}X)}^{-1}{X}^{{\mathsf{T}}}Y.$$

(8)

The ML solution is then used to initialize the residual precision hyper-parameter, β, such that:

$${\beta }^{-1}=\frac{1}{N}{\sum}_{i=1}^{N}{({y}_{i}-{w}_{ML}^{{\mathsf{T}}}{x}_{i})}^{2}$$

(9)

The ML solution provides an initial estimate that follows from the likelihood, ensuring that we start in an informative region of parameter space. This is useful for initializing the noise precision, β, and helps prevent the variational updates from settling into poor local minima. While absolute guarantees depend on problem-specific properties, ML initialization is a well-established heuristic that improves the stability and efficiency of variational inference.
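A minimal sketch of this initialization (Eqs. (8)–(9)) in NumPy, using a least-squares solve in place of the explicit normal-equations inverse for numerical stability:

```python
import numpy as np

def ml_init(X: np.ndarray, y: np.ndarray):
    """Maximum-likelihood initialization: w_ML from Eq. (8) and the
    residual noise precision beta from Eq. (9)."""
    w_ml, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves argmin ||y - Xw||^2
    resid = y - X @ w_ml
    beta = 1.0 / np.mean(resid**2)                # inverse residual variance
    return w_ml, beta
```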

Variational inference

Fully Bayesian solutions are obtained by marginalizing over the posterior distributions of the parameters. The difficulty arises when computing the posterior distribution, which is almost always analytically intractable. Hence, sample-based approaches such as Markov chain Monte Carlo (MCMC) are often employed. While these methods are good at approximating the true posterior, they scale poorly with the number of parameters included. Further, convergence is not easily assessed. Here, we adopt an approximate inference approach called variational Bayes, referred to herein as VB. VB is a computationally tractable alternative to MCMC, bringing both scalability and a principled approach to assessing whether convergence has been achieved. These attributes are particularly important for the tidal estimation procedure: with over 300,000 individual locations, assessing convergence manually is intractable. The objective of our analysis is to infer the distributions over the individual elements of θ. The basic idea of VB is to adopt analytical approximations for each distribution, which can be optimized in an iterative and computationally tractable way. We first introduce an approximate posterior q(θ∣Y). The functional form of this posterior is chosen to be conjugate with the prior over θ such that q(θ∣Y) factorizes as

$$q(\theta | Y)=q(w| Y)q(\alpha | Y)q(\beta | Y).$$

(10)

This choice follows from the mean-field approximation, which assumes our latent parameters to be mutually independent. This selection is deliberate, as it allows for analytical marginalization over the parameter posteriors, making inference computationally efficient. However, it also neglects correlations between parameters, potentially underestimating uncertainty in the presence of strong multicollinearity. Despite this, the approach has been extensively validated for both trend extraction and tidal estimation and is thus sufficient for these tasks (refs. 20, 29).

Our objective in VB is to minimize the difference between the approximate posterior q(θ∣Y) and the true posterior p(θ∣Y). This difference can be assessed by considering our observable, the data evidence p(Y). Using our approximate posterior, we can rewrite the log evidence log p(Y) as the sum of two separable terms such that

$$\log p(Y)=F\left(p(\theta | Y),q(\theta | Y)\right)+{{{\rm{KL}}}}\left(p(\theta | Y),q(\theta | Y)\right).$$

(11)

This is the fundamental equation of VB and is composed of two terms. The first term is the negative variational free energy, referred to as the evidence lower bound, which provides a strict lower bound on the model evidence. The second term is the Kullback-Leibler divergence between the approximate and true posteriors over θ. This term provides natural model shrinkage, as it increases with the number of free parameters θ. It can be seen that maximizing F(p, q) will result in the approximate posterior being as close as possible to the true posterior. Because q(θ∣Y) factorizes as in Eq. (10), F(p, q) can be maximized by iteratively optimizing each of q(w∣Y), q(α∣Y), and q(β∣Y) separately. Update equations for this procedure can be found in ref. 31. An implementation of this approach can be found in the replication notebooks “Fjord_Tides.ipynb” and “seismic_attribution.ipynb”.
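For concreteness, the coordinate updates for this model can be sketched as follows. The forms below are the standard conjugate mean-field updates for a Gaussian likelihood with ARD Gamma hyperpriors, written to mirror Eqs. (2)–(9); the exact update equations used in the paper are those of ref. 31 and the replication notebooks:

```python
import numpy as np

def vb_linear_regression(X, y, a0=1e-2, b0=1e-4, c0=1e-2, d0=1e-4,
                         n_iter=200, tol=1e-8):
    """Mean-field VB for y = Xw + eps with ARD priors (a sketch)."""
    M, N = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    # ML initialization (Eqs. 8-9)
    w_ml, *_ = np.linalg.lstsq(X, y, rcond=None)
    E_beta = 1.0 / np.mean((y - X @ w_ml) ** 2)
    E_alpha = np.ones(N)
    for _ in range(n_iter):
        # q(w | Y): Gaussian with covariance S and mean m
        S = np.linalg.inv(E_beta * XtX + np.diag(E_alpha))
        m = E_beta * S @ Xty
        # q(alpha_j | Y): Gamma(a0 + 1/2, b0 + (m_j^2 + S_jj)/2)
        E_alpha = (a0 + 0.5) / (b0 + 0.5 * (m**2 + np.diag(S)))
        # q(beta | Y): Gamma(c0 + M/2, d0 + E[||y - Xw||^2]/2)
        resid2 = np.sum((y - X @ m) ** 2) + np.trace(XtX @ S)
        E_beta_new = (c0 + 0.5 * M) / (d0 + 0.5 * resid2)
        converged = abs(E_beta_new - E_beta) < tol * E_beta
        E_beta = E_beta_new
        if converged:
            break
    return m, S, E_alpha, E_beta
```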

Bayesian R-squared

To evaluate the quality of the Bayesian regression, we utilize the Bayesian R2 proposed in ref. 15. This is necessary as the variance of the predicted values can be greater than the variance of the data, rendering the conventional R2 definition nonsensical. The modified Bayes R2 is simply given by

$$\,{\mbox{Bayes}}\,\,{R}^{2}=\frac{{{{\rm{Var}}}}({{{\rm{predicted}}}})}{{{{\rm{Var}}}}({{{\rm{predicted}}}})+{{{\rm{Var}}}}({{{\rm{residual}}}})},$$

(12)

where Var(residual) is the expected variance of the errors as given by the model.
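From the VB posterior above, the metric can be computed directly; this sketch uses the posterior-mean prediction and the expected noise variance 1/E[β] (ref. 15 defines the metric per posterior draw, of which this is the point-estimate analogue):

```python
import numpy as np

def bayes_r2(X, m, E_beta):
    """Bayes R^2 of Eq. (12): Var(predicted) / (Var(predicted) + Var(residual))."""
    var_pred = np.var(X @ m)        # variance of the posterior-mean predictions
    var_resid = 1.0 / E_beta        # expected residual variance from the model
    return var_pred / (var_pred + var_resid)
```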

Tidal estimation

Due to the extreme sparsity of available SWOT data (fewer than 25 measurements over a full year), extreme care is needed when performing tidal harmonic analysis (ref. 20). Harmonic analysis assumes tides can be described by the superposition of waves at discrete tidal frequencies. These frequencies exist at harmonics of the motions of the Earth, Moon, and Sun and are referred to as constituents. In total, there are hundreds of possible constituents; for practical purposes, however, we need only concern ourselves with a few. Given n constituents, the kth constituent with frequency ωk has a corresponding tidal wave given by

$${C}_{k}\cos ({\omega }_{k}t-{\phi }_{k})={A}_{k}\sin {\omega }_{k}t+{B}_{k}\cos {\omega }_{k}t.$$

(13)

Comparisons of tidal constituents are made in terms of the amplitude \({C}_{k}=\sqrt{{A}_{k}^{2}+{B}_{k}^{2}}\) and phase \({\phi }_{k}=\arctan {A}_{k}/{B}_{k}\). Modern tidal analysis is carried out in the time domain using least-squares estimation, and can thus be applied to irregularly sampled time series. To accomplish this, we define the tidal estimation problem as a general linear model, such that the observed sea level, yi, at any time is given by \({y}_{i}={w}^{{\mathsf{T}}}{x}_{i}+{\epsilon }_{i}\). Here, xi is the ith row of an M × N matrix of basis functions, where M is the number of measurements and N = 2n + 1 with n the number of constituents, w is a set of inferred weights, and ϵi is the non-tidal residual. From this, we define the design matrix X, whose ith row is given by

$${x}_{i}={[1,\sin {\omega }_{0}{t}_{i},\cos {\omega }_{0}{t}_{i},\ldots,\sin {\omega }_{k}{t}_{i},\cos {\omega }_{k}{t}_{i}]}^{{\mathsf{T}}}$$

(14)

where 1 corresponds to the bias, and the remaining columns pair with the quadrature amplitudes Ak and Bk. The extreme sparsity of the Dickson fjord reference series is such that only the dominant lunar tide, M2, can be reliably estimated, as shown in ref. 18. Even though we are only interested in M2, it is necessary to include additional constituents in the analysis to “soak up” the contributions of other constituents (refs. 18, 20). After considerable experimentation, and validation against the in situ gauge measurements, it was found that the best M2 estimate was obtained by including the semi-diurnal M2, N2, and S2, and diurnal K1 and O1 constituents in the analysis. Supplementary Table 2 provides the associated periods and origins of each constituent. Section “Tidal Validation” discusses validation with the in-situ CTD observations and the FES2022 tidal model.
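The corresponding design matrix can be assembled directly from the constituent periods. A sketch (the periods below are the standard values; Supplementary Table 2 lists those used in the paper):

```python
import numpy as np

# Constituent periods in hours (standard values).
PERIODS_H = {"M2": 12.4206, "N2": 12.6583, "S2": 12.0000,
             "K1": 23.9345, "O1": 25.8193}

def tidal_design_matrix(t_seconds: np.ndarray) -> np.ndarray:
    """Build the M x (2n+1) harmonic design matrix of Eq. (14):
    a bias column plus a sin/cos pair per constituent."""
    cols = [np.ones_like(t_seconds)]
    for period_h in PERIODS_H.values():
        w = 2.0 * np.pi / (period_h * 3600.0)  # angular frequency, rad/s
        cols += [np.sin(w * t_seconds), np.cos(w * t_seconds)]
    return np.column_stack(cols)
```

The inferred weight pairs (Ak, Bk) then give the amplitude Ck and phase ϕk as defined above.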

Conventional harmonic analysis only considers measurements from a single spatial location. Due to the spatial coherence of the oceanic response to tidal forcing (ref. 32), this procedure leaves considerable information unused. SWOT data provide a complete picture of the instantaneous sea-surface height throughout the Dickson fjord, which can be exploited using an appropriate method. Here, we adopt the spatially coherent harmonic analysis procedure of ref. 20, to which readers are referred for a complete description. The basic idea is to simultaneously estimate the quadrature amplitudes across a set of adjacent points by assuming that the amplitude at any point Pj,k is given by the amplitude w0,0 of the central point P0,0 with a small offset denoted wj,k. The linear model can be expanded to

$${Y}_{j,k}={X}_{0,0}{w}_{0,0}+\rho {X}_{j,k}{w}_{j,k}$$

(15)

where Yj,k is the observation at point Pj,k, X0,0 and Xj,k are the design matrices for points P0,0 and Pj,k, respectively, and ρ represents the probability that the observations Yj,k are correlated with Y0,0. The scalar ρ is obtained by taking the Fisher transform of Pearson’s correlation coefficient r between Y0,0 and Yj,k, such that \({z}^{{\prime} }=0.5[\ln (1+r)-\ln (1-r)]\), and \(\rho=1-\Phi ({z}^{{\prime} })\), where Φ is the cumulative distribution function of the standard normal \({{{\mathcal{N}}}}(0,1)\). By including the probability that the observations are correlated, ρ, we impose the assumption that points with more similar time series will have similar tides. Furthermore, in the limit where Yj,k = Y0,0, the model reduces to Yj,k = X0,0w0,0. We can further restrict the functional form of the relationship between adjacent locations to boost the ratio of data to parameters. As validated in ref. 20, we approximate the relationship between quadrature amplitudes in adjacent locations to be linear, such that

$${Y}_{j,k}=\rho \cdot ({d}_{x}\cdot {X}_{j,0}{w}_{j,0}+{d}_{y}\cdot {X}_{0,k}{w}_{0,k})+{X}_{0,0}{w}_{0,0}$$

(16)

where dx and dy are the normalized distances between points along the horizontal and vertical direction, respectively, taken to be (−1, 0, 1) for convenience when using gridded SWOT data. This approach effectively doubles the ratio of data to parameters and was found to improve uncertainty estimation (ref. 20). By virtue of using gridded data with a resolution of 100 m, the assumption of linearity between neighboring quadrature amplitudes is more than sufficient.
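The coherence weight ρ can be computed directly from the binned time series; a sketch, assuming overlapping valid samples between the central and neighboring bins:

```python
import numpy as np
from scipy.stats import norm, pearsonr

def coherence_weight(y_center: np.ndarray, y_neighbor: np.ndarray) -> float:
    """rho of Eqs. (15)-(16): Fisher transform of Pearson's r between the
    two series, mapped through the standard normal CDF."""
    r, _ = pearsonr(y_center, y_neighbor)
    z = 0.5 * (np.log1p(r) - np.log1p(-r))  # z' = 0.5[ln(1+r) - ln(1-r)]
    return 1.0 - norm.cdf(z)
```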

While the spatially coherent harmonic analysis can be used in tandem with any estimator, here we make use of the variational Bayesian (VB) estimator described above in Section “Bayesian Regression”. This choice is based on the following reasons. First, ref. 20 finds the VB approach to be less sensitive to both stationary (Gaussian) and non-stationary noise artifacts. Second, VB provides natural parameter shrinkage, which is helpful for reducing so-called cross-talk between constituents left out of the analysis. Lastly, the implicit uncertainty information is helpful in assessing the quality of the tidal estimates. The analysis shown in Fig. 4 only includes locations which have at least 23 observations. Supplementary Fig. 4 shows the distribution of SWOT measurements which can be used for tidal estimation throughout the fjord. A complete implementation of this approach, and code to replicate all tidal estimation, is given in “Fjord_Tides.ipynb”.

Tidal validation

Tidal validation is carried out for both the SWOT-derived tidal estimates and the standard tidal estimates from FES2022 (ref. 33), using the in-situ CTD gauge as ground truth (ref. 34). For each constituent k, models (VB or FES) are assessed using the root-mean-squared error, defined as

$${{{\mbox{RMS}}}}_{k}=\overline{\sqrt{{\left[({A}_{k,{{{\rm{mod}}}}}\sin ({\omega }_{k}t)+{B}_{k,{{{\rm{mod}}}}}\cos ({\omega }_{k}t))-({A}_{k,{{{\rm{CTD}}}}}\sin ({\omega }_{k}t)+{B}_{k,{{{\rm{CTD}}}}}\cos ({\omega }_{k}t))\right]}^{2}}},$$

(17)

where the overbar indicates the average over a complete cycle (i.e., 0 → 2π). This metric reflects the combined error in predicted sea-level height introduced by both amplitude and phase errors. We also introduce a secondary metric, the relative RMS error, which weights the RMS error by the magnitude of the CTD gauge constituent: RRMSk = RMSk/Ak,CTD. Comparisons between RMS and relative RMS error are provided in Supplementary Fig. 6. For the SWOT estimates, the median amplitude and phase for the entire fjord are used. As noted above, we only include the constituents M2, N2, S2, K1, and O1 in the analysis due to the limited SWOT data. Of these, the SWOT-derived M2 yields a significant improvement of 52% over FES2022. The SWOT-derived N2, S2, K1, and O1, in contrast, are relatively inaccurate, due to their smaller amplitudes and longer alias periods. Similar results have been found for early SWOT data by ref. 18, where additional constituents were included to yield superior M2 accuracy. When correcting the SWOT data, we utilize just the M2 from SWOT, alongside S2, N2, O1, and K1 from FES2022. As can be seen in the relative RMS panel, many of the additional FES2022 constituents exceed 20% relative RMS error within the Dickson fjord. In order to avoid introducing potential biases from incorrect tidal corrections, we do not include constituents above this 20% threshold. To combat any residual tidal variability, we instead define the center of the colormap to be the midpoint of the cross-channel slope.
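The metric can be evaluated numerically from the estimated quadrature amplitudes; the sketch below computes the conventional root-mean-square of the misfit over one full cycle, together with the relative form:

```python
import numpy as np

def constituent_rms(A_mod, B_mod, A_ctd, B_ctd, n=10_000):
    """RMS_k (Eq. 17) and relative RMS for one constituent, averaged
    over a complete cycle theta = omega_k * t in [0, 2*pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    err = (A_mod - A_ctd) * np.sin(theta) + (B_mod - B_ctd) * np.cos(theta)
    rms = np.sqrt(np.mean(err**2))
    rrms = rms / np.hypot(A_ctd, B_ctd)  # normalized by the CTD amplitude C_k
    return rms, rrms
```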

It is important to note that there are large seasonal changes in tides within Arctic regions (ref. 35). Due to the limited SWOT data available, and the relatively short CTD gauge deployment (≈2 months), the extent of this variability is difficult to estimate. However, since the CTD gauge was deployed during both events, validation with this gauge is appropriate, as we wish to have accurate tidal estimates during this period. As more SWOT data become available, the accuracy of tidal estimates will improve. It is unlikely that these improved estimates will change the conclusions of this study, as we are primarily interested in the cross-channel slope and find no evidence of it being tidally driven based on inspection of the M2 tide. However, improved tidal estimation could prove useful for understanding how the tide itself modulates the period of the seiche, and thus impacts its dynamics and dissipation.

Simple analytical seiche

In order to estimate the total seiche amplitude, and to relate the September and October events using the seismic observations, we consider a theoretical seiche acting on a simplified fjord geometry. We adopt the notation and fjord geometry employed in ref. 5 for consistency. Here, we assume the seiche to act as an oscillating horizontal force directed N160°E (perpendicular to the Dickson fjord). Due to discrepancies between the geometries defined in refs. 4 and 5, we avoid prescribing values for the precise fjord dimensions except where necessary. The sloshing of the seiche produces a shift x of the center of mass of the body of water, which can be written as

$$x(t)=\Delta x\sin \omega t.$$

(18)

Here Δx is the amplitude of the horizontal oscillation and ω = 2π/92 rad s−1 is the angular frequency of oscillation. As described in ref. 5, the amplitude of the Rayleigh waves produced is proportional to the magnitude of the horizontal force. Hence, we write Δx in terms of the total force F. Taking the second derivative of the position of the center of mass, and multiplying by the mass m of the oscillating water body, we find (up to sign)

$$F(t)=m\Delta x{\omega }^{2}\sin \omega t.$$

(19)

It can be seen that the maximum force occurs at the maximum displacement Δx of the center of mass. Using SWOT data, we can only observe the cross-channel slope. As such, it is useful to relate the force back to the surface displacement Δz such that

$$\Delta x=\frac{L}{3}\frac{\Delta z}{h+{h}_{s}}.$$

(20)

Substituting Equation (20), the maximum force can be rewritten in terms of the vertical displacement Δz as

$$F=\frac{Lm{\omega }^{2}}{3}\frac{\Delta z}{h+{h}_{s}}.$$

(21)

We recognize that the surface displacement is Δz = SL, where S is the cross-channel slope. Thus, the force F can be written in terms of the cross-channel slope as

$$F=\frac{{L}^{2}m{\omega }^{2}}{3}\frac{S}{h+{h}_{s}}.$$

(22)

Thus, we have shown that the force is directly proportional to the cross-channel slope, which allows for direct comparison between events.
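For a fixed geometry, Eq. (22) implies the forcing ratio between the two events reduces to the ratio of their observed slopes. A minimal sketch (all geometric values are placeholders, not the dimensions adopted in refs. 4 and 5):

```python
import numpy as np

def seiche_force(S, L, m, omega, h, hs):
    """Maximum horizontal force of Eq. (22): F = L^2 m omega^2 S / (3(h + hs)).
    S: cross-channel slope; L, h, hs, m: fjord geometry and mass (placeholders)."""
    return L**2 * m * omega**2 * S / (3.0 * (h + hs))

omega = 2.0 * np.pi / 92.0  # seiche angular frequency (rad/s), 92 s period
# Because F is linear in S, the geometry cancels in the ratio:
# F_oct / F_sep = S_oct / S_sep, which is how the two events are compared.
```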

In situ measurements

In-situ measurements are provided by the CTD station located in the inner portion of the Dickson fjord (shown in Fig. 1A; data can be accessed at ref. 34). The station provides both standard meteorological and oceanic variables. The CTD gauge is mounted on the fjord wall inside an HDPE tube to protect it from ice in winter. Here, we only make use of the wind speed and direction, density, and water depth measurements. Data are sampled at 15-min intervals, which creates severe aliasing issues for observing the 92 s VLP signal. Due to the location of the device in the inner fjord, the seiche signal decays below pre-event noise levels after only a few hours and is thus unobservable in the data (see Supplementary Fig. 5).

Dickson fjord

Here we present a brief overview of the Dickson fjord. A more comprehensive description of the physiography and climate of the fjord system can be found in ref. 5. The Dickson fjord sits at the terminus of the Hissinger Glacier in the northernmost area of the Kong Oscar fjord system in East Greenland (see Supplementary Fig. 1). The fjord lies deep within the Arctic Circle, and is thus covered by sea ice over much of the year; the ice dissipates in July and forms again in October. The fjord fills a U-shaped valley basin, with multiple smaller glaciers situated on each side. The fjord itself is 38 km long and between 2.5 and 3.2 km wide. The depth ranges from 150–200 m in the west to 700 m in the east, with an approximate depth of 540 m in the center of the fjord across from the landslide location. Bathymetry estimates are taken from a 2018 survey by the National Danish Hydrographic Office at a resolution of 15 m. We note that no data exist within 150–300 m of the coast due to the limitations of the survey vessel. This missing data creates large uncertainties in these regions, which can significantly influence numerical simulations.

Tsunami information

Both tsunamis originated from landslides occurring in the same gully situated beneath an unnamed glacier (ref. 5). These landslides were caused by debuttressing of the glacier following glacial thinning over the past decade. Direct observation of landslide scarring and dirtying of the glacier using satellite imagery by both refs. 4 and 5 confirms this theory. Additionally, ref. 5 evaluates the landslide dynamics of the September 16th landslide via seismic inversion. While the October 11th event did not produce a new landslide scar, a Sentinel-2 image showed significantly more erosion than was present after the September 16th event.

Empirical evidence of the two tsunamis is given by a combination of nearby in-situ sensors and observed run-up heights. Using satellite imagery, both refs. 4 and 5 observe an initial 200 m run-up height near the location of the slide, with average run-ups of 60 m observed through the remainder of the fjord for the September 16th event. Tsunami run-up for the October 11th event was only observed 200 m west of the gully, at ~75% of the magnitude of the September event at this location (60 m vs 80 m). Almost 72 km away, at the Ella Ø research station, the initial run-up height was in excess of 4 m, creating significant local damage. The location of Ella Ø relative to the Dickson fjord is shown in Supplementary Fig. 1. To our knowledge, no information exists regarding the run-up at Ella Ø for the October 11th event due to Arctic winter darkness.

Tidal modulation of the seiche

While the following results do not impact the conclusions of this manuscript regarding the initial size of the seiche, they point to an interesting mechanism of seiche dissipation. The authors of ref. 4 observed an ~6 h modulation of the seiche frequency, ranging from 10.874 to 10.879 mHz, which they attribute to tidal influences (their Supplementary Fig. S1C). They suggest a possible mechanism is the change of depth modulating the speed of the wave c through the relationship \(c=\sqrt{gH}\), where g is the gravitational acceleration and H is the depth. However, a simple inspection of both the September and October VLP signals shows that the seiche actually behaves in the opposite manner (e.g., longer periods at higher tides). We here suggest an alternative mechanism: stratification. In semi-enclosed basins (defined as having an open boundary for waves to radiate into the sea), the seiche wave speed is modulated by stratification through \(\sqrt{gH\Delta \rho /\rho }\), where ρ is the average density and Δρ = ρ2 − ρ1 is the density difference between the bottom (ρ2) and top (ρ1) layers (ref. 36). Inspection of the correlation between surface-water elevation and density ρ shows a strong positive correlation (Supplementary Fig. 7). Based on this observation, we propose the following explanation. At low tide, the water column is strongly stratified, with fresh (less dense) glacial runoff sitting on top of denser salty water (large Δρ). As the tide comes in, the water column mixes, leading to the observed higher density ρ at the surface and thus weaker stratification (small Δρ). Because the fjord is quite deep (≈500 m), the modulation of the wave speed by Δρ dominates that by the tidal change in depth H, consistent with the observed behavior. Since the CTD gauge only provides observations at the surface, it is impossible to conclude definitively that the observed density fluctuations are indicative of the stratification between ρ1 and ρ2. Hence, further simulation study is needed to validate this hypothesis.
