@wd15
Last active February 17, 2026 03:57
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Data Assimilation Mechanics</title>
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
displayMath: [['$$', '$$'], ['\\[', '\\]']]
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<style>
body {
font-family: 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
background-color: #f4f7f6;
color: #333;
margin: 0;
padding: 40px;
display: flex;
justify-content: center;
}
.slide-container {
width: 100%;
max-width: 1400px;
background-color: #ffffff;
border-radius: 12px;
box-shadow: 0 10px 30px rgba(0,0,0,0.1);
padding: 40px;
box-sizing: border-box;
}
.header {
text-align: center;
margin-bottom: 40px;
}
.header h1 {
margin: 0;
color: #2c3e50;
font-size: 32px;
}
.header p {
color: #7f8c8d;
font-size: 18px;
margin-top: 10px;
}
.grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 30px;
}
.panel {
background-color: #fdfdfd;
border: 1px solid #e0e6ed;
border-top: 4px solid #3498db;
border-radius: 8px;
padding: 25px;
box-shadow: 0 4px 6px rgba(0,0,0,0.02);
}
.panel.full-width {
grid-column: 1 / -1;
border-top-color: #2ecc71;
}
.panel.warning {
border-top-color: #e74c3c;
}
.panel.math-heavy {
border-top-color: #9b59b6;
}
.panel h2 {
margin-top: 0;
font-size: 20px;
color: #2c3e50;
border-bottom: 1px solid #eee;
padding-bottom: 10px;
}
.equation-box {
background-color: #f8f9fa;
padding: 15px;
border-radius: 6px;
margin: 15px 0;
overflow-x: auto;
text-align: center;
font-size: 110%;
}
.description {
font-size: 15px;
line-height: 1.6;
color: #555;
}
ul {
margin: 0;
padding-left: 20px;
}
li {
margin-bottom: 8px;
}
strong {
color: #2c3e50;
}
</style>
</head>
<body>
<div class="slide-container">
<div class="header">
<h1>The Mathematical Mechanics of Data Assimilation</h1>
<p>Bridging the gap between the Digital Twin forecast and Physical Twin reality</p>
</div>
<div class="grid">
<div class="panel full-width">
<h2>1. The Core DA Loop: Forecast & Update</h2>
<div class="description">
Data Assimilation is a continuous two-step process. First, we predict the future state using our physical models (or fast FNO surrogates). Second, we correct that prediction using real-time sensor observations.
</div>
<div style="display: flex; gap: 20px;">
<div style="flex: 1;">
<div class="equation-box">
<strong>Step 1: The Forward Forecast</strong><br>
$$ \mathbf{x}_{f, k} = \mathcal{M}(\mathbf{x}_{a, k-1}) + \mathbf{w}_{k} $$
</div>
<ul class="description">
<li><strong>$\mathcal{M}(\cdot)$</strong>: Non-Linear Forward Operator (Adamantine / FNO)</li>
<li><strong>$\mathbf{w}_{k}$</strong>: Process noise (model error)</li>
</ul>
</div>
<div style="flex: 1;">
<div class="equation-box">
<strong>Step 2: The State Update (Analysis)</strong><br>
$$ \mathbf{x}_a = \mathbf{x}_f + \mathbf{K} (\mathbf{y} - \mathbf{H} \mathbf{x}_f) $$
</div>
<ul class="description">
<li><strong>$\mathbf{K}$</strong>: Kalman Gain (Weighting Matrix)</li>
<li><strong>$(\mathbf{y} - \mathbf{H} \mathbf{x}_f)$</strong>: The "Innovation" (Sensor Data minus Expected Data)</li>
</ul>
</div>
</div>
</div>
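The forecast/update loop above can be sketched for a linear toy model. The operator `M`, observation operator `H`, and noise levels below are illustrative assumptions, not the Adamantine/FNO operators referenced in the panel.

```python
import numpy as np

n, m = 4, 2                                          # state dim, observation dim
M = 0.95 * np.eye(n)                                 # toy linear forward operator
H = np.zeros((m, n)); H[0, 0] = 1.0; H[1, 2] = 1.0   # observe two state components
Q = 0.01 * np.eye(n)                                 # process-noise covariance (w_k)
R = 0.05 * np.eye(m)                                 # observation-noise covariance

x_a = np.ones(n)                                     # previous analysis x_{a,k-1}
P_a = np.eye(n)

# Step 1: forward forecast  x_f = M(x_a), P_f = M P_a M^T + Q
x_f = M @ x_a
P_f = M @ P_a @ M.T + Q

# Step 2: analysis update  x_a = x_f + K (y - H x_f)
y = np.array([0.9, 1.1])                             # pretend sensor readings
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)     # Kalman gain
innovation = y - H @ x_f                             # sensor data minus expected data
x_a_new = x_f + K @ innovation
```

After the update, the analysis sits between the forecast and the observations in the observed components, weighted by the gain.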
<div class="panel warning">
<h2>2. The Error Covariance Problem ($\mathbf{P}_f$)</h2>
<div class="description">
To find the optimal weight ($\mathbf{K}$), we must track the uncertainty and spatial correlations of every variable in the mesh. The theoretical definition relies on the unknown "true" state:
</div>
<div class="equation-box">
$$ \mathbf{P}_f = \mathbb{E}[(\mathbf{x}_f - \mathbf{x}_{true})(\mathbf{x}_f - \mathbf{x}_{true})^T] $$
</div>
<div class="description">
For a thermal mesh with $N$ degrees of freedom, storing and propagating this explicitly requires a massive, computationally intractable $N \times N$ matrix:
</div>
<div class="equation-box">
$$ \mathbf{P}_f = \begin{bmatrix}
\sigma_{1}^2 & \dots & \text{cov}(1,N) \\
\vdots & \ddots & \vdots \\
\text{cov}(N,1) & \dots & \sigma_{N}^2
\end{bmatrix} \in \mathbb{R}^{N \times N} $$
</div>
</div>
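A back-of-envelope estimate makes the storage problem concrete. The mesh size below ($N = 10^6$ thermal DOFs) is an assumed, plausible figure, not one taken from the slide.

```python
# Cost of storing the full forecast covariance P_f in float64.
N = 1_000_000                       # assumed number of mesh DOFs
bytes_per_entry = 8                 # one float64 entry
full_cov_bytes = N * N * bytes_per_entry
print(full_cov_bytes / 1e12)        # terabytes for a single N x N matrix
```

At this size a single covariance matrix occupies 8 TB, before any of the $O(N^3)$ linear algebra needed to propagate it.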
<div class="panel math-heavy">
<h2>3. Deriving $\mathbf{K}$ and the EnKF Solution</h2>
<div class="description">
We obtain $\mathbf{K}$ by writing out the Analysis Error Covariance ($\mathbf{P}_a$) and setting the derivative of its trace with respect to $\mathbf{K}$ to zero, which minimizes the total posterior uncertainty:
</div>
<div class="equation-box">
$$ \frac{\partial \text{Tr}(\mathbf{P}_a)}{\partial \mathbf{K}} = 0 \implies \mathbf{K} = \mathbf{P}_f \mathbf{H}^T (\mathbf{H} \mathbf{P}_f \mathbf{H}^T + \mathbf{R})^{-1} $$
</div>
<div class="description" style="margin-top: 20px;">
<strong>The EnKF Solution:</strong> Because $\mathbf{x}_{true}$ is unknown and $\mathbf{P}_f$ is too large, the Ensemble Kalman Filter approximates it dynamically using the sample covariance of $N_{ens}$ parallel model runs:
</div>
<div class="equation-box" style="background-color: #e8f4f8; border: 1px solid #b3e0ff;">
$$ \mathbf{P}_f \approx \frac{1}{N_{ens}-1} \sum_{i=1}^{N_{ens}} (\mathbf{x}_{f}^{(i)} - \bar{\mathbf{x}}_f)(\mathbf{x}_{f}^{(i)} - \bar{\mathbf{x}}_f)^T $$
</div>
</div>
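The sample-covariance formula above can be sketched directly. The state dimension and ensemble size are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_ens = 100, 32                       # assumed state dim and ensemble size

# n_ens forecast members x_f^{(i)}, stacked as columns of X
X = rng.standard_normal((n, n_ens))

x_bar = X.mean(axis=1, keepdims=True)    # ensemble mean  x̄_f
A = X - x_bar                            # anomalies  x_f^{(i)} - x̄_f
P_f_approx = (A @ A.T) / (n_ens - 1)     # sample covariance, rank <= n_ens - 1
```

Note that the full $N \times N$ matrix never needs to be formed in practice: the gain $\mathbf{K} = \mathbf{P}_f \mathbf{H}^T(\mathbf{H}\mathbf{P}_f\mathbf{H}^T + \mathbf{R})^{-1}$ can be assembled from the low-rank factors `A` and `H @ A` alone, which is what makes the EnKF tractable for large meshes.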
</div>
</div>
</body>
</html>