DTE & AICOMAS 2025
LMS, École Polytechnique, Paris, France
February 21, 2025
Objectives
Requirements
Weak formulation:
\[ \begin{cases} \int_{\Omega} \boldsymbol{\Sigma} : d_{\boldsymbol{u}}\boldsymbol{E}\left( \boldsymbol{u}^* \right) ~\mathrm{d}\Omega = W_{\text{ext}}\left(\boldsymbol{u},\boldsymbol{u}^*\right) \quad \forall \boldsymbol{u}^* \in H^1_0\left(\Omega \right) \\[4mm] \int_{\Omega} \left(pf + \frac{\partial \Psi \left(\boldsymbol{u},\Phi_f\right)}{\partial \Phi_s}\right) \Phi_f^* ~\mathrm{d}\Omega = 0 \quad \forall \Phi_f^* \end{cases} \label{eq:MechPb_lung} \]
Kinematics:
Behaviour
\[ \definecolor{BleuLMS}{RGB}{1, 66, 106} \definecolor{VioletLMS}{RGB}{77, 22, 84} \definecolor{TealLMS}{RGB}{0, 103, 127} % \definecolor{BleuLMS2}{RGB}{0, 103, 202} \definecolor{BleuLMS2}{RGB}{0, 169, 206} \definecolor{BleuLMPS}{RGB}{105, 144, 255} \definecolor{accentcolor}{RGB}{1, 66, 106} \definecolor{GreenLMS}{RGB}{0,103,127} \definecolor{LGreenLMS}{RGB}{67,176,42} \definecolor{RougeLMS}{RGB}{206,0,55} \]
\[ \boldsymbol{u} = \mathop{\mathrm{arg\,min}}_{\boldsymbol{u} \in H^1\left(\Omega \right)} \int_{\Omega} \Psi\left( \boldsymbol{E}\left(\boldsymbol{u}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right)\right) \right) ~\mathrm{d}\Omega - W_{\text{ext}}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right) \]
Kinematics:
Behaviour - Saint Venant-Kirchhoff
\(\Psi = \frac{\lambda}{2} \text{tr}\left(\boldsymbol{E}\right)^2 + \mu \boldsymbol{E}:\boldsymbol{E}\)
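For concreteness, a minimal NumPy sketch of this energy density (a toy illustration, not the actual implementation; the deformation gradient and Lamé parameters below are placeholders):

```python
import numpy as np

def svk_energy_density(E, lam, mu):
    """Saint Venant-Kirchhoff energy density: Psi = lam/2 * tr(E)^2 + mu * E:E."""
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.sum(E * E)

# Green-Lagrange strain E = 1/2 (F^T F - I) from a small, arbitrary deformation gradient
F = np.eye(3) + 1e-2 * np.random.default_rng(0).standard_normal((3, 3))
E = 0.5 * (F.T @ F - np.eye(3))
print(svk_energy_density(E, lam=1.0, mu=0.5))
```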
Reduced-order model (ROM)
\[ \mathcal{U}_h = \left\{\boldsymbol{u}_h \; | \; \boldsymbol{u}_h \in \text{Span}\left( \left\{ N_i^{\Omega}\left(\boldsymbol{x} \right)\right\}_{i \in \mathopen{~[\!\![~}1,N\mathclose{~]\!\!]}} \right)^d \text{, } \boldsymbol{u}_h = \boldsymbol{u}_d \text{ on }\partial \Omega_d \right\} \]
\[ \boldsymbol{u} \left(x_{0,0,0} \right) = \sum\limits_{i = 0}^C \sum\limits_{j = 0}^{N_i} \sigma \left( b_{i,j} + \sum\limits_{k = 0}^{M_{i,j}} \omega_{i,j,k}~ x_{i,j,k} \right) \]
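Read as code, this interpolation is a sum of activated affine combinations of the inputs; a minimal NumPy sketch, assuming uniform widths \(N_i = N\), \(M_{i,j} = M\) and arbitrary placeholder weights:

```python
import numpy as np

sigma = np.tanh                                  # placeholder activation function
rng = np.random.default_rng(0)
C, N, M = 2, 3, 4                                # small, arbitrary sizes
x = rng.standard_normal((C + 1, N + 1, M + 1))   # inputs x_{i,j,k}
w = rng.standard_normal((C + 1, N + 1, M + 1))   # weights omega_{i,j,k}
b = rng.standard_normal((C + 1, N + 1))          # biases  b_{i,j}

# u = sum_i sum_j sigma( b_{i,j} + sum_k omega_{i,j,k} * x_{i,j,k} )
u = np.sum(sigma(b + np.sum(w * x, axis=-1)))
print(u)
```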
Solving the mechanical problem amounts to finding the continuous displacement field that minimises the potential energy \[E_p\left(\boldsymbol{u}\right) = \frac{1}{2} \int_{\Omega}\boldsymbol{E} : \mathbb{C} : \boldsymbol{E} ~\mathrm{d}\Omega - \int_{\partial \Omega_N}\boldsymbol{F}\cdot \boldsymbol{u} ~\mathrm{d}A - \int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{u}~\mathrm{d}\Omega. \]
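As an illustration of this energy-minimisation viewpoint, a PyTorch sketch on a toy 1D clamped bar solved by direct minimisation of \(E_p\) (geometry, load and discretisation are placeholders, not the lung problem):

```python
import torch

# Toy analogue: minimise E_p(u) = 1/2 int EA (u')^2 dx - int f u dx, with u(0) = 0
n, L, EA, f = 51, 1.0, 1.0, 1.0
dx = L / (n - 1)
u_free = torch.zeros(n - 1, requires_grad=True)     # unknown nodal values, u(0) fixed

opt = torch.optim.LBFGS([u_free], max_iter=200)

def closure():
    opt.zero_grad()
    u = torch.cat([torch.zeros(1), u_free])         # enforce the Dirichlet condition
    du = (u[1:] - u[:-1]) / dx                      # element-wise gradient
    Ep = 0.5 * EA * torch.sum(du ** 2) * dx - f * torch.sum(u) * dx
    Ep.backward()
    return Ep

opt.step(closure)
u = torch.cat([torch.zeros(1), u_free]).detach()
print(u[-1])    # tip displacement, analytical value f*L**2 / (2*EA) = 0.5
```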
In practice
Degrees of freedom (Dofs)
h-adaptivity: red-green refinement
Technical details:
| Dofs (initially) | \(569~438\) |
|---|---|
| Training time (s) | \(7\) |
| GPU (V100) | \(1\) |
| CPUs | \(40\) |
Low-rank approximation of the solution to avoid the curse of dimensionality with \(\beta\) parameters
Full-order discretised model
Reduced-order model
Finding the reduced-order basis
PGD: (Chinesta et al., 2011; Ladevèze, 1985)
Tensor decomposition
\[\textcolor{VioletLMS}{\boldsymbol{u}}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\mu}\right) = \sum\limits_{i=1}^{m}\textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i\left(\boldsymbol{x}\right)}~\textcolor{LGreenLMS}{\lambda_i\left(\mu\right)}\]
Discretised problem
PGD: (Chinesta et al., 2011; Ladevèze, 1985)
Tensor decomposition - extra-coordinates
\[ \textcolor{VioletLMS}{\boldsymbol{u}}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right) = \sum\limits_{i=1}^m \textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i(\boldsymbol{x})} ~\textcolor{LGreenLMS}{\prod_{j=1}^{\beta}\lambda_i^j(\mu^j)} \]
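A schematic NumPy evaluation of such a separated representation on tensor-product grids (mode values are random placeholders; in practice they come from the trained space and parameter modes):

```python
import numpy as np

m, n_x, n_mu1, n_mu2 = 5, 200, 30, 30    # modes, space points, two parameter grids
u_bar = np.random.rand(m, n_x, 3)        # space modes u_i(x), 3 displacement components
lam1 = np.random.rand(m, n_mu1)          # parameter functions lambda_i^1(mu^1)
lam2 = np.random.rand(m, n_mu2)          # parameter functions lambda_i^2(mu^2)

# u(x, mu^1, mu^2) = sum_i u_i(x) * lambda_i^1(mu^1) * lambda_i^2(mu^2)
u = np.einsum('ixc,ip,iq->xcpq', u_bar, lam1, lam2)
print(u.shape)                           # (200, 3, 30, 30)
```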
Discretised problem
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \overline{\boldsymbol{u}}(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda^j(\mu^j) \]
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \textcolor{RougeLMS}{\sum\limits_{i=1}^{2}} \overline{\boldsymbol{u}}_{\textcolor{RougeLMS}{i}}(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda_{\textcolor{RougeLMS}{i}}^j(\mu^j) \]
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \sum\limits_{i=1}^{\textcolor{RougeLMS}{m}} \overline{\boldsymbol{u}}_i(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda_i^j(\mu^j) \]
Greedy algorithm
The PGD is
Note
Graphical implementation of Neural Network PGD
\[ \boldsymbol{u}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right) = \sum\limits_{i=1}^m \textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i(\boldsymbol{x})} ~\textcolor{LGreenLMS}{\prod_{j=1}^{\beta}\lambda_i^j(\mu^j)} \]
Interpretable NN-PGD
Illustration of the surrogate model in use (preprint: Daby-Seesaram et al., 2024)
Parameters
NN-PGD
Loss convergence
Loss decay
Construction of the ROB
Solution and error fields (figures) shown for two parameter sets:

- \(E = 3.8 \times 10^{-3}~\mathrm{MPa}\), \(\theta = 241^\circ\)
- \(E = 3.1 \times 10^{-3}~\mathrm{MPa}\), \(\theta = 0^\circ\)
| \(E\) (kPa) | \(\theta\) (rad) | Relative error |
|---|---|---|
| 3.80 | 1.57 | \(1.12 \times 10^{-3}\) |
| 3.80 | 4.21 | \(8.72 \times 10^{-4}\) |
| 3.14 | 0 | \(1.50 \times 10^{-3}\) |
| 4.09 | 3.70 | \(8.61 \times 10^{-3}\) |
| 4.09 | 3.13 | \(9.32 \times 10^{-3}\) |
| 4.62 | 0.82 | \(2.72 \times 10^{-3}\) |
| 5.01 | 2.26 | \(5.35 \times 10^{-3}\) |
| 6.75 | 5.45 | \(1.23 \times 10^{-3}\) |
Strategy
Note
Computing the mappings
\[\omega = \arg \min\underbrace{ \int_{\omega} I\left( x \right) \mathrm{d}\omega}_{\Psi\left(\omega\right)}\]
With the requirement that \(I\left( x \right) \begin{cases} < 0 ~\forall x \in \mathcal{S} \\ > 0 ~\forall x \not \in \mathcal{S} \end{cases}\), with \(\mathcal{S}\) the image footprint of the shape to be registered.
The domain \(\omega\) can be described by the mapping \[\Phi\left(X\right) = X + u\left(X\right)\] from a reference shape \(\Omega\), leading to \[\int_{\omega} I\left( x \right) \mathrm{d}\omega = \int_{\Omega} I \circ \Phi \left( X \right) J \,\mathrm{d}\Omega\] where \(J = \det \nabla \Phi\).
\[D\Psi\left(u\right)\left(v^*\right) = \left(\nabla^H \Psi \left(u\right), v^* \right)_H\]
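A rough numerical sketch of the functional above after the change of variables, with a purely synthetic signed image (a disc level set), a hypothetical radial mapping and midpoint quadrature; only the structure \(\int_{\Omega} I \circ \Phi \, J \, \mathrm{d}\Omega\) is meant to be illustrative:

```python
import numpy as np

def I(x):
    """Synthetic signed image: negative inside a disc of radius 0.8, positive outside."""
    return np.linalg.norm(x, axis=-1) - 0.8

def Phi(X, a=0.1):
    """Hypothetical mapping Phi = Id + u with u(X) = a*X (uniform expansion)."""
    return (1.0 + a) * X

def J(X, a=0.1):
    """Jacobian determinant of Phi for the expansion above (2D)."""
    return (1.0 + a) ** 2 * np.ones(len(X))

# Midpoint quadrature on the reference square Omega = [-1, 1]^2
n = 200
s = np.linspace(-1.0, 1.0, n, endpoint=False) + 1.0 / n
X = np.stack(np.meshgrid(s, s, indexing='ij'), axis=-1).reshape(-1, 2)
w = (2.0 / n) ** 2

Psi = np.sum(I(Phi(X)) * J(X)) * w       # int_Omega I(Phi(X)) J dOmega
print(Psi)
```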
Encoding the shape of the lung in low-dimensional spaces
First investigations - SVD
Conclusion
Perspectives for patient-specific applications
Technical perspectives
\[ \boldsymbol{u}_{m+1} = \sum_{i=1}^{m} \underbrace{(\lambda_i + \lambda_{m+1} ~ \overline{\boldsymbol{u}}_i^T \overline{\boldsymbol{u}}_{m+1})}_{\mathring{\lambda}_i} \overline{\boldsymbol{u}}_i + \lambda_{m+1} \underbrace{\left(\overline{\boldsymbol{u}}_{m+1} - \sum_{i=1}^{m} ( \overline{\boldsymbol{u}}_i^T \overline{\boldsymbol{u}}_{m+1}) \overline{\boldsymbol{u}}_i\right)}_{\mathring{\overline{\boldsymbol{u}}}_{m+1}} \]
\[\overline{\boldsymbol{u}}_{m+1} \longleftarrow \frac{\mathring{\overline{\boldsymbol{u}}}_{m+1}}{\Vert\mathring{\overline{\boldsymbol{u}}}_{m+1} \Vert_{\mathbb{K}}} \]
\[ \lambda_{m+1} \longleftarrow \lambda_{m+1} \Vert\mathring{\overline{\boldsymbol{u}}}_{m+1} \Vert_{\mathbb{K}} \] \[ \{\lambda_i\}_{i\in\mathopen{~[\!\![~}1,m\mathclose{~]\!\!]}} \gets \{\mathring{\lambda}_i\}_{i\in\mathopen{~[\!\![~}1,m\mathclose{~]\!\!]}} \]
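A NumPy sketch of this enrichment-and-orthonormalisation step, assuming both the projections and the \(\Vert \cdot \Vert_{\mathbb{K}}\) norm are induced by the same symmetric positive-definite matrix K (K = identity recovers the Euclidean products written above):

```python
import numpy as np

def enrich_and_orthonormalise(U, lam, u_new, lam_new, K):
    """Append a new space mode to a K-orthonormal basis U (columns) and re-orthonormalise.

    U: (n, m) previous modes, lam: (m,) amplitudes,
    u_new: (n,) candidate mode with amplitude lam_new, K: (n, n) metric matrix.
    """
    coeffs = U.T @ (K @ u_new)               # projections  u_i^T K u_{m+1}
    lam = lam + lam_new * coeffs             # updated amplitudes of the old modes
    u_orth = u_new - U @ coeffs              # orthogonal part of the new mode
    norm = np.sqrt(u_orth @ (K @ u_orth))    # ||.||_K norm
    U = np.column_stack([U, u_orth / norm])  # normalised new mode appended
    lam = np.append(lam, lam_new * norm)     # rescaled new amplitude
    return U, lam
```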
\[\Leftrightarrow\int_{I\times\Omega}\left(\boldsymbol{u} - \boldsymbol{\overline{u}}_1 \lambda_1 \right)\left( \boldsymbol{\overline{u}}^* \lambda^* \right) ~\mathrm{d}\Omega\,\mathrm{d}t = 0 \quad \forall \lambda^* \in \mathcal{I}, ~\forall \boldsymbol{\overline{u}}^* \in \mathcal{U}\]
\[\Leftrightarrow\begin{cases} \lambda_1\left(t \right) = \frac{\int_{\Omega}\boldsymbol{u}\left(\boldsymbol{x},t \right)\boldsymbol{\overline{u}}_1 ~\mathrm{d}\Omega}{\int_{\Omega} \boldsymbol{\overline{u}}_1^2 ~\mathrm{d}\Omega} = f\left( \boldsymbol{\overline{u}}_1 \right) \\ \boldsymbol{\overline{u}}_1\left(\boldsymbol{x} \right) =\frac{\int_{I}\boldsymbol{u}\left(\boldsymbol{x},t \right) \lambda_1 \,\mathrm{d}t}{\int_{I} \lambda_1^2 \,\mathrm{d}t} = g\left( \lambda_1 \right) \end{cases}\]
\[ \Leftrightarrow \lambda_1 = f \circ g \left( \lambda_1 \right)\]
\[ \Leftrightarrow \underbrace{\int_{\Omega} \boldsymbol{u}\int_{I}\boldsymbol{u}\,\lambda_1 \,\mathrm{d}t ~\mathrm{d}\Omega}_{T\left(\lambda_1 \right)} = \underbrace{\frac{\int_{\Omega} \left(\int_I \boldsymbol{u}\,\lambda_1\,\mathrm{d}t \right)^2 ~\mathrm{d}\Omega}{\int_I \lambda_1^2\,\mathrm{d}t}}_{\omega^2}\lambda_1\]
with
\(R\left( \lambda_1 \right) = \omega^2\)
Eigenvalue problem
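A toy discrete analogue of this fixed point (unit quadrature weights, random low-rank snapshot field as a placeholder); it is simply a power iteration and recovers the dominant singular pair:

```python
import numpy as np

rng = np.random.default_rng(0)
U_snap = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 100))  # u(x_k, t_l)

lam = rng.standard_normal(U_snap.shape[1])    # initial guess for lambda_1(t_l)
for _ in range(100):                          # fixed point  lambda_1 = f o g (lambda_1)
    u_bar = U_snap @ lam / (lam @ lam)        # g: space mode for the current lambda_1
    lam = U_snap.T @ u_bar / (u_bar @ u_bar)  # f: time function for the current u_bar

omega2 = (U_snap @ lam) @ (U_snap @ lam) / (lam @ lam)   # Rayleigh quotient R(lambda_1)
print(omega2, np.linalg.svd(U_snap, compute_uv=False)[0] ** 2)   # both ~ sigma_1^2
```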
In practice
Writing linear elasticity with Einstein notation:
Computing the non-linear loss with einsum relies on the tensor decomposition and on broadcasting to remain efficient.
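An illustrative PyTorch sketch (all shapes and values are placeholders) of how einsum exploits the separated format for a quadratic term: the m x m mode interactions are integrated once, and the parameter dependence is recovered by broadcasting instead of assembling the full field:

```python
import torch

m, n_q, n_mu = 8, 1000, 64           # modes, quadrature points, parameter samples
eps = torch.rand(m, n_q, 3, 3)       # strain of each space mode at the quadrature points
C = torch.rand(3, 3, 3, 3)           # elasticity tensor (placeholder values)
w = torch.full((n_q,), 1e-3)         # quadrature weights
lam = torch.rand(m, n_mu)            # parameter functions sampled on the grid

# Mode-mode stiffness k_ij = int eps_i : C : eps_j dOmega, integrated once
k = torch.einsum('iqab,abcd,jqcd,q->ij', eps, C, eps, w)

# Internal energy for every parameter sample, broadcast over the modes
E_int = 0.5 * torch.einsum('ij,ip,jp->p', k, lam, lam)
print(E_int.shape)                   # torch.Size([64])
```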
Although the displacement is represented through a tensor decomposition \(\boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \sum_{i=1}^{m}\boldsymbol{\overline{u}}_i\left(\boldsymbol{x}\right) \prod_{j=1}^{\beta}\lambda_i^j(\mu^j)\), some non-linear mappings of that function cannot be computed mode by mode; they require building the full-order solution tensor, which leads to prohibitive training costs.
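A small sketch of the storage gap this implies (sizes are arbitrary placeholders): the decomposition stays cheap, whereas a general point-wise non-linear mapping would first need the full-order tensor.

```python
import torch

m, n_x, n_mu = 8, 200_000, 1_000            # modes, space dofs, parameter samples
u_bar = torch.rand(m, n_x)                  # space modes
lam = torch.rand(m, n_mu)                   # parameter functions

separated = u_bar.numel() + lam.numel()     # storage of the separated representation
full = n_x * n_mu                           # entries of the full-order tensor u(x, mu)
print(separated, full, full // separated)   # ~1.6e6 vs 2e8 entries here

# Building it explicitly (needed for general non-linear point-wise mappings) would be
# u_full = torch.einsum('ix,ip->xp', u_bar, lam)   # allocates the full tensor
```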
More physically sound behaviour laws
Sparse sampling of the hypercube
Convergence process
Minimisation problem
\[\omega = \arg \min\underbrace{ \int_{\omega} I\left( x \right) \mathrm{d}\omega}_{\Psi\left(\omega\right)}\]
With the requirement that \(I\left( x \right) \begin{cases} < 0 ~\forall x \in \mathcal{S} \\ > 0 ~\forall x \not \in \mathcal{S} \end{cases}\), with \(\mathcal{S}\) the image footprint of the shape to be registered.
The domain \(\omega\) can be described by the mapping \[\Phi\left(X\right) = X + u\left(X\right)\] from a reference shape \(\Omega\), leading to \[\int_{\omega} I\left( x \right) \mathrm{d}\omega = \int_{\Omega} I \circ \Phi \left( X \right) J \,\mathrm{d}\Omega\]
\(D\Psi\left(\omega\right)\left(\boldsymbol{u}^*\right) = \int_{\Omega} \left(\nabla I \circ \Phi\right) \cdot u^* J \mathrm{d}\Omega + \underbrace{\int_{\Omega}I \circ \Phi\left(X\right) (F^T)^{-1} : \nabla u^* J \mathrm{d}\Omega}_{\int_{\omega}I\left(x\right) \mathrm{div}\left( u^* \circ \Phi \right) \mathrm{d}\omega}\)
Note that \[\int_{\omega}I\left(x\right) \mathrm{div}\left( u^* \circ \Phi \right) \mathrm{d}\omega = \int_{\partial \omega} I \, u^* \cdot n ~\mathrm{d}\left(\partial \omega\right)\]
\[D\Psi\left(u\right)\left(v^*\right) = \left(\nabla^H \Psi \left(u\right), v^* \right)_H\]