VPH 2024 - Stuttgart
LMS, École Polytechnique, Paris, France
September 4, 2024
Objectives
Requirements
\[ \begin{cases} \boldsymbol{\nabla}\cdot \mathcal{\boldsymbol{\sigma}}+ \boldsymbol{f} = 0 \qquad & \mathrm{in} \ \Omega \\ \mathcal{\boldsymbol{\sigma}}\cdot \boldsymbol{n} = \boldsymbol{F} & \text{on } \partial \Omega_{N} \\ \boldsymbol{u} = \boldsymbol{u}_d \qquad \qquad & \text{on } \partial \Omega_{d} \end{cases} \label{eq:MechPb} \]
Behaviour
\[ \begin{cases} \boldsymbol{\nabla}\cdot \mathcal{\boldsymbol{\sigma}}+ \boldsymbol{f}\left(\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = 0 \qquad & \mathrm{in} \ \Omega \\ \mathcal{\boldsymbol{\sigma}}\cdot \boldsymbol{n} = \boldsymbol{F}\left(\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) & \text{on } \partial \Omega_{N} \\ \boldsymbol{u} = \boldsymbol{u}_d \qquad \qquad & \text{on } \partial \Omega_{d} \\ \end{cases} \label{eq:MechPb2} \]
Behaviour
Reduced-order model (ROM)
\[ \definecolor{BleuLMS}{RGB}{1, 66, 106} \definecolor{VioletLMS}{RGB}{77, 22, 84} \definecolor{TealLMS}{RGB}{0, 103, 127} \definecolor{BleuLMS2}{RGB}{0, 169, 206} \definecolor{BleuLMPS}{RGB}{105, 144, 255} \definecolor{accentcolor}{RGB}{1, 66, 106} \definecolor{GreenLMS}{RGB}{0,103,127} \definecolor{LGreenLMS}{RGB}{67,176,42} \definecolor{RougeLMS}{RGB}{206,0,55} \]
\[ \mathcal{U}_h = \left\{\boldsymbol{u}_h \; | \; \boldsymbol{u}_h \in \text{Span}\left( \left\{ N_i^{\Omega}\left(\boldsymbol{x} \right)\right\}_{i \in \mathopen{~[\!\![~}1,N\mathclose{~]\!\!]}} \right)^d \text{, } \boldsymbol{u}_h = \boldsymbol{u}_d \text{ on }\partial \Omega_d \right\} \]
\[ \boldsymbol{u} \left(x_{0,0,0} \right) = \sum\limits_{i = 0}^C \sum\limits_{j = 0}^{N_i} \sigma \left( b_{i,j}+ \sum\limits_{k = 0}^{M_{i,j}} \omega_{i,j,k}~ x_{i,j,k} \right) \]
Solving the mechanical problem amounts to finding the continuous displacement field minimising the potential energy \[E_p\left(\boldsymbol{u}\right) = \frac{1}{2} \int_{\Omega}\mathcal{\boldsymbol{\varepsilon}}: \mathbb{C} : \mathcal{\boldsymbol{\varepsilon}}~\mathrm{d}\Omega- \int_{\partial \Omega_N}\boldsymbol{F}\cdot \boldsymbol{u} ~\mathrm{d}A- \int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{u}~\mathrm{d}\Omega. \]
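As a minimal illustration (not the authors' implementation), the sketch below minimises this potential energy for a 1D clamped bar, with a small PyTorch network standing in for the displacement field; the material data, loads and network size (`E_mod`, `area`, `f_body`, `F_tip`, `net`) are assumed values for the example only.

```python
import torch

torch.manual_seed(0)
E_mod, area, length = 1.0, 1.0, 1.0   # material and geometry (assumed values)
f_body, F_tip = 1.0, 0.5              # body force and end traction (assumed values)

net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))

def u(x):
    # Multiplying by x enforces the Dirichlet condition u(0) = 0 exactly.
    return x * net(x)

x = torch.linspace(0.0, length, 200, requires_grad=True).reshape(-1, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    u_x = u(x)
    du_dx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    # Uniform-grid quadrature of E_p = 1/2 ∫ E A (u')² dx − ∫ f u dx − F u(L)
    internal = 0.5 * E_mod * area * (du_dx ** 2).mean() * length
    external = f_body * u_x.mean() * length + F_tip * u(torch.tensor([[length]])).squeeze()
    (internal - external).backward()
    opt.step()
```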
In practice
Degrees of freedom (Dofs)
| Dofs (initially) | \(569~438\) |
|---|---|
| Training time (s) | \(7\) |
| GPU (V100) | \(1\) |
| CPUs | \(40\) |
More details:
Katerina Skardova at 4pm tomorrow in 02.011
Low-rank approximation of the solution to avoid the curse of dimensionality with \(\beta\) parameters
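As an illustrative count (not taken from the slides): with \(N\) spatial dofs, \(P\) grid points per parameter and \(m\) modes, a full tensor-product discretisation carries \(N\,P^{\beta}\) unknowns, whereas a separated representation only needs \(m\left(N + \beta P\right)\), i.e. linear rather than exponential growth in \(\beta\):

\[ \underbrace{N\,P^{\beta}}_{\text{full tensor-product grid}} \quad \longrightarrow \quad \underbrace{m\left(N + \beta P\right)}_{\text{rank-}m\text{ separated representation}} \]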
Full-order discretised model
Reduced-order model
Finding the reduced-order basis
PGD: (Chinesta et al., 2011; Ladevèze, 1985)
Tensor decomposition
\[\textcolor{VioletLMS}{\boldsymbol{u}}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\mu}\right) = \sum\limits_{i=1}^{m}\textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i\left(\boldsymbol{x}\right)}\textcolor{LGreenLMS}{\lambda_i\left(\mu\right)}\]
Discretised problem
PGD: (Chinesta et al., 2011; Ladevèze, 1985)
Tensor decomposition
\[ \textcolor{VioletLMS}{\boldsymbol{u}}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right) = \sum\limits_{i=1}^m \textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i(\boldsymbol{x})} ~\textcolor{LGreenLMS}{\prod_{j=1}^{\beta}\lambda_i^j(\mu^j)} \]
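A minimal sketch of how this separated representation can be evaluated once the discrete modes are available; the array shapes and names (`space_modes`, `param_modes`) are illustrative assumptions, not the actual data structures.

```python
import numpy as np

n_nodes, n_modes, beta = 1000, 5, 3
rng = np.random.default_rng(0)
space_modes = rng.random((n_modes, n_nodes))                 # ū_i at the mesh nodes
param_modes = [rng.random(n_modes) for _ in range(beta)]     # λ_i^j(μ^j), already evaluated

def evaluate_pgd(space_modes, param_modes):
    # Weight of each mode = product of its β parameter functions.
    weights = np.prod(np.stack(param_modes, axis=0), axis=0)  # shape (m,)
    # Weighted sum of the space modes gives the displacement at every node.
    return weights @ space_modes                              # shape (n_nodes,)

u_h = evaluate_pgd(space_modes, param_modes)
```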
Discretised problem
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \overline{\boldsymbol{u}}(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda^j(\mu^j) \]
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \textcolor{RougeLMS}{\sum\limits_{i=1}^{2}} \overline{\boldsymbol{u}}_{\textcolor{RougeLMS}{i}}(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda_{\textcolor{RougeLMS}{i}}^j(\mu^j) \]
\[ \boldsymbol{u}\left(\boldsymbol{x}, \left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}\right) = \sum\limits_{i=1}^{\textcolor{RougeLMS}{m}} \overline{\boldsymbol{u}}_i(\boldsymbol{x}) ~\prod_{j=1}^{\beta}\lambda_i^j(\mu^j) \]
Greedy algorithm
The PGD is built greedily: the reduced-order basis is enriched one mode at a time until the approximation is accurate enough.
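A simplified algebraic analogue of this greedy enrichment (my own sketch with a matrix stand-in for the discretised field, not the actual weak-form PGD solver): each new rank-1 mode is obtained by an alternating fixed point on the current residual, then subtracted before the next enrichment.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.random((200, 50))   # stand-in for the discretised field over (space, parameter)

def greedy_modes(F, n_modes=5, n_fp=20):
    space_modes, param_modes = [], []
    residual = F.copy()
    for _ in range(n_modes):
        u = rng.random(F.shape[0])            # initial guess for the new space mode
        lam = rng.random(F.shape[1])          # initial guess for the new parameter mode
        for _ in range(n_fp):                 # alternating fixed-point iterations
            lam = residual.T @ u / (u @ u)    # best λ for fixed ū
            u = residual @ lam / (lam @ lam)  # best ū for fixed λ
        space_modes.append(u)
        param_modes.append(lam)
        residual = residual - np.outer(u, lam)  # enrichment: subtract the captured mode
    return np.array(space_modes), np.array(param_modes)

U, lam_modes = greedy_modes(F)
```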
Note
Graphical implementation of Neural Network PGD
\[ \boldsymbol{u}\left(\textcolor{BleuLMPS}{\boldsymbol{x}}, \textcolor{LGreenLMS}{\left\{\mu_i\right\}_{i \in \mathopen{~[\!\![~}1, \beta \mathclose{~]\!\!]}}}\right) = \sum\limits_{i=1}^m \textcolor{BleuLMPS}{\overline{\boldsymbol{u}}_i(\boldsymbol{x})} ~\textcolor{LGreenLMS}{\prod_{j=1}^{\beta}\lambda_i^j(\mu^j)} \]
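A hedged PyTorch sketch of one way such an interpretable architecture could be wired: each mode combines one space sub-network with one sub-network per parameter, their outputs are multiplied, and the modes are summed. The class name, widths and mode count are assumptions, not the authors' implementation.

```python
import torch

class NNPGD(torch.nn.Module):
    def __init__(self, n_modes=3, n_params=2, width=20):
        super().__init__()
        def mlp():
            return torch.nn.Sequential(
                torch.nn.Linear(1, width), torch.nn.Tanh(), torch.nn.Linear(width, 1)
            )
        # One space sub-network per mode ...
        self.space = torch.nn.ModuleList([mlp() for _ in range(n_modes)])
        # ... and one sub-network per (mode, parameter) pair.
        self.param = torch.nn.ModuleList(
            [torch.nn.ModuleList([mlp() for _ in range(n_params)]) for _ in range(n_modes)]
        )

    def forward(self, x, mus):
        # x: (N, 1) spatial points; mus: list of β tensors of shape (N, 1).
        out = torch.zeros_like(x)
        for space_i, params_i in zip(self.space, self.param):
            mode = space_i(x)                  # ū_i(x)
            for net_ij, mu_j in zip(params_i, mus):
                mode = mode * net_ij(mu_j)     # × λ_i^j(μ^j)
            out = out + mode
        return out

model = NNPGD()
u = model(torch.rand(64, 1), [torch.rand(64, 1), torch.rand(64, 1)])
```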
Interpretable NN-PGD
Illustration of the surrogate model in use
Parameters
NN-PGD
Strategy
Note
Conclusion
Perspectives for patient-specific applications
Technical perspectives