A Neural Network approach for the generation of Transfer Operators in Multilevel Solvers

TOMASI, CLAUDIO
2022-02-24

Abstract

In this thesis, we investigate the combination of Multigrid methods and Neural Networks, starting from Finite Element discretizations of Partial Differential Equations. Multigrid methods are among the fastest numerical methods for solving elliptic equations. They compute the solution using different levels of approximation in a multilevel hierarchy. The key point is to define suitable transfer operators that move information between these levels: such operators are crucial for the fast convergence of Multigrid, but they are generally not known in advance. Here, we propose Neural Network models for learning transfer operators, and we build a multilevel hierarchy based on the output of the predictive model.

After a preliminary study in one-dimensional scenarios, we define our training set by extracting information from the geometry and from the operator matrices: the features are taken from the mass matrix and the target from the L2-projection. We then customize the loss function of the model to include knowledge about the transfer operators, so that the network solves a constrained problem that enforces domain properties on the predictions. The application of this model in a Multigrid context yields good convergence, motivating the transition to two-dimensional problems.

Given the increased complexity of the data, we first investigate the accuracy of the predictions by testing different network architectures and different combinations of parameters. We then focus on the study of convergence, comparing our strategy with existing Multigrid methods, specifically the Semi-Geometric Multigrid and the Algebraic Multigrid. A major issue to be faced is that feedforward Neural Networks work only with fixed input and output dimensions. We therefore implement and compare different solutions to this problem: we extend the node patches when the neighborhood of a node contains fewer nodes than expected, and we decompose the feature extraction when the node patch is larger than the network input. To validate the procedure in more general settings, we test our method on several geometries, considering structured and unstructured grids, without requiring a grid-specific implementation.

In the last part of this work, we focus on problems with variable diffusion coefficients, where stiffness-matrix information is added to the training process. This strategy achieves faster convergence than transfer operators based on geometric data alone. Future work should be devoted to extending this Neural Network approach to three-dimensional scenarios and to constructing grid operators for an automatic definition of multilevel solvers, providing a portable solution in scientific computing.
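As an illustration of the constrained training described in the abstract, the sketch below shows one plausible way a transfer-operator loss could be customized. It assumes a Keras-style model, and it uses a row-sum penalty as the "domain property": operators derived from the L2-projection reproduce constant functions, so each row of the operator should sum to one. The function name, the `row_sum_weight` parameter, and the exact choice of constraint are assumptions for illustration, not necessarily those used in the thesis.

```python
import tensorflow as tf

def constrained_loss(row_sum_weight=1.0):
    """Mean-squared error plus a penalty pushing each predicted
    operator row to sum to one (partition of unity).

    Hypothetical sketch: the actual constraints used in the thesis
    may differ.
    """
    def loss(y_true, y_pred):
        # Data-fitting term against the L2-projection target.
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        # Constraint term: rows summing to 1 means the learned
        # transfer operator reproduces constant functions exactly.
        row_sums = tf.reduce_sum(y_pred, axis=-1)
        penalty = tf.reduce_mean(tf.square(row_sums - 1.0))
        return mse + row_sum_weight * penalty
    return loss
```

In a Keras workflow, such a loss can be passed directly to the model, e.g. `model.compile(optimizer="adam", loss=constrained_loss(0.1))`, turning the unconstrained regression into the penalized, constraint-aware problem the abstract describes.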
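The fixed-dimension issue mentioned in the abstract can also be pictured with a small helper. In this sketch, undersized patches are padded up to the fixed network input size k, while oversized patches are decomposed into k-sized chunks that are evaluated separately and reassembled afterwards. The names are hypothetical, and zero-padding is a simplification: the thesis extends patches with additional neighborhood information rather than zeros.

```python
import numpy as np

def fixed_size_patches(values, k):
    """Map a variable-size node patch onto fixed-size network inputs.

    Returns a list of length-k arrays:
    - at most one padded array if the patch has <= k entries
      (stand-in for the thesis's patch extension);
    - several k-sized chunks if the patch has more than k entries
      (the decomposition of the feature extraction).
    """
    values = np.asarray(values, dtype=float)
    if len(values) <= k:
        return [np.pad(values, (0, k - len(values)))]
    # Decompose into consecutive chunks of size k; the last chunk
    # is padded so every piece matches the network input dimension.
    chunks = [values[i:i + k] for i in range(0, len(values), k)]
    chunks[-1] = np.pad(chunks[-1], (0, k - len(chunks[-1])))
    return chunks
```

Each returned array matches the network's fixed input size, so one feedforward model can serve every node of a structured or unstructured grid without a grid-specific implementation.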
Files in this record:

File: tomasi_thesis.pdf (open access)
Description: A Neural Network approach for the generation of Transfer Operators in Multilevel Solvers
Type: Doctoral thesis
Size: 12.98 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11571/1450893