A General Approach to Dropout in Quantum Neural Networks

Scala F.; Gerace D.
2023-01-01

Abstract

In classical machine learning (ML), “overfitting” occurs when a model learns the training data excessively well and consequently performs poorly on unseen data. A commonly employed countermeasure in ML is the so-called “dropout,” which prevents computational units from becoming too specialized, thereby reducing the risk of overfitting. With the advent of quantum neural networks (QNNs) as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits and the multiple embeddings of classical features employed to provide computational nonlinearity. Here, a generalized approach is presented to apply the dropout technique to QNN models, defining and analyzing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization. This study demonstrates the power of quantum dropout in enabling generalization and provides useful guidelines for determining the maximal dropout probability of a given model, based on overparametrization theory. It also highlights that quantum dropout does not affect features of the QNN models such as expressibility and entanglement. All these conclusions are supported by extensive numerical simulations and may pave the way to efficiently employing deep quantum machine learning (QML) models based on state-of-the-art QNNs.
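To make the core idea concrete: in a QNN, dropout can be realized by randomly skipping parametrized gates (replacing them with the identity) at each training step, rather than zeroing neuron activations as in classical networks. The following is a minimal toy sketch of this mechanism on a single-qubit chain of RY rotations, using only NumPy; all function names and the dropout probability are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def rot_y(theta):
    """Single-qubit RY(theta) rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])


def sample_drop_mask(n_gates, p_drop):
    """Keep each gate independently with probability 1 - p_drop."""
    return rng.random(n_gates) >= p_drop


def qnn_state(params, drop_mask):
    """Apply a chain of RY gates to |0>, skipping dropped gates.

    A dropped gate is simply not applied (equivalent to replacing
    it with the identity), mimicking gate dropout in a QNN layer.
    """
    state = np.array([1.0, 0.0])
    for theta, keep in zip(params, drop_mask):
        if keep:
            state = rot_y(theta) @ state
    return state


# Toy circuit: four trainable rotation angles, 25% dropout per gate.
params = np.array([0.3, 1.1, -0.7, 0.5])
mask = sample_drop_mask(len(params), p_drop=0.25)
state = qnn_state(params, mask)
print(np.round(np.abs(state) ** 2, 4))  # measurement probabilities
```

At inference time (as in classical dropout) all gates would be kept; here that corresponds to an all-`True` mask, under which the commuting RY chain collapses to a single rotation by the summed angle.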
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11571/1498297
Citations
  • Scopus: 1
  • Web of Science: 0