Sequential Model Correction for Nonlinear Inverse Problems
Arjas, Arttu; Sillanpää, Mikko J.; Hauptmann, Andreas S. (2023-10-19)
Society for Industrial and Applied Mathematics
Arjas, A., Sillanpää, M. J., & Hauptmann, A. S. (2023). Sequential model correction for nonlinear inverse problems. SIAM Journal on Imaging Sciences, 16(4), 2015–2039. https://doi.org/10.1137/23M1549286
https://rightsstatements.org/vocab/InC/1.0/
© by SIAM.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202311293432
Abstract
Inverse problems are in many cases solved with optimization techniques. When the underlying model is linear, first-order gradient methods are usually sufficient. With nonlinear models, due to nonconvexity, one must often resort to second-order methods that are computationally more expensive. In this work we aim to approximate a nonlinear model with a linear one and correct the resulting approximation error. We develop a sequential method that iteratively solves a linear inverse problem and updates the approximation error by evaluating it at the new solution. This treatment convexifies the problem and allows us to benefit from established convex optimization methods. We separately consider cases where the approximation is fixed over iterations and where the approximation is adaptive. In the fixed case we show theoretically under what assumptions the sequence converges. In the adaptive case, particularly considering the special case of approximation by first-order Taylor expansion, we show that with certain assumptions the sequence converges to a critical point of the original nonconvex functional. Furthermore, we show that with quadratic objective functions the sequence corresponds to the Gauss–Newton method. Finally, we showcase numerical results superior to the conventional model correction method. We also show that a fixed approximation can provide competitive results with considerable computational speed-up.
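The fixed-approximation variant described in the abstract can be illustrated with a minimal sketch: a nonlinear forward operator F is replaced by a fixed linear matrix A, and at each step the approximation error e_k = F(x_k) - A x_k is evaluated at the current iterate and folded into a Tikhonov-regularized linear subproblem. The operator, regularizer, and parameter names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sequential_model_correction(F, A, y, x0, n_iter=50, reg=1e-8):
    """Hedged sketch of sequential model correction with a fixed
    linear approximation A of a nonlinear forward operator F.

    At each iteration the approximation error
        e_k = F(x_k) - A x_k
    is evaluated at the current solution, and the convex subproblem
        x_{k+1} = argmin_x ||A x + e_k - y||^2 + reg * ||x||^2
    is solved in closed form (Tikhonov regularization is an
    illustrative choice; any convex regularizer could be used).
    """
    x = x0.copy()
    # Normal-equations matrix is fixed, so it is factored conceptually once.
    AtA = A.T @ A + reg * np.eye(A.shape[1])
    for _ in range(n_iter):
        e = F(x) - A @ x                        # current model error
        x = np.linalg.solve(AtA, A.T @ (y - e))  # convex linear subproblem
    return x

# Toy example: mildly nonlinear F, with A as its linear part.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
F = lambda x: A @ x + 0.1 * x**3
x_true = np.array([0.5, -0.3])
y = F(x_true)
x_rec = sequential_model_correction(F, A, y, np.zeros(2))
```

Because the linearized subproblem is convex, each step reduces to a single linear solve; the nonlinearity enters only through the cheap error evaluation F(x_k) - A x_k, which is the source of the speed-up the abstract mentions.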
Collections
- Open access [37920]