Numerical approximation methods for semilinear partial differential equations with gradient-dependent nonlinearities

In this thesis we study two numerical approximation methods for the estimation of solutions of a class of semilinear partial differential equations (PDEs) with gradient-dependent nonlinearities. In both cases the main goal is to overcome the so-called curse of dimensionality, which means that the computational effort required to approximate the solution grows exponentially in the dimension of the PDE to be solved.

The first method we consider is a multilevel Picard (MLP) approximation scheme, which is based on the reformulation of the PDE as a stochastic fixed point equation (SFPE). For this transfer from PDEs to the corresponding SFPEs we develop an adjusted Bismut-Elworthy-Li formula. We then analyse SFPEs in an abstract setting and prove existence and uniqueness of solutions by means of a Banach fixed point argument. We transfer the SFPE back to the considered PDE by proving that the SFPE solution is the unique viscosity solution of the PDE. We can therefore write PDE solutions as SFPE solutions, which justifies the construction of an MLP approximation scheme. We define and study this MLP approximation scheme and establish an upper bound on the approximation error. Moreover, in the setting of a smooth solution we prove, under certain assumptions, that the MLP approximation scheme does not suffer from the curse of dimensionality.

As a second approach to the numerical approximation of PDE solutions, we consider stochastic gradient descent (SGD) type optimization methods in the training of deep neural networks (DNNs) with the rectified linear unit (ReLU) activation function. We prove that, under the assumption of a constant target function and sufficiently small but not $L^1$-summable SGD step sizes, the expected risk of the considered SGD process in the training of DNNs converges to zero as the number of SGD steps tends to infinity.
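To indicate the structure of such an approximation, the following is a minimal sketch of the basic MLP recursion for a semilinear heat equation whose nonlinearity depends only on the solution value, based on the fixed-point formulation u(t,x) = E[g(x + W_{T-t})] + E[\int_t^T f(u(s, x + W_{s-t})) ds]. It is not the adjusted scheme developed in the thesis for gradient-dependent nonlinearities, and all names and parameters (mlp, g, f, M, n) are illustrative assumptions.

```python
import numpy as np

def mlp(t, x, n, M, T, g, f, rng):
    """Simplified multilevel Picard approximation of u(t, x) for
    u_t + (1/2) Laplace(u) + f(u) = 0 with terminal condition u(T, .) = g,
    via the stochastic fixed-point formulation quoted above (sketch only)."""
    if n == 0:
        return 0.0
    d = x.shape[0]
    # Monte Carlo estimate of the terminal-condition term E[g(x + W_{T-t})].
    num = M ** n
    W = rng.normal(size=(num, d)) * np.sqrt(T - t)
    est = np.mean(g(x + W))
    # Multilevel telescoping estimate of the nonlinear integral term:
    # sum over levels l of (T - t) * E[f(U_l) - f(U_{l-1})], with U_0 = 0.
    for l in range(n):
        num = M ** (n - l)
        for _ in range(num):
            s = t + (T - t) * rng.uniform()       # uniform random time in [t, T]
            y = x + rng.normal(size=d) * np.sqrt(s - t)
            diff = f(mlp(s, y, l, M, T, g, f, rng))
            if l > 0:
                diff -= f(mlp(s, y, l - 1, M, T, g, f, rng))
            est += (T - t) * diff / num
    return est

# Illustrative usage with assumed terminal condition and Lipschitz nonlinearity.
rng = np.random.default_rng(0)
g = lambda x: np.sum(np.square(x), axis=-1)
f = lambda u: u / (1.0 + np.abs(u))
u_approx = mlp(0.0, np.zeros(10), n=3, M=3, T=1.0, g=g, f=f, rng=rng)
```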

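The SGD convergence statement can likewise be illustrated by a small sketch: plain SGD for a one-hidden-layer ReLU network fitted to a constant target function, with step sizes gamma_n = c/n, which are eventually small but not summable. This is only an assumed toy setting (shallow network, squared risk, uniform input data), not the precise DNN framework or the proof setting of the thesis; all names and parameters are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_constant_target(d=2, width=16, steps=5000, batch=32, c=0.05,
                          target=1.0, seed=0):
    """SGD for a one-hidden-layer ReLU network approximating the constant
    target function x -> target, using step sizes gamma_n = c / n, which are
    eventually small but not summable (sum_n gamma_n = infinity)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))
    b1 = np.zeros(width)
    w2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)
    b2 = 0.0
    for n in range(1, steps + 1):
        gamma = c / n                            # small, non-summable step size
        X = rng.uniform(size=(batch, d))         # training data on [0, 1]^d
        H = relu(X @ W1.T + b1)                  # hidden layer, shape (batch, width)
        err = H @ w2 + b2 - target               # residual of the prediction
        # Gradients of the empirical risk (1 / (2 * batch)) * sum(err ** 2).
        grad_w2 = H.T @ err / batch
        grad_b2 = np.mean(err)
        dH = np.outer(err, w2) * (H > 0.0)       # backpropagation through ReLU
        grad_W1 = dH.T @ X / batch
        grad_b1 = dH.mean(axis=0)
        W1 -= gamma * grad_W1
        b1 -= gamma * grad_b1
        w2 -= gamma * grad_w2
        b2 -= gamma * grad_b2
    # Estimate the risk on fresh samples after training (illustrative only).
    X_test = rng.uniform(size=(1024, d))
    pred = relu(X_test @ W1.T + b1) @ w2 + b2
    return np.mean((pred - target) ** 2)

risk = train_constant_target()
```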