Department of Information Technology

Efficient Time-Marching Methods for the TDSE

We develop and study unitary time-propagation schemes for the time-dependent Schrödinger equation (TDSE) with an explicitly time-dependent potential. In one approach, the TDSE is viewed as a classical Hamiltonian system, and partitioned Runge-Kutta methods are applied. In an alternative approach, exponential integrators are used in combination with a Magnus series representation of the propagator.
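To illustrate the first approach: splitting the wave function as psi = q + i*p turns the TDSE i*psi' = H*psi into the classical Hamiltonian system q' = H p, p' = -H q, to which the Störmer-Verlet scheme (a second-order partitioned Runge-Kutta method) can be applied. The sketch below assumes a small dense, real symmetric H in NumPy for clarity; in practice H would be a large sparse or matrix-free operator, and the function name is ours, not from the cited work.

```python
import numpy as np

def verlet_step(H, q, p, dt):
    """One Stormer-Verlet step for the system q' = H p, p' = -H q,
    i.e. the real/imaginary split of i psi' = H psi with psi = q + i p.
    This is the classic symplectic, second-order partitioned RK method."""
    p_half = p - 0.5 * dt * (H @ q)       # half step in the momentum part
    q_new = q + dt * (H @ p_half)         # full step in the position part
    p_new = p_half - 0.5 * dt * (H @ q_new)  # second momentum half step
    return q_new, p_new
```

Because the scheme is symplectic, it nearly conserves the norm of psi over long times, which is the discrete analogue of the unitarity of the exact propagator.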
In [1], we refine exponential integrators based on the Magnus series representation for the case where the Hamiltonian consists only of blocks that are either time- or space-dependent. Hamiltonians of this kind arise for molecule-laser interactions within the so-called Franck-Condon approximation. For this case, we show that the fourth-order exponential integrator is very efficient and outperforms partitioned Runge-Kutta methods in a numerical comparison.
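For reference, a standard fourth-order Magnus integrator evaluates the Hamiltonian at the two Gauss-Legendre nodes of each step and adds a single commutator correction. The sketch below uses a dense matrix exponential for clarity; the efficient variants studied in [1] would instead exploit the time/space block structure of the Hamiltonian, and the function name here is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(H, t, dt, psi):
    """Fourth-order Magnus step for i psi' = H(t) psi.
    H is a callable returning the (Hermitian) Hamiltonian matrix at time t.
    Writing psi' = A(t) psi with A = -i H, the truncated Magnus series is
      Omega = dt/2 (A1 + A2) + sqrt(3)/12 dt^2 [A2, A1],
    with A1, A2 evaluated at the two Gauss points of the step."""
    c = np.sqrt(3) / 6.0
    A1 = -1j * H(t + (0.5 - c) * dt)
    A2 = -1j * H(t + (0.5 + c) * dt)
    Omega = 0.5 * dt * (A1 + A2) \
        + (np.sqrt(3) / 12.0) * dt**2 * (A2 @ A1 - A1 @ A2)
    # Omega is anti-Hermitian, so expm(Omega) is unitary: the norm of psi
    # is preserved exactly (up to rounding), step size notwithstanding.
    return expm(Omega) @ psi
```

Note that unitarity holds for any step size; the step size only controls the accuracy of the Magnus truncation, not the stability of the scheme.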

Adaptivity and Error Control

We have developed an h,p-adaptive Magnus-Lanczos propagator, in which the step size is chosen to control the error in the Magnus expansion, and the dimension of the Krylov space is chosen to control the error in the Lanczos approximation of the matrix exponential.
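The Lanczos part of such a propagator approximates exp(-i dt H) v in a small Krylov space spanned by v, Hv, ..., H^(m-1)v; in the h,p-adaptive scheme, the dimension m would be increased or decreased per step to meet an error tolerance, rather than fixed as in this minimal dense-matrix sketch (illustrative names, no reorthogonalization):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm(H, v, dt, m):
    """Approximate exp(-1j*dt*H) @ v in an m-dimensional Krylov space
    built by the three-term Lanczos recurrence (H Hermitian).
    Only the small m x m tridiagonal matrix T is exponentiated."""
    n = v.size
    V = np.zeros((n, m), dtype=complex)   # orthonormal Krylov basis
    alpha = np.zeros(m)                   # diagonal of T
    beta = np.zeros(m)                    # off-diagonal of T
    nrm = np.linalg.norm(v)
    V[:, 0] = v / nrm
    for j in range(m):
        w = H @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    # project the exponential: exp(-i dt H) v ~ ||v|| V exp(-i dt T) e1
    return nrm * (V @ (expm(-1j * dt * T) @ e1))
```

With m equal to the full dimension the approximation is exact; the point of the method is that a small m already suffices when dt*H has moderate spectral spread.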

We also consider the global error of the time-marching scheme using a posteriori error estimation theory. Based on the self-adjointness of the Hamiltonian, we have devised a global error estimate that does not require actually solving the dual problem (on which the a posteriori estimate is based).

By replacing the Lanczos algorithm with the Arnoldi algorithm, we can combine this theory with the optimized perfectly matched layer (PML) developed in the group, even though the self-adjointness of the Hamiltonian is lost when the domain is truncated by the absorbing layer.
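The Arnoldi process plays the same role as Lanczos but uses full Gram-Schmidt orthogonalization, so it remains valid when the PML-modified Hamiltonian is no longer Hermitian. A minimal sketch along the same lines as above (names are ours; a real PML operator would be sparse and complex-symmetric rather than a random dense matrix):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm(A, v, dt, m):
    """Approximate exp(dt*A) @ v via the Arnoldi process. A need not be
    Hermitian, e.g. A = -i*H with H including an absorbing PML layer.
    The projected matrix Hm is upper Hessenberg instead of tridiagonal."""
    n = v.size
    V = np.zeros((n, m + 1), dtype=complex)
    Hm = np.zeros((m + 1, m), dtype=complex)
    nrm = np.linalg.norm(v)
    V[:, 0] = v / nrm
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # full Gram-Schmidt against all
            Hm[i, j] = np.vdot(V[:, i], w)  # previous basis vectors
            w -= Hm[i, j] * V[:, i]
        hj = np.linalg.norm(w)
        Hm[j + 1, j] = hj
        if hj > 1e-14:
            V[:, j + 1] = w / hj
    e1 = np.zeros(m)
    e1[0] = 1.0
    return nrm * (V[:, :m] @ (expm(dt * Hm[:m, :m]) @ e1))
```

The price for generality is the extra orthogonalization work and storage of the full basis; the gain is that outgoing waves absorbed by the PML are handled correctly.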

Currently, we are also interested in combining these time-propagation results with an adaptive solver in the spatial coordinate.

Parallelization of the Lanczos Algorithm

When solving large-scale problems, the node points must be distributed across processes. Unfortunately, in a straightforward implementation of the Lanczos algorithm, the inner products computed in each iteration hamper parallel scalability, since each one requires a global synchronization. We therefore improve the parallel performance of exponential integrators by reducing the number of synchronization points.
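One well-known way to reduce synchronization, sketched here as an illustration of the idea rather than as the group's specific method, is to fuse the two inner products of each Lanczos iteration into a single reduction: since the residual w - alpha*v_j is orthogonal to the unit vector v_j, its norm satisfies ||w - alpha*v_j||^2 = <w,w> - alpha^2, so alpha and the next beta can be obtained from one combined reduction. In a distributed-memory run, the pair (<v_j,w>, <w,w>) would be a single length-2 allreduce instead of two separate ones; the serial NumPy sketch below only demonstrates the algebra.

```python
import numpy as np

def lanczos_fused(H, v, m):
    """Lanczos tridiagonalization (real symmetric H) with the two inner
    products per iteration fused into one reduction. beta_j is recovered
    from <w,w> and alpha_j instead of a second, separate norm computation."""
    n = v.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = H @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        # fused reduction: in MPI this pair would be ONE length-2 allreduce
        a, ww = V[:, j] @ w, w @ w
        alpha[j] = a
        if j < m - 1:
            # ||w - a*v_j||^2 = ww - a^2 because (w - a*v_j) is
            # orthogonal to the unit vector v_j by construction
            beta[j] = np.sqrt(ww - a * a)
            V[:, j + 1] = (w - a * V[:, j]) / beta[j]
    return V, alpha, beta
```

A caveat of such fused variants is that computing beta from a difference of two reduced quantities can be less robust near breakdown, which is part of the trade-off between numerical stability and communication cost.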

Updated  2017-02-04 16:20:18 by Kurt Otto.