This paper revisits the parallel-in-time algorithm from a nonlinear optimization perspective. As in the traditional 'Parareal' method, the time interval is partitioned into subintervals, and the local time integrations are carried out in parallel. The objective cost function quantifies the mismatch of local solutions between adjacent subintervals, and the resulting optimization problem is solved iteratively with gradient-based methods. All of the computational steps (forward solves, gradient evaluations, and Hessian-vector products) involve only embarrassingly parallel computations and are therefore highly scalable.
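The following is a minimal sketch (not the authors' implementation) of this optimization-based formulation, using JAX so that the gradients and Hessian-vector products come from automatic differentiation. The model problem, integrator, subinterval count, and all names (`rhs`, `integrate`, `cost`, `hvp`, step sizes) are illustrative assumptions: the unknowns are the initial states of each subinterval, and the cost penalizes the mismatch between the end state of one subinterval and the initial state of the next.

```python
import jax
import jax.numpy as jnp

def rhs(u):
    # Assumed example ODE right-hand side: linear decay u' = -u.
    return -u

def integrate(u0, dt=0.01, n_steps=10):
    # Forward-Euler integration over one subinterval (assumed integrator).
    def step(u, _):
        return u + dt * rhs(u), None
    u_end, _ = jax.lax.scan(step, u0, None, length=n_steps)
    return u_end

def cost(inits, u_start):
    # inits: (n_sub, dim) initial states of the subintervals (the unknowns).
    # The local integrations are independent, so they are mapped in parallel.
    ends = jax.vmap(integrate)(inits)
    # Mismatch between each subinterval's end state and the next subinterval's
    # initial state, plus enforcement of the global initial condition.
    gaps = ends[:-1] - inits[1:]
    return jnp.sum(gaps**2) + jnp.sum((inits[0] - u_start)**2)

# Gradients and Hessian-vector products via autodiff; each evaluation again
# requires only the embarrassingly parallel local integrations.
grad_cost = jax.grad(cost)

def hvp(inits, u_start, v):
    # Forward-over-reverse Hessian-vector product.
    return jax.jvp(lambda x: grad_cost(x, u_start), (inits,), (v,))[1]

# Simple gradient descent on the subinterval initial states (illustrative).
u_start = jnp.array([1.0])
inits = jnp.tile(u_start, (8, 1))  # 8 subintervals, 1D state
for _ in range(300):
    inits = inits - 0.1 * grad_cost(inits, u_start)
```

At the minimizer the subinterval solutions match at the interfaces, recovering the serial trajectory; in practice one would replace the plain gradient descent above with a Newton-type method driven by `hvp`.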
The feasibility of the proposed algorithm is studied on three different model problems, namely, the heat equation, the Arenstorf orbit, and the Lorenz model.