LQR Smoothing: Locally-Optimal Feedback Control for Systems with Non-Linear Dynamics and Non-Quadratic Cost

Jur van den Berg


This paper introduces the novel concept of LQR smoothing, which is the LQR equivalent of Kalman smoothing and consists of both a backward pass and a forward pass. In the backward pass the cost-to-go function is computed using the standard LQR Riccati equation that runs backward in time, and in the forward pass the cost-to-come function is computed using a Riccati equation that runs forward in time. The sum of the cost-to-go and the cost-to-come function gives the total-cost function, and we will show that the states for which the total-cost function is minimal constitute the minimum-cost trajectory for the linear-quadratic optimal control problem. This insight is used to construct a fast-converging iterative procedure to compute a locally-optimal feedback control policy for systems with non-linear dynamics and non-quadratic cost, where in each iteration the current minimal-total-cost states provide natural points about which the dynamics can be linearized and the cost quadratized. We demonstrate the potential of our approach on two illustrative non-linear control problems involving physical differential-drive robots and simulated quadrotor helicopters in environments with obstacles, and show that our approach converges in only about a third of the number of iterations required by existing approaches such as Iterative LQR.
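The backward/forward construction described in the abstract can be sketched for a scalar linear-quadratic problem. The snippet below (a self-contained illustration; the toy system and all variable names A, B, Q, R, Qf, x0, T are assumptions for this sketch, not the paper's code or notation) computes the cost-to-go with the standard backward Riccati recursion, the cost-to-come with a forward recursion, and checks that the state minimizing their sum at each time step coincides with the trajectory obtained by rolling out the LQR feedback policy:

```python
A, B = 1.2, 0.5           # dynamics: x_{t+1} = A x_t + B u_t
Q, R, Qf = 1.0, 0.1, 5.0  # stage cost Q x^2 + R u^2, final cost Qf x^2
x0, T = 2.0, 20           # fixed initial state and horizon

# Backward pass: cost-to-go v_t(x) = S_t x^2 and feedback gains K_t.
S = [0.0] * (T + 1)
K = [0.0] * T
S[T] = Qf
for t in range(T - 1, -1, -1):
    K[t] = A * S[t + 1] * B / (R + B * S[t + 1] * B)
    S[t] = Q + A * S[t + 1] * A - (A * S[t + 1] * B) ** 2 / (R + B * S[t + 1] * B)

# Forward pass: cost-to-come vbar_t(x) = a_t x^2 + b_t x + c_t, the minimum
# cost accumulated over steps 0..t-1 to reach state x at time t from x0.
a = [0.0] * (T + 1); b = [0.0] * (T + 1); c = [0.0] * (T + 1)
# Base case t = 1: the single control u = (x - A x0) / B is fully determined.
a[1] = R / B**2
b[1] = -2.0 * R * A * x0 / B**2
c[1] = Q * x0**2 + R * (A * x0 / B) ** 2
for t in range(1, T):
    # vbar_{t+1}(x') = min_u [ vbar_t(x) + Q x^2 + R u^2 ], x = (x' - B u) / A;
    # minimizing the quadratic in u analytically gives the recursion below.
    p = (a[t] + Q) / A**2
    alpha = p * B**2 + R
    a[t + 1] = p * R / alpha
    b[t + 1] = b[t] * R / (A * alpha)
    c[t + 1] = c[t] - B**2 * b[t] ** 2 / (4.0 * A**2 * alpha)

# Rolling out the optimal feedback policy u_t = -K_t x_t ...
x = x0
traj = [x0]
for t in range(T):
    x = (A - B * K[t]) * x
    traj.append(x)

# ... visits, at every step, exactly the state that minimizes the
# total-cost function, i.e. cost-to-come plus cost-to-go.
for t in range(1, T + 1):
    x_star = -b[t] / (2.0 * (a[t] + S[t]))
    assert abs(x_star - traj[t]) < 1e-9
print("minimum-total-cost states coincide with the optimal LQR trajectory")
```

In the non-linear, non-quadratic setting the paper addresses, these minimum-total-cost states are what supply the linearization and quadratization points in each iteration.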


Source Code

C++ source code of Extended LQR is available for download. The Extended LQR Library comes as an MS Visual Studio 2010 project with no external dependencies, so it should be very easy to compile. Please note the copyright notice contained in each of the files: the source code is free to use for academic and non-profit purposes only.


The video shows the Extended LQR algorithm running on a quadrotor helicopter simulation and on an iRobot Create differential-drive robot, in both simulation and physical experiments.