Line search
The projected gradient method within the GRAMPC algorithm (Algorithm 1) requires the solution of the line search problem (6) [2].
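In generic form, ignoring the projection as well as the additional optimization variables \(\mb{p}\) and \(T\), the problem amounts to a one-dimensional minimization of the cost along the current search direction; the following is a sketch of this idea, while the precise statement is given by (6) in [2]:

\[
\min_{\alpha > 0} \; \Phi(\alpha), \qquad \Phi(\alpha) := \bar J\bigl(\mb{u}^{j} - \alpha\, \mb{d}_{\mb{u}}^{j}\bigr)
\]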
GRAMPC implements two efficient strategies to solve this problem in an approximate manner. The following options apply to both line search methods that are detailed below (Adaptive line search and Explicit line search); a configuration sketch in C follows the list:
LineSearchType
: This option selects either the adaptive line search strategy (value adaptive) or the explicit approach (value explicit1 or explicit2).

LineSearchExpAutoFallback
: If this option is activated, the automatic fallback strategy is used in the case that the explicit formulas result in negative step sizes.

LineSearchMax
: This option sets the maximum value \(\alpha_{\max}\) of the step size \(\alpha\).

LineSearchMin
: This option sets the minimum value \(\alpha_{\min}\) of the step size \(\alpha\).

LineSearchInit
: Indicates the initial value \(\alpha_{\text{init}}>0\) for the step size \(\alpha\). If the adaptive line search is used, the sample point \(\alpha_2\) is set to \(\alpha_2 = \alpha_{\text{init}}\).

OptimParamLineSearchFactor
: This option sets the adaptation factor \(\gamma_{\mb{p}}\) that weights the update of the parameter vector \(\mb{p}\) against the update of the control \(\mb{u}\).

OptimTimeLineSearchFactor
: This option sets the adaptation factor \(\gamma_{T}\) that weights the update of the end time \(T\) against the update of the control \(\mb{u}\).
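For illustration, these options might be set from C via the grampc_setopt_* interface described in the GRAMPC paper [1]. The option names are those documented above; the exact function names and signatures are an assumption and may vary between GRAMPC versions.

```c
#include "grampc.h"

/* Illustrative line search configuration; grampc is assumed to be
 * initialized beforehand (e.g. via grampc_init). */
void configure_line_search(typeGRAMPC *grampc)
{
    /* select the explicit strategy (11); "adaptive" and "explicit2"
     * are the alternative values */
    grampc_setopt_string(grampc, "LineSearchType", "explicit1");

    /* use the automatic fallback if the explicit formula yields a
     * negative step size */
    grampc_setopt_string(grampc, "LineSearchExpAutoFallback", "on");

    /* bounds and initial value of the step size alpha */
    grampc_setopt_real(grampc, "LineSearchMax", 0.75);
    grampc_setopt_real(grampc, "LineSearchMin", 1e-8);
    grampc_setopt_real(grampc, "LineSearchInit", 1e-4);
}
```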
Adaptive line search#
An appropriate way to determine the step size is the adaptive line search approach from [3], in which a polynomial approximation of the cost \(\bar J\) is used and the search interval is adapted over the iterations. More precisely, the cost functional is evaluated at three sample points \(\alpha_1 < \alpha_2 < \alpha_3\) with \(\alpha_2=\frac{1}{2}\left(\alpha_1+\alpha_3\right)\), which are used to construct the quadratic polynomial approximation (7) of the cost.
Fig. 2 Adaptation of line search interval.
Subsequently, a step size \(\alpha^{j}\) is computed by minimizing the cost approximation (7). If necessary, the interval \([\alpha_1,\alpha_3]\) is adapted for the next gradient iteration in the following way
with the adaptation factor \(\kappa > 1\), the interval tolerance \(\varepsilon_{\alpha} \in (0,0.5)\), the absolute cost tolerance \(\varepsilon_{\phi}\in[0,\infty)\) for adapting the interval, and the interval bounds \(\alpha_{\max}>\alpha_{\min}>0\). The modification (8) of the line search interval tracks the minimum point \(\alpha^{j}\) of the line search problem whenever \(\alpha^{j}\) is either outside of the interval \([\alpha_1,\alpha_3]\) or close to one of the bounds \(\alpha_1\), \(\alpha_3\), as illustrated in Fig. 2. The adaptation factor \(\kappa\) accounts for scaling as well as shifting of the interval \([\alpha_1,\alpha_3]\) in the next gradient iteration if \(\alpha^{j}\) lies in the vicinity of the bounds of \([\alpha_1,\alpha_3]\), as specified by the interval tolerance \(\varepsilon_\alpha\). This adaptive strategy allows one to track the minimum of the line search problem (6) over the gradient iterations \(j\) and MPC steps \(k\), while guaranteeing a fixed number of operations in view of a real-time MPC implementation. The absolute tolerance \(\varepsilon_{\phi}\) on the difference \(|\Phi(\alpha_1)-\Phi(\alpha_3)|\) of the (scaled) costs at the interval bounds avoids oscillations of the interval width in regions where the cost function \(\bar{J}\) is almost constant.
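Minimizing the fitted quadratic admits a closed form. For the equidistant samples \(\alpha_2 = \frac{1}{2}(\alpha_1+\alpha_3)\), the standard three-point expression reads as follows (a sketch consistent with (7); the notation in the GRAMPC code may differ):

\[
\alpha^{j} \;=\; \alpha_2 + \frac{(\alpha_3-\alpha_1)\,\bigl(\Phi(\alpha_1)-\Phi(\alpha_3)\bigr)}{4\,\bigl(\Phi(\alpha_1)-2\,\Phi(\alpha_2)+\Phi(\alpha_3)\bigr)}
\]

A code sketch of the interval adaptation itself follows the option list below.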
The following options apply specifically to the adaptive line search strategy:
LineSearchAdaptAbsTol
: This option sets the absolute tolerance \(\varepsilon_{\phi}\) of the difference in costs at the interval bounds \(\alpha_1\) and \(\alpha_3\). If the difference in the (scaled) costs at these bounds falls below \(\varepsilon_{\phi}\), the adaptation of the interval is stopped in order to avoid oscillations.

LineSearchAdaptFactor
: This option sets the adaptation factor \(\kappa > 1\) in (8) that determines how much the line search interval can be adapted from one gradient iteration to the next.

LineSearchIntervalTol
: This option sets the interval tolerance \(\varepsilon_{\alpha} \in (0,0.5)\) in (8) that determines for which values of \(\alpha\) the adaptation is performed.

LineSearchIntervalFactor
: This option sets the interval factor \(\beta \in (0,1)\) that specifies the interval bounds \([\alpha_1,\alpha_3]\) according to \(\alpha_1 = \alpha_2 (1 - \beta)\) and \(\alpha_3 = \alpha_2 (1 + \beta)\), whereby the mid sample point is initialized as \(\alpha_2 = \alpha_\text{init}\).
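The adaptation logic described above might be sketched in C as follows. This is an illustrative reconstruction from the prose; the helper name and the exact form of (8) in the GRAMPC source may differ.

```c
#include <math.h>

/* Adapt the midpoint alpha_2 of the line search interval; the new bounds
 * then follow as alpha_1 = alpha_2*(1-beta) and alpha_3 = alpha_2*(1+beta). */
double adapt_midpoint(double alpha_j,            /* minimizer of the quadratic fit */
                      double a1, double a2, double a3,
                      double phi1, double phi3,  /* (scaled) costs at the bounds   */
                      double kappa, double eps_alpha, double eps_phi,
                      double alpha_min, double alpha_max)
{
    const double width = a3 - a1;

    /* cost almost constant over the interval: keep it fixed to avoid
     * oscillations of the interval width */
    if (fabs(phi1 - phi3) < eps_phi)
        return a2;

    if (alpha_j >= a3 - eps_alpha * width)       /* minimum near/above upper bound */
        a2 = fmin(kappa * a2, alpha_max);
    else if (alpha_j <= a1 + eps_alpha * width)  /* minimum near/below lower bound */
        a2 = fmax(a2 / kappa, alpha_min);

    return a2;
}
```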
Explicit line search
An alternative way to determine the step size, which further reduces the computational effort for time-critical problems, is the explicit line search approach originally discussed in [4] and adapted in [5] to the optimal control case. The motivation is to minimize the difference between two consecutive control updates \(u_k^{i|j}(\tau)\) and \(u_k^{i|j+1}(\tau)\) in the unconstrained case, additionally assuming the same step size \(\alpha^{i|j}\), i.e.
with \(\| z \|^2_{L^2_m[0,T]} = \langle z,z \rangle :=\int_0^T z^\mathsf{T}(t) z(t) \, {\rm d}t\).
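In a sampled implementation with \(N\) grid points and step \(\Delta t\), this inner product might be approximated by the trapezoidal rule, for example as follows (a hypothetical helper, not part of the GRAMPC API):

```c
/* Trapezoidal approximation of <z1, z2> = int_0^T z1(t)^T z2(t) dt for
 * m-dimensional signals sampled on N equidistant grid points. */
double inner_product_L2(const double *z1, const double *z2,
                        int N, int m, double dt)
{
    double sum = 0.0;
    for (int k = 0; k < N; ++k) {
        double zk = 0.0;
        for (int i = 0; i < m; ++i)   /* pointwise product z1(t_k)^T z2(t_k) */
            zk += z1[k * m + i] * z2[k * m + i];
        sum += (k == 0 || k == N - 1) ? 0.5 * zk : zk;  /* trapezoidal weights */
    }
    return dt * sum;
}
```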
Fig. 3 Motivation for the explicit line search strategy.
Fig. 3 illustrates the general idea behind (9). To solve (9), consider the following function
The minimum has to satisfy the stationarity condition
A suitable step size \(\alpha^{j}\) then follows as
An alternative step size for the control update can be obtained by reformulating (10) in the following way:
Subsequently, the new function \(\bar q(\alpha)\) is minimized with respect to the step size, leading to the similar solution
For the original problem (9), the solution
follows.
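For orientation, the two classical Barzilai–Borwein step sizes from [4], written with the \(L^2_m[0,T]\) inner product defined above and the differences \(\Delta \mb{u} = \mb{u}^{j} - \mb{u}^{j-1}\) and \(\Delta \mb{d} = \mb{d}^{j} - \mb{d}^{j-1}\), read as follows; (11) and (12) share this structure, although the exact GRAMPC expressions may differ:

\[
\alpha^{j}_{(1)} = \frac{\langle \Delta\mb{u}, \Delta\mb{d}\rangle}{\langle \Delta\mb{d}, \Delta\mb{d}\rangle},
\qquad
\alpha^{j}_{(2)} = \frac{\langle \Delta\mb{u}, \Delta\mb{u}\rangle}{\langle \Delta\mb{u}, \Delta\mb{d}\rangle}
\]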
In the GRAMPC implementation, both approaches (11) and (12) are available. In addition, the step size \(\alpha^{j}\) is bounded by the upper and lower values \(\alpha_{\max} > \alpha_{\min} > 0\). However, if the originally computed step size \(\alpha^{j}\) is less than zero [1], either the initial step size \(\alpha^{j} = \alpha_\text{init}\) or the automatic fallback strategy detailed in the next subsection is used to obtain a valid step size. The fallback strategy is enabled with the following option (only available for the explicit line search strategies) and is sketched in code below:
LineSearchExpAutoFallback
: If this option is activated, the automatic fallback strategy is used in the case that the explicit formulas result in negative step sizes.
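Combining the bounding and the fallback into one routine, a sketch with illustrative names (not the GRAMPC source) might read:

```c
#include <math.h>

/* Step size safeguarding as described above: if the explicit formula (11)
 * or (12) yields a non-positive value, alpha_init is used; with
 * LineSearchExpAutoFallback activated, the automatic fallback of the next
 * subsection would take its place. */
double safeguard_step(double alpha_explicit, double alpha_min,
                      double alpha_max, double alpha_init)
{
    if (alpha_explicit <= 0.0)
        return alpha_init;  /* fall back to the initial step size */
    return fmin(fmax(alpha_explicit, alpha_min), alpha_max);  /* clamp */
}
```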
Fallback strategy for explicit line search
While the initial step size can be used as a fallback solution if the explicit step size computation yields negative values, this often requires problem-specific tuning of \(\alpha_\text{init}\) to achieve optimal performance. As an alternative, GRAMPC implements an automatic fallback strategy that is based on the idea of using at most 1% of the control range defined by umax and umin. For this purpose, the maximum absolute value \(\mb{d}^{i|j}_{\mb{u},\text{max}} = \| \mb{d}_{\mb{u}} ^{i|j}(t) \|_{L^\infty}\) of the search direction \(\mb{d}_{\mb{u}} ^{i|j}(t)\) over the horizon is determined. Subsequently, the step size follows as the minimal step size required to perform a step of 1% with respect to the range of at least one control in at least one time step. Additionally, the step size is limited to 10% of the maximum step size \(\alpha_{\max}\). Since this strategy requires reasonable limits for the controls, it is only executed if LineSearchExpAutoFallback is activated and these limits are defined by the user. Furthermore, this strategy can only be used if OptimControl is switched on. In all other cases, the initial step size \(\alpha^{j}=\alpha_\text{init}\) is used as the fallback solution.
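For a single control with range defined by umin and umax, this rule might be sketched as follows. This is an illustrative reconstruction from the description above; the GRAMPC implementation applies the rule across all controls.

```c
#include <math.h>

/* Automatic fallback step size: move the most-changing control by 1% of
 * its range, but never exceed 10% of the maximum step size.
 * Assumes d_max > 0 (the L-infinity norm of the search direction). */
double fallback_step_size(double d_max, double umin, double umax,
                          double alpha_max)
{
    double alpha = 0.01 * (umax - umin) / d_max;
    return fmin(alpha, 0.1 * alpha_max);
}
```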
References
T. Englert, A. Völz, F. Mesmer, S. Rhein, and K. Graichen. A software framework for embedded nonlinear model predictive control using a gradient-based augmented Lagrangian approach (GRAMPC). Optimization and Engineering, 20(3):769–809, 2019. doi:10.1007/s11081-018-9417-2.
K. Graichen and B. Käpernick. A real-time gradient method for nonlinear model predictive control. In T. Zheng, editor, Frontiers of Model Predictive Control, pages 9–28. InTech, 2012.
J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141–148, 1988.
B. Käpernick and K. Graichen. Model predictive control of an overhead crane using constraint substitution. In Proceedings of the American Control Conference (ACC), pages 3973–3978, 2013.