crocoddyl  1.9.0
Contact RObot COntrol by Differential DYnamic programming Library (Crocoddyl)
SolverAbstract Class Reference [abstract]

Abstract class for optimal control solvers. More...

#include <crocoddyl/core/solver-base.hpp>

Inheritance diagram for SolverAbstract:

Public Member Functions

EIGEN_MAKE_ALIGNED_OPERATOR_NEW SolverAbstract (boost::shared_ptr< ShootingProblem > problem)
 Initialize the solver. More...
 
virtual void computeDirection (const bool recalc)=0
 Compute the search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\) for the current guess \((\mathbf{x}^k_s,\mathbf{u}^k_s)\). More...
 
double computeDynamicFeasibility ()
 Compute the dynamic feasibility \(\|\mathbf{f}_{\mathbf{s}}\|_{\infty,1}\) for the current guess \((\mathbf{x}^k,\mathbf{u}^k)\). More...
 
virtual const Eigen::Vector2d & expectedImprovement ()=0
 Return the expected improvement \(dV_{exp}\) from a given current search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\). More...
 
double get_cost () const
 Return the total cost.
 
const Eigen::Vector2d & get_d () const
 Return the LQ approximation of the expected improvement.
 
double get_dV () const
 Return the cost reduction \(dV\).
 
double get_dVexp () const
 Return the expected cost reduction \(dV_{exp}\).
 
double get_ffeas () const
 Return the feasibility of the dynamic constraints \(\|\mathbf{f}_{\mathbf{s}}\|_{\infty,1}\) of the current guess.
 
const std::vector< Eigen::VectorXd > & get_fs () const
 Return the gaps \(\mathbf{f}_{s}\).
 
bool get_inffeas () const
 Return the norm used for computing the feasibility (true for \(\ell_\infty\), false for \(\ell_1\)).
 
bool get_is_feasible () const
 Return the feasibility status of the \((\mathbf{x}_s,\mathbf{u}_s)\) trajectory.
 
std::size_t get_iter () const
 Return the number of iterations performed by the solver.
 
const boost::shared_ptr< ShootingProblem > & get_problem () const
 Return the shooting problem.
 
double get_steplength () const
 Return the step length \(\alpha\).
 
double get_stop () const
 Return the value computed by stoppingCriteria()
 
double get_th_acceptstep () const
 Return the threshold used for accepting a step.
 
double get_th_gaptol () const
 Return the threshold for accepting a gap as non-zero.
 
double get_th_stop () const
 Return the tolerance for stopping the algorithm.
 
double get_ureg () const
 Return the control regularization value.
 
const std::vector< Eigen::VectorXd > & get_us () const
 Return the control trajectory \(\mathbf{u}_s\).
 
double get_xreg () const
 Return the state regularization value.
 
const std::vector< Eigen::VectorXd > & get_xs () const
 Return the state trajectory \(\mathbf{x}_s\).
 
const std::vector< boost::shared_ptr< CallbackAbstract > > & getCallbacks () const
 Return the list of callback functions used for diagnostics.
 
virtual void resizeData ()
 Resize the solver data. More...
 
void set_inffeas (const bool inffeas)
 Modify the norm used for computing the feasibility.
 
void set_th_acceptstep (const double th_acceptstep)
 Modify the threshold used for accepting a step.
 
void set_th_gaptol (const double th_gaptol)
 Modify the threshold for accepting a gap as non-zero.
 
void set_th_stop (const double th_stop)
 Modify the tolerance for stopping the algorithm.
 
void set_ureg (const double ureg)
 Modify the control regularization value.
 
void set_us (const std::vector< Eigen::VectorXd > &us)
 Modify the control trajectory \(\mathbf{u}_s\).
 
void set_xreg (const double xreg)
 Modify the state regularization value.
 
void set_xs (const std::vector< Eigen::VectorXd > &xs)
 Modify the state trajectory \(\mathbf{x}_s\).
 
void setCallbacks (const std::vector< boost::shared_ptr< CallbackAbstract > > &callbacks)
 Set a list of callback functions used for the solver diagnostics. More...
 
void setCandidate (const std::vector< Eigen::VectorXd > &xs_warm=DEFAULT_VECTOR, const std::vector< Eigen::VectorXd > &us_warm=DEFAULT_VECTOR, const bool is_feasible=false)
 Set the solver candidate trajectories \((\mathbf{x}_s,\mathbf{u}_s)\). More...
 
virtual bool solve (const std::vector< Eigen::VectorXd > &init_xs=DEFAULT_VECTOR, const std::vector< Eigen::VectorXd > &init_us=DEFAULT_VECTOR, const std::size_t maxiter=100, const bool is_feasible=false, const double reg_init=1e-9)=0
 Compute the optimal trajectory \(\mathbf{x}^*_s,\mathbf{u}^*_s\) as lists of \(T+1\) and \(T\) terms. More...
 
virtual double stoppingCriteria ()=0
 Return a positive value that quantifies the algorithm termination. More...
 
virtual double tryStep (const double steplength=1)=0
 Try a predefined step length \(\alpha\) and compute its cost improvement \(dV\). More...
 

Protected Attributes

std::vector< boost::shared_ptr< CallbackAbstract > > callbacks_
 Callback functions.
 
double cost_
 Total cost.
 
Eigen::Vector2d d_
 LQ approximation of the expected improvement.
 
double dV_
 Cost reduction obtained by tryStep()
 
double dVexp_
 Expected cost reduction.
 
double ffeas_
 Feasibility of the dynamic constraints.
 
std::vector< Eigen::VectorXd > fs_
 Gaps/defects between shooting nodes.
 
bool inffeas_
 
bool is_feasible_
 Label that indicates if the iteration is feasible.
 
std::size_t iter_
 Number of iterations performed by the solver.
 
boost::shared_ptr< ShootingProblem > problem_
 Optimal control problem.
 
double steplength_
 Current applied step-length.
 
double stop_
 Value computed by stoppingCriteria()
 
double th_acceptstep_
 Threshold used for accepting a step.
 
double th_gaptol_
 Threshold limit to check non-zero gaps.
 
double th_stop_
 Tolerance for stopping the algorithm.
 
double tmp_feas_
 Temporary variable used for computing the feasibility.
 
double ureg_
 Current control regularization value.
 
std::vector< Eigen::VectorXd > us_
 Control trajectory.
 
bool was_feasible_
 Label that indicates if the previous iterate was feasible.
 
double xreg_
 Current state regularization value.
 
std::vector< Eigen::VectorXd > xs_
 State trajectory.
 

Detailed Description

Abstract class for optimal control solvers.

A solver resolves an optimal control problem of the form

\begin{eqnarray*} \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_{T} \\ \mathbf{u}^*_0,\cdots,\mathbf{u}^*_{T-1} \end{Bmatrix} = \arg\min_{\mathbf{x}_s,\mathbf{u}_s} && l_T (\mathbf{x}_T) + \sum_{k=0}^{T-1} l_k(\mathbf{x}_k,\mathbf{u}_k) \\ \operatorname{subject}\,\operatorname{to} && \mathbf{x}_0 = \mathbf{\tilde{x}}_0\\ && \mathbf{x}_{k+1} = \mathbf{f}_k(\mathbf{x}_k,\mathbf{u}_k)\\ && \mathbf{x}_k\in\mathcal{X}, \mathbf{u}_k\in\mathcal{U} \end{eqnarray*}

where \(l_T(\mathbf{x}_T)\) and \(l_k(\mathbf{x}_k,\mathbf{u}_k)\) are the terminal and running cost functions, respectively, \(\mathbf{f}_k(\mathbf{x}_k,\mathbf{u}_k)\) describes the evolution of the system, and the state and control admissible sets are defined by \(\mathbf{x}_k\in\mathcal{X}\), \(\mathbf{u}_k\in\mathcal{U}\). An action model, defined in the shooting problem, describes each node \(k\). Inside the action model, we specialize the cost functions, the system evolution and the admissible sets.

The main routines are computeDirection() and tryStep(). The former finds a search direction and typically computes the derivatives of each action model. The latter rolls out the dynamics and cost (i.e., the action) to try the search direction found by computeDirection(). Both functions use the current guess defined by setCandidate(). Finally, the solve() function defines when the search direction and step length are computed at each iterate. It also describes the globalization strategy (i.e., regularization) of the numerical optimization. A minimal usage sketch is given at the end of this section.

See also
solve(), computeDirection(), tryStep(), stoppingCriteria()

Definition at line 51 of file solver-base.hpp.
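
A minimal usage sketch of this interface follows. It assumes that `problem` is an already-built boost::shared_ptr<crocoddyl::ShootingProblem> and that SolverDDP (one of the concrete solvers implementing this class) lives under the header path shown; both are assumptions not documented on this page.

```cpp
#include <crocoddyl/core/solvers/ddp.hpp>  // assumed header path for SolverDDP

#include <boost/shared_ptr.hpp>
#include <iostream>
#include <vector>

// Solve the problem from the default (empty) initial guess and inspect the result.
void solve_once(const boost::shared_ptr<crocoddyl::ShootingProblem>& problem) {
  crocoddyl::SolverDDP solver(problem);   // concrete solver implementing SolverAbstract
  const bool converged = solver.solve();  // default guess, maxiter = 100

  std::cout << "converged: " << converged << "\n"
            << "iterations: " << solver.get_iter() << "\n"
            << "final cost: " << solver.get_cost() << "\n"
            << "stopping value: " << solver.get_stop() << std::endl;

  // Optimal trajectories: T+1 states and T controls.
  const std::vector<Eigen::VectorXd>& xs = solver.get_xs();
  const std::vector<Eigen::VectorXd>& us = solver.get_us();
  (void)xs;
  (void)us;
}
```

Warm starting works the same way: pass state and control guesses of \(T+1\) and \(T\) elements to solve() (or install them via setCandidate()), as documented below.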

Constructor & Destructor Documentation

◆ SolverAbstract()

SolverAbstract ( boost::shared_ptr< ShootingProblem > problem )
explicit

Initialize the solver.

Parameters
[in]  problem  shooting problem

Definition at line 18 of file solver-base.cpp.

Member Function Documentation

◆ solve()

virtual bool solve ( const std::vector< Eigen::VectorXd > &  init_xs = DEFAULT_VECTOR,
const std::vector< Eigen::VectorXd > &  init_us = DEFAULT_VECTOR,
const std::size_t  maxiter = 100,
const bool  is_feasible = false,
const double  reg_init = 1e-9 
)
pure virtual

Compute the optimal trajectory \(\mathbf{x}^*_s,\mathbf{u}^*_s\) as lists of \(T+1\) and \(T\) terms.

From an initial guess (init_xs, init_us), feasible or not, it iterates over computeDirection() and tryStep() until stoppingCriteria() falls below the threshold. It also describes the globalization strategy used during the numerical optimization; a schematic sketch of this loop is given below.

Parameters
[in]  init_xs  initial guess for the state trajectory with \(T+1\) elements (default [])
[in]  init_us  initial guess for the control trajectory with \(T\) elements (default [])
[in]  maxiter  maximum allowed number of iterations (default 100)
[in]  is_feasible  true if init_xs is obtained from integrating init_us (rollout) (default false)
[in]  reg_init  initial guess for the regularization value. Very low values are typically used with very good guess points (init_xs, init_us)
Returns
A boolean that describes if convergence was reached.

Implemented in SolverDDP, SolverFDDP, and SolverKKT.
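
The iteration scheme described above can be sketched as the solve() of a hypothetical derived solver. This is only a schematic illustration; it is not the implementation of SolverDDP, SolverFDDP, or SolverKKT (for instance, it always takes a full step instead of performing a line search), and the remaining pure virtual members are assumed to be implemented elsewhere.

```cpp
#include <crocoddyl/core/solver-base.hpp>

namespace crocoddyl {

// Hypothetical derived solver used only to illustrate the iteration scheme.
// Since computeDirection(), tryStep(), stoppingCriteria() and
// expectedImprovement() are not overridden here, the class stays abstract.
class MySolver : public SolverAbstract {
 public:
  explicit MySolver(boost::shared_ptr<ShootingProblem> problem) : SolverAbstract(problem) {}

  virtual bool solve(const std::vector<Eigen::VectorXd>& init_xs = DEFAULT_VECTOR,
                     const std::vector<Eigen::VectorXd>& init_us = DEFAULT_VECTOR,
                     const std::size_t maxiter = 100, const bool is_feasible = false,
                     const double reg_init = 1e-9) {
    setCandidate(init_xs, init_us, is_feasible);  // install the initial guess
    xreg_ = reg_init;                             // initial state regularization
    ureg_ = reg_init;                             // initial control regularization
    for (iter_ = 0; iter_ < maxiter; ++iter_) {
      computeDirection(true);                     // derivatives + search direction
      dV_ = tryStep(1.);                          // full step; real solvers line-search here
      if (stoppingCriteria() < th_stop_) {
        return true;                              // converged
      }
    }
    return false;                                 // maximum number of iterations reached
  }
};

}  // namespace crocoddyl
```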

◆ computeDirection()

virtual void computeDirection ( const bool  recalc)
pure virtual

Compute the search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\) for the current guess \((\mathbf{x}^k_s,\mathbf{u}^k_s)\).

You must call setCandidate() first in order to define the current guess. A current guess defines a state and control trajectory \((\mathbf{x}^k_s,\mathbf{u}^k_s)\) of \(T+1\) and \(T\) elements, respectively.

Parameters
[in]  recalc  true for recalculating the derivatives at the current state and control
Returns
The search direction \((\delta\mathbf{x},\delta\mathbf{u})\) and the dual lambdas as lists of \(T+1\), \(T\) and \(T+1\) lengths, respectively

Implemented in SolverDDP, and SolverKKT.

◆ tryStep()

virtual double tryStep ( const double  steplength = 1)
pure virtual

Try a predefined step length \(\alpha\) and compute its cost improvement \(dV\).

It uses the search direction found by computeDirection() to try a determined step length \(\alpha\). Therefore, it assumes that we have run computeDirection() first. Additionally, it returns the cost improvement \(dV\) along the predefined step length \(\alpha\).

Parameters
[in]  steplength  applied step length ( \(0\leq\alpha\leq1\))
Returns
the cost improvement

Implemented in SolverDDP, and SolverKKT.
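
As a hedged illustration of how tryStep() pairs with expectedImprovement(), the sketch below applies a classical DDP-style backtracking rule against the acceptance threshold returned by get_th_acceptstep(). The acceptance rule is an assumption here; the exact rule of each concrete solver may differ, and the candidate is assumed to have been set beforehand via setCandidate().

```cpp
#include <crocoddyl/core/solver-base.hpp>

// Backtracking line-search sketch built on the documented interface. `solver`
// is any concrete solver whose candidate was already set via setCandidate().
double line_search(crocoddyl::SolverAbstract& solver) {
  solver.computeDirection(true);                            // derivatives + search direction
  const Eigen::Vector2d& d = solver.expectedImprovement();  // LQ model of the improvement
  for (double alpha = 1.; alpha >= 1e-3; alpha *= 0.5) {
    const double dV = solver.tryStep(alpha);                   // measured cost improvement
    const double dVexp = alpha * (d(0) + 0.5 * alpha * d(1));  // predicted improvement
    if (dV > solver.get_th_acceptstep() * dVexp) {
      return alpha;  // accept this step length
    }
  }
  return 0.;  // no acceptable step length found
}
```

A real solver would then commit the accepted trial trajectories (e.g., via setCandidate()) and update the regularization before the next iterate.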

◆ stoppingCriteria()

virtual double stoppingCriteria ( )
pure virtual

Return a positive value that quantifies the algorithm termination.

This value typically represents the gradient norm, which tells us whether a local minimum has been reached. Strictly speaking, the stopping criterion depends on the search direction (computed by computeDirection()), but it could also depend on the chosen step length, tested by tryStep().

Implemented in SolverDDP, and SolverKKT.

◆ expectedImprovement()

virtual const Eigen::Vector2d& expectedImprovement ( )
pure virtual

Return the expected improvement \(dV_{exp}\) from a given current search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\).

For computing the expected improvement, you need to compute the search direction first via computeDirection().

Implemented in SolverFDDP, SolverDDP, and SolverKKT.

◆ resizeData()

void resizeData ( )
virtual

Resize the solver data.

If the shooting problem has changed after construction, this function resizes all the data before solving the problem again.

Reimplemented in SolverDDP, SolverBoxDDP, and SolverBoxFDDP.

Definition at line 56 of file solver-base.cpp.

◆ computeDynamicFeasibility()

double computeDynamicFeasibility ( )

Compute the dynamic feasibility \(\|\mathbf{f}_{\mathbf{s}}\|_{\infty,1}\) for the current guess \((\mathbf{x}^k,\mathbf{u}^k)\).

The feasibility can be computed using either the \(\ell_\infty\) or the \(\ell_1\) norm. By default we use the \(\ell_\infty\) norm; however, we can switch to the \(\ell_1\) norm via set_inffeas(). Note that \(\mathbf{f}_{\mathbf{s}}\) are the gaps on the dynamics, which are computed at each node as \(\mathbf{x}^{'}-\mathbf{f}(\mathbf{x},\mathbf{u})\).

Definition at line 66 of file solver-base.cpp.
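
For illustration only, the snippet below shows the two norm choices applied to the gaps returned by get_fs(); it is not the library's own computation, which may aggregate or normalize the gaps differently.

```cpp
#include <Eigen/Dense>

#include <algorithm>
#include <vector>

// Aggregate the gaps f_s with either the l-inf or the l-1 norm (illustration only).
double dynamic_feasibility(const std::vector<Eigen::VectorXd>& fs, const bool inffeas) {
  double feas = 0.;
  for (std::size_t k = 0; k < fs.size(); ++k) {
    feas = inffeas ? std::max(feas, fs[k].lpNorm<Eigen::Infinity>())  // worst gap
                   : feas + fs[k].lpNorm<1>();                        // accumulated gaps
  }
  return feas;
}
```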

◆ setCandidate()

void setCandidate ( const std::vector< Eigen::VectorXd > &  xs_warm = DEFAULT_VECTOR,
const std::vector< Eigen::VectorXd > &  us_warm = DEFAULT_VECTOR,
const bool  is_feasible = false 
)

Set the solver candidate trajectories \((\mathbf{x}_s,\mathbf{u}_s)\).

The solver candidates are defined as state and control trajectories \((\mathbf{x}_s,\mathbf{u}_s)\) of \(T+1\) and \(T\) elements, respectively. Additionally, we need to define the dynamic feasibility of the \((\mathbf{x}_s,\mathbf{u}_s)\) pair. Note that the trajectories are feasible if \(\mathbf{x}_s\) is the trajectory resulting from the system rollout with the \(\mathbf{u}_s\) inputs.

Parameters
[in]  xs_warm  state trajectory of \(T+1\) elements (default [])
[in]  us_warm  control trajectory of \(T\) elements (default [])
[in]  is_feasible  true if xs_warm is obtained from integrating us_warm (rollout)

Definition at line 103 of file solver-base.cpp.
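
A short sketch of the call order implied above: the candidate must be installed before asking for a search direction, and the feasibility flag should be true only when the states were obtained by rolling out the controls. The function and argument names below are illustrative, not part of the library.

```cpp
#include <crocoddyl/core/solver-base.hpp>

#include <vector>

// Install a warm-start candidate and request a search direction from it.
void warm_start(crocoddyl::SolverAbstract& solver,
                const std::vector<Eigen::VectorXd>& xs,  // T+1 states
                const std::vector<Eigen::VectorXd>& us,  // T controls
                const bool obtained_by_rollout) {        // true only if xs = rollout(us)
  solver.setCandidate(xs, us, obtained_by_rollout);
  solver.computeDirection(true);  // valid now that a candidate is defined
}
```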

◆ setCallbacks()

void setCallbacks ( const std::vector< boost::shared_ptr< CallbackAbstract > > &  callbacks)

Set a list of callback functions using for the solver diagnostic.

At each iteration, the solver calls this set of functions to allow the user to diagnose its performance.

Parameters
callbacks  set of callback functions

Definition at line 160 of file solver-base.cpp.
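
For example, a verbose callback can be attached as sketched below. The callbacks header path and the default constructor of CallbackVerbose are assumptions here; only setCallbacks() and CallbackAbstract come from this page.

```cpp
#include <crocoddyl/core/utils/callbacks.hpp>  // assumed location of CallbackVerbose

#include <boost/make_shared.hpp>
#include <vector>

// Attach a per-iteration diagnostic callback to any solver.
void attach_verbose(crocoddyl::SolverAbstract& solver) {
  std::vector<boost::shared_ptr<crocoddyl::CallbackAbstract> > cbs;
  cbs.push_back(boost::make_shared<crocoddyl::CallbackVerbose>());
  solver.setCallbacks(cbs);  // called by the solver at every iteration
}
```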

Member Data Documentation

◆ inffeas_

bool inffeas_
protected

True indicates that the \(\ell_\infty\) norm is used for computing the feasibility; false indicates the \(\ell_1\) norm.

Definition at line 331 of file solver-base.hpp.


The documentation for this class was generated from the following files:
solver-base.hpp
solver-base.cpp