When working with robocop you will need models of robots, called descriptions. A description defines the mechanical properties of a robot, such as the joints and bodies of a rigid multi-body system. Descriptions are written in URDF files, which follow the common URDF convention adopted by the robotics community.

The first step of robotic software development therefore consists in obtaining a robot description.

Preparing the work

First we create a dedicated pid workspace to develop our project:

git clone https://gite.lirmm.fr/pid/pid-workspace.git
cd pid-workspace
./pid configure

Then, in this workspace, we install the latest version of the robocop core library:

pid deploy package=robocop-core

Now we can start working with robocop.

Install a package for working with the Franka Emika Panda robot

Now let’s suppose we want to work with a simple robotic arm: the Panda robot from Franka Emika. As explained, the first step consists in providing a description for this robot. Fortunately this description already exists, so we simply install it in the workspace:

pid deploy package=robocop-franka-description

Then we go into this package repository:

pid cd robocop-franka-description

And open the content of the robocop-franka-description folder with your favorite code editor (e.g. code).

That’s it, we have installed the description for the panda robot, but for now it does not do much. Let’s first have a look at its content.

The root CMakeLists.txt looks like:

cmake_minimum_required(VERSION 3.19.8)
set(WORKSPACE_DIR ${CMAKE_SOURCE_DIR}/../.. CACHE PATH "root of the PID workspace directory")
list(APPEND CMAKE_MODULE_PATH ${WORKSPACE_DIR}/cmake) # using generic scripts/modules of the workspace
include(Package_Definition NO_POLICY_SCOPE)

project(robocop-franka-description)

PID_Package(
    ...
    PUBLIC_ADDRESS     https://gite.lirmm.fr/robocop/description/robocop-franka-description.git
    DESCRIPTION        "Franka panda robot and hand description (model + meshes)"
    VERSION            1.0.1
)

PID_Dependency(robocop-core VERSION 1.0)

...
#NOTE: explained later !


PID_Publishing(
    PROJECT https://gite.lirmm.fr/robocop/description/robocop-franka-description
    DESCRIPTION "Franka panda robot and hand description (model + meshes)"
    FRAMEWORK robocop
    CATEGORIES description/robot
    ...
)

build_PID_Package()

This is a quite common pattern for a PID package, nothing very special. We have deployed version 1.0.1 of the robocop-franka-description package.

For now the only important part is the dependency: robocop-franka-description depends on the robocop-core version 1.0 or any compatible version. This dependency is mandatory for any package contributing to the robocop framework.

Another important point is the naming of projects: by convention, any robocop package should be prefixed with robocop-. Packages providing descriptions should be suffixed with -description, so the package providing the description for franka robots is named robocop-franka-description.

Also, whenever a package provides new contributions to the robocop framework, like new descriptions, drivers, simulators or controllers, it is a good idea to declare it as a member of the robocop framework using the PID_Publishing function. This way it will be referenced/documented as a framework extension and share the same package namespace as the other packages of the framework.

Now let’s have a look at the src/CMakeLists.txt:

PID_Component(
    HEADER
    NAME franka-description
    DESCRIPTION "Franka panda robot and hand description (model + meshes)"
    RUNTIME_RESOURCES
        robocop-franka-description
)

It defines a header library called franka-description (a library without implementation) that in fact has no interface (no public header in the include folder). The names of libraries in description packages should always follow the convention: same name as the package but without the robocop- prefix. The only purpose of this “fake library” is to export files and folders containing the description of franka robots. This is achieved using the RUNTIME_RESOURCES argument of the PID_Component function.
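Put together, these conventions mean that a description package for a hypothetical robot myrobot would declare a component like this (illustrative names only, not an existing package):

```cmake
PID_Component(
    HEADER
    NAME myrobot-description            # package name minus the robocop- prefix
    DESCRIPTION "myrobot description (model + meshes)"
    RUNTIME_RESOURCES
        robocop-myrobot-description     # exported folder, named after the package
)
```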

Have a look into the share/resources subfolder of the project: you will find a folder named robocop-franka-description. This folder contains the exported description files. As a convention, the runtime resources of a description package are exported under a folder with the same name as the package, which is why the folder is named robocop-franka-description.
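The layout of this folder is roughly the following (a sketch based on the files discussed in this tutorial; the URDF file names for the hand and the combined model are assumed to mirror the YAML names):

```
share/resources/robocop-franka-description/
├── meshes/                    # mesh assets referenced by the URDF files
│   └── visual/
│       └── link0/             # e.g. link0_0.obj, link0_1.obj, ...
└── models/                    # robot description files
    ├── franka_panda.urdf
    ├── franka_hand.urdf
    ├── franka_panda_with_hand.urdf
    ├── franka_panda.yaml
    ├── franka_hand.yaml
    └── franka_panda_with_hand.yaml
```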

In this folder you can find a meshes folder, containing the assets used in the descriptions, and a models folder, containing the description files of the panda robot and other elements. There are three URDF files in the models folder: one for the panda arm, one for the panda hand and one grouping these two objects together. These URDF files are mostly standard, except for the paths referencing resources, which do not use ROS based path resolution. Let’s have a look at an extract of franka_panda.urdf:

<link name="panda_link0">
    <visual>
      <geometry>
        <mesh filename="robocop-franka-description/meshes/visual/link0/link0_0.obj"/>
      </geometry>
      <material name="link0_0_material">
        <color rgba="0.901961 0.921569 0.929412 1.000000"/>
      </material>
    </visual>
    <visual>
      <geometry>
        <mesh filename="robocop-franka-description/meshes/visual/link0/link0_1.obj"/>
      </geometry>
      <material name="link0_1_material">
        <color rgba="0.250980 0.250980 0.250980 1.000000"/>
      </material>
    </visual>
...
</link>    

The path to the first visual of panda_link0 is "robocop-franka-description/meshes/visual/link0/link0_0.obj". This path is a PID runtime resource path that targets the content of the declared runtime resource of the franka-description library (robocop-franka-description). Any package using the franka-description library will thus be able to find the corresponding files (in the example, meshes/visual/link0/link0_0.obj starting from the root folder robocop-franka-description).

Apart from this subtle difference, URDF descriptions remain standard.
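For comparison, here is the same mesh reference written with the ROS package:// scheme (not used by robocop descriptions) and with the PID runtime resource path (the ROS package name is illustrative):

```xml
<!-- ROS-style resolution, NOT used by robocop descriptions: -->
<mesh filename="package://franka_description/meshes/visual/link0/link0_0.obj"/>
<!-- PID runtime resource path, as used by robocop: -->
<mesh filename="robocop-franka-description/meshes/visual/link0/link0_0.obj"/>
```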

Robocop also provides other kinds of descriptions, in YAML format, to help predefine important joint groups. These files are also located in the models folder:

  • franka_hand.yaml includes the corresponding URDF and defines the hand joint group.
  • franka_panda.yaml includes the corresponding URDF and defines the arm joint group.
  • franka_panda_with_hand.yaml includes the previous YAML files and predefines how the hand is mounted on the arm.

Here is the code of franka_panda.yaml:

models:
  - path: robocop-franka-description/models/franka_panda.urdf

joint_groups:
  arm: panda_joint[1-7]

Inclusion of a URDF file is achieved using the path keyword under the models entry. models is a sequence of included models, either URDF and/or YAML. Joint groups are defined under the joint_groups entry. This entry contains a map where each key (e.g. arm) defines a joint group identifier and each value (e.g. panda_joint[1-7]) selects the joints that are part of the group. The value can be a regular expression, as in the example, to ease the selection of joints defined in the models entry.
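For instance, if regular expressions are not wanted, a group can presumably also enumerate its joints explicitly (hypothetical fragment reusing the joint names of the panda model):

```yaml
joint_groups:
  # equivalent to the regular expression panda_joint[1-7]
  arm: [panda_joint1, panda_joint2, panda_joint3, panda_joint4,
        panda_joint5, panda_joint6, panda_joint7]
```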

Here is the code of franka_panda_with_hand.yaml:

models:
  - include: robocop-franka-description/models/franka_panda.yaml
  - include: robocop-franka-description/models/franka_hand.yaml
    joint:
      parent: panda_link8
      position:
        euler: [0, 0, -0.7853981633974483]

joint_groups:
  all_joints: [arm, hand]

This description simply composes the previous models. The hand is attached to the panda arm following this pattern:

  • Each imported model (URDF or YAML) has a unique root body: in the example, one for the arm and one for the hand. This root body is used to attach the description of the robot to the world or to another body.
  • When we compose a description, the resulting global model must have only one root body. In the example we attach the root body of the hand to the end-effector body of the arm, named panda_link8, so the root body of the resulting model is the remaining root body: the one of the arm.
  • The attachment is made by defining a fixed joint in the hand description (using the joint entry under the same element of the sequence that imports the hand). This joint defines the connection between the hand root body and its parent body, panda_link8, defined in the arm description.
  • An attachment can also define the position of the attached body relative to the parent body frame. In the example the hand is rotated (in radians, counterclockwise) around the panda_link8 z axis in order to match the panda arm and hand predefined connection specifications.

From imported models we can also define joint groups, here all_joints, which defines a vector with the arm’s joints first and the hand’s joints after.

Now that we have a complete description, it’s time to test it.

Testing the description

To test the description we will use a simulator. The simulator helps to detect errors in the URDF files, in the meshes or in the description composition. So let’s first create an example application just for that.

Defining the example application

As our example application needs a simulator, we must import it, as well as the model library used to compute kinematics/dynamics. We do this in the root CMakeLists.txt by adding new dependencies:

...
PID_Dependency(robocop-core VERSION 1.0)

if(BUILD_EXAMPLES)
    PID_Dependency(robocop-sim-mujoco VERSION 1.0)
    PID_Dependency(robocop-model-pinocchio VERSION 1.0)
endif()

build_PID_Package()

We use MuJoCo as the simulator and the pinocchio library for model computations, so we define dependencies on the corresponding robocop packages that wrap these two pieces of software.

Then we can declare our example in apps/CMakeLists.txt:

PID_Component(
    franka-panda-sim-example
    EXAMPLE
    DESCRIPTION Loads a Franka Panda robot into the MuJoCo simulator
    DEPEND
        robocop/franka-description
        robocop/sim-mujoco
        robocop/model-pinocchio
    CXX_STANDARD 17
    WARNING_LEVEL ALL
)

Robocop_Generate(
    franka-panda-sim-example
    robocop-franka-description-examples/config.yaml
)

The example franka-panda-sim-example depends on the description library already defined (franka-description) and on the libraries interfacing robocop with the MuJoCo simulator and the pinocchio model. As all these libraries belong to packages that are published in the robocop framework, they can use robocop instead of the package name in the dependency description (e.g. we can define the dependency as robocop/franka-description instead of robocop-franka-description/franka-description). This is mostly cosmetic but greatly simplifies dependency expressions.
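Concretely, the following two DEPEND entries refer to the same library; the second form is available only because robocop-franka-description is published in the robocop framework:

```cmake
DEPEND
    robocop-franka-description/franka-description # explicit package/component form
    # or, equivalently, using the framework namespace:
    robocop/franka-description
```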

After the declaration of franka-panda-sim-example we have to call the Robocop_Generate macro. This macro is a code generator that uses an application description to generate the base code needed to use robocop functions. The first argument is the name of the application we want to generate code for (franka-panda-sim-example). The second argument is the file describing the application configuration: robocop-franka-description-examples/config.yaml.

The application configuration file looks like:

models:
  - include: robocop-franka-description/models/franka_panda_with_hand.yaml

data:
  joint_groups:
    - name: all_joints
      command: [JointPosition, JointForce]

processors:
  model:
    type: robocop-model-ktm/processors/model-ktm
    options:
      implementation: pinocchio
      input: state
      forward_kinematics: true
      forward_velocity: true
  simulator:
    type: robocop-sim-mujoco/processors/sim-mujoco
    options:
      gui: true
      mode: real_time
      target_simulation_speed: 1
      joints:
        - group: arm
          command_mode: position
          gravity_compensation: true
        - group: hand
          command_mode: force
          gravity_compensation: true
      filter:
        exclude:
          panda_link0: panda_link1
          panda_link1: panda_link2
          panda_link2: panda_link3
          panda_link3: panda_link4
          panda_link5: panda_link6
          panda_link6: panda_link7
          panda_link7: panda_hand
          panda_hand: [panda_leftfinger, panda_rightfinger]

  • The models section has the same meaning as for robot description files. By default, the root body of each imported model is attached to the world frame, unless a joint is specified. Here the imported model is the one describing a panda arm with a franka hand attached.
  • The processors entry contains a YAML map where each entry is the configuration of a processor. In the example we have two processors: the model library named model and the simulator named simulator. The type entry refers to the code generation plugin used; this topic will be discussed later. Simply remember that each processor can participate in the code generation process by adding new properties to joints and bodies. The options entry contains the specific options used to configure the processor at runtime.
  • The data entry can be used to add new properties to joints, bodies or joint groups. In the example we add the JointPosition and JointForce properties to the properties container named command for all joints of the robot (because we use the all_joints joint group already defined).

Let’s keep it simple for now; our description is complete, so we can have a look at the application code:

#include "robocop/world.h"

#include <robocop/sim/mujoco.h>
#include <robocop/model/pinocchio.h>
//...

int main(int argc, const char* argv[]) {
    using namespace std::literals;
    using namespace phyq::literals;
    robocop::World world;

    constexpr auto time_step = phyq::Period{1ms};
    auto model = robocop::ModelKTM{world, "model"};
    auto sim = robocop::SimMujoco{world, model, time_step, "simulator"};

    sim.set_gravity(
        phyq::Linear<phyq::Acceleration>{ {0., 0., -9.81}, "world"_frame});

    const auto arm_init_position =
        robocop::JointPosition{ {0, 0, 0, -2.1, 0, 2.1, 0.785} };

    auto& arm = world.joint_groups().get("arm");
    arm.state().set(arm_init_position);
    arm.command().set(arm_init_position);

    const auto hand_init_position = robocop::JointPosition{ {0.04, 0.04} };
    auto& hand = world.joint_groups().get("hand");
    hand.state().set(hand_init_position);
    hand.command().set(robocop::JointForce{ {0.1, 0.2} });

    sim.init();

    bool has_to_pause{};
    bool manual_stepping{};
    if (argc > 1 and std::string_view{argv[1]} == "paused") {
        has_to_pause = true;
    }
    if (argc > 1 and std::string_view{argv[1]} == "step") {
        has_to_pause = true;
        manual_stepping = true;
    }

    while (sim.is_gui_open()) {
        if (sim.step()) {
            sim.read();
            sim.write();
            if (has_to_pause) {
                sim.pause();
                if (not manual_stepping) {
                    has_to_pause = false;
                }
            }
        } else {
            std::this_thread::sleep_for(100ms);
        }
    }
}

The first include (#include "robocop/world.h") is for now “virtual” because the generator has not been executed yet. The file robocop/world.h results from the robocop generation process and provides the definition of the robocop::World object. The World is the cornerstone of robocop; it is used almost everywhere because it contains the complete description of the robotic system and its known environment. This includes all the structural objects describing what we know of the robot and its environment: joints and bodies, joint groups and their parameters.

The World also includes all the data fields. Data is modeled by properties contained in the containers of structural objects. Joints and joint groups have three property containers: limits, state and command; bodies have two: state and command. state generally refers to the current state of the element and command to the current command applied to this element. As an example, let’s suppose the simulator provides the current position of all joints: then the JointPosition property will exist in the state container of all Joint objects defined in the World. Depending on the application configuration (and so on both the models and the processors used) the description of the world can vary a lot.

After instantiating the World object we instantiate all the processors used:

robocop::World world;

constexpr auto time_step = phyq::Period{1ms};
auto model = robocop::ModelKTM{world, "model"};
auto sim = robocop::SimMujoco{world, model, time_step, "simulator"};

As you can notice, we pass the World as the first argument to all processors, and the last argument corresponds to the name entry used to configure the options of the processor. If you pass an invalid name (e.g. model for the simulator object) then the code will throw an exception. Please note that in this very simple application the model is only used by the simulator, so it is no longer used after the simulator’s instantiation.

The following part of the code initializes the values of the joint positions and the commands applied to the joints.

The last part initializes the simulator and launches the simulation loop. The loop is quite simple: as we do not change the command values, the robot will keep its position in the simulator.

Building and running the example

Now we build the example in order to test the description:

pid configure -DBUILD_EXAMPLES=ON
pid build force=true

And then we can run the example application:

./build/release/apps/robocop-franka-description_franka-panda-sim-example

And you should now see the panda robot in the simulator !!

panda in MUJOCO

With this simulator you can get a raw visualization of your robot and verify that your description is correct. You should immediately see if there is a problem in the structural elements (misplaced joints or meshes for instance).

You can also, for instance, use the “pause” button and the right panel to check the rotations and limits of each joint.

Once done you are ready to play with robocop controllers !

Preparing next steps

In the following steps, we will write a controller for our simulated robot. To do that we need to create a dedicated application package. We use the PID tool to create the package, which we name my-robocop-test-application. In the same workspace where you deployed the robocop-franka-description package:

pid cd
pid create package=my-robocop-test-application
pid cd my-robocop-test-application
# Open the current folder with your favorite code editor e.g.:
code .

If you look at the root CMakeLists.txt it is mostly empty. First we need to add the required dependencies; in the end your CMakeLists.txt should look like:

cmake_minimum_required(VERSION 3.19.8)
...
project(my-robocop-test-application)

PID_Package(
    ...
    VERSION            0.1.0
)

PID_Dependency(robocop-core VERSION 1.0)
PID_Dependency(robocop-sim-mujoco VERSION 1.0)
PID_Dependency(robocop-model-pinocchio VERSION 1.0)


PID_Dependency(robocop-franka-description VERSION 1.0)
PID_Dependency(robocop-qp-controller VERSION 1.0)

build_PID_Package()

The first three dependencies are straightforward to understand: they are the same as in the robocop-franka-description package, because we want to run a simulation using the MuJoCo simulator. We also want to reuse the descriptions of the panda robot defined in robocop-franka-description, so we add a dependency to this package. Finally, as we want to control the robot, we need a controller. We choose a versatile controller based on QP optimization, provided by the robocop-qp-controller package.

Now we want to define a new application, that we call my_sim_panda_controller_app. So we need to:

  • create the folder apps/my_sim_panda_controller_app and put an empty cpp file my_sim_panda_controller_app.cpp inside. This file will notably contain the definition of the application’s main function.
  • create the folder share/resources/my_sim_panda_controller_app and put an empty config.yaml file inside. This file will contain the application configuration, as previously explained.
  • write some CMake code in apps/CMakeLists.txt, in order to declare the application:
PID_Component(
    my_sim_panda_controller_app
    DEPEND
        robocop/core
        robocop/franka-description
        robocop/sim-mujoco
        robocop/model-pinocchio
        robocop/kinematic-tree-qp-controller

    CXX_STANDARD 17
    WARNING_LEVEL ALL
    RUNTIME_RESOURCES
        robocop-gesture-based-control
)

Robocop_Generate(
    my_sim_panda_controller_app
    my_sim_panda_controller_app/config.yaml
)

Nothing really new, it is the same logic as previously explained: my_sim_panda_controller_app is the application (executable) we want to create and it depends on the needed components. It is declared using PID_Component. Robocop_Generate tells the application where to find the application configuration file, relative to the share/resources folder.

The only new thing is the dependency to the robocop/kinematic-tree-qp-controller library. This library is provided by the robocop-qp-controller package. It implements a QP controller used to control kinematic trees. A robotic arm like the panda robot is a simple form of kinematic tree so we can use this controller to do the job.

Define application configuration

As a first step we need to define everything we will need in the World and the processors used. To do this we have to edit the my_sim_panda_controller_app/config.yaml file. We start from the description used in the robocop-franka-description example application and just add the use of the controller:

models:
  - include: robocop-franka-description/models/franka_panda_with_hand.yaml

data:
  joint_groups:
    - name: arm
      command: [JointPosition]

processors:
  model:
    type: robocop-model-ktm/processors/model-ktm
    options:
      implementation: pinocchio
      input: state
      forward_kinematics: true
      forward_velocity: true
  simulator:
    type: robocop-sim-mujoco/processors/sim-mujoco
    options:
      gui: true
      mode: real_time
      target_simulation_speed: 1
      joints:
        - group: arm
          command_mode: position
          gravity_compensation: true
        - group: hand
          command_mode: force
          gravity_compensation: true
      filter:
        exclude:
          panda_link0: panda_link1
          panda_link1: panda_link2
          panda_link2: panda_link3
          panda_link3: panda_link4
          panda_link5: panda_link6
          panda_link6: panda_link7
          panda_link7: panda_hand
          panda_hand: [panda_leftfinger, panda_rightfinger]

  controller:
    type: robocop-qp-controller/processors/kinematic-tree-qp-controller
    options:
      joint_group: all_joints
      velocity_output: true
      force_output: true
      include_bias_force_in_command: false
      solver: osqp
      hierarchy: strict

As you may have noticed the only modifications are:

  • the addition of a processor called controller. We configure it to generate commands in force (force_output: true) and velocity (velocity_output: true). It controls all the joints of the robot (the arm’s and the hand’s joints).
  • the data entry is modified: it just adds a JointPosition command to the arm’s joints. Indeed, the controller already generates JointForce and JointVelocity commands but no JointPosition command, and this property is required by the simulator to control the arm (command_mode: position).

For now let’s keep the application code simple; we just want to ensure that the code generation is correct. Edit the source file apps/my_sim_panda_controller_app/my_sim_panda_controller_app.cpp:

#include "robocop/world.h"

#include <robocop/sim/mujoco.h>
#include <robocop/model/pinocchio.h>

int main(int argc, const char* argv[]) {
    robocop::World world;
    //more to come
}

Then launch a first build:

pid configure
pid build

The process should deploy and (re)build dependencies as needed, which may take some time depending on the dependency resolution process performed by the PID build system. Once done, you can deactivate the automatic rebuilding of dependencies to save time on the next builds:

pid configure -DBUILD_DEPENDENT_PACKAGES=OFF

If everything went well, you should now see that a folder robocop has been created in the apps/my_sim_panda_controller_app folder: this is the result of the code generation. You should find robocop/world.h and robocop/world.cpp files that contain the C++ description of the World object.

Writing application code

Now let’s start writing the code for moving the robot in simulation. We start from the robocop-franka-description example application code and enrich it with a controller.

We modify the previous code a bit:

#include "robocop/world.h"

#include <robocop/core.h>
#include <robocop/sim/mujoco.h>
#include <robocop/model/pinocchio.h>
#include <robocop/controllers/kinematic_tree_qp_controller.h>

int main(int argc, const char* argv[]) {
    using namespace std::literals;
    using namespace phyq::literals;
    robocop::World world;

    constexpr auto time_step = phyq::Period{1ms};
    auto model = robocop::ModelKTM{world, "model"};
    auto sim = robocop::SimMujoco{world, model, time_step, "simulator"};
    sim.set_gravity(model.get_gravity());

    auto controller = robocop::qp::KinematicTreeController{
        world, model, time_step, "controller"};

    const auto arm_position =
        robocop::JointPosition{ {0, 0, 0, -2.1, 0, 2.1, 0.785} };

    auto& arm = world.joint_groups().get("arm");
    arm.state().set(arm_position);
    arm.command().set(arm_position);

    const auto hand_position = robocop::JointPosition{ {0.04, 0.04} };

    auto& hand = world.joint_groups().get("hand");
    hand.state().set(hand_position);
    hand.command().set(robocop::JointForce{ {0.1, 0.2} });


    //MARK 1: here we will define tasks and constraints

    sim.init();
    //remaining part of the code is the same as in `robocop-franka-description`
}

We simply added the include for using the controller and instantiated it. Build it and verify that everything is OK. It should do exactly the same as the robocop-franka-description example application:

pid build
./build/release/apps/my-robocop-test-application_my_sim_panda_controller_app

For now the controller is not used so nothing special happens. Please note the MARK 1 comment in the code: we will define the features of the controller at this place in the code. Features are the tasks and constraints that the controller will be capable of (de)activating. This will be explained later.

For now, we modify the simulation loop to use the controller:

#include "robocop/world.h"

#include <robocop/core.h>
#include <robocop/sim/mujoco.h>
#include <robocop/model/pinocchio.h>
#include <robocop/controllers/kinematic_tree_qp_controller.h>

int main(int argc, const char* argv[]) {
   //same as previously

    sim.init();

    bool has_to_pause{};
    bool manual_stepping{};
    if (argc > 1 and std::string_view{argv[1]} == "paused") {
        has_to_pause = true;
    }
    if (argc > 1 and std::string_view{argv[1]} == "step") {
        has_to_pause = true;
        manual_stepping = true;
    }

    int iter = 0;
    int state = 0;//current state of state machine
    while (sim.is_gui_open()) {
        if (sim.step()) {
            sim.read();
            auto curr_pos = arm.state().get<robocop::JointPosition>();

            if (iter >= 100) {
                // updating the model
                model.forward_kinematics();
                model.forward_velocity();

                //MARK 2: activate tasks and constraints, set targets

                // computing next command
                switch (controller.compute()) {
                case robocop::ControllerResult::NoSolution:
                    // right_arm_joint_position_task.enable();
                    fmt::print("No solution:\n {}\n",
                               controller.qp_problem_to_string());

                    break;
                case robocop::ControllerResult::PartialSolutionFound:
                    fmt::print("Partial solution: {}\n",
                               controller.qp_problem_to_string());
                    break;
                default: // found solution
                    break;
                }
            }
            iter++;
            //updating the position
            auto cmd_vel = arm.command().get<robocop::JointVelocity>();
            arm.command().update(
                [&time_step, &cmd_vel](robocop::JointPosition& position) {
                    position += cmd_vel * time_step;
                });
            
             if (iter % 100 == 0) {
                fmt::print("--------------------------------------\n");
                fmt::print("state: {}\n", state);
                fmt::print("arm current position:\n {}\n", curr_pos);
                fmt::print("arm command position:\n {}\n",
                           arm.command().get<robocop::JointPosition>());
                fmt::print("task target position:\n {}\n",
                           arm_joint_pos_task.target().input());
                fmt::print("interpolator output position:\n {}\n",
                           arm_joint_pos_task.target()
                               .interpolator()
                               .interpolator()
                               .output());
                fmt::print("arm command velocity:\n {}\n", cmd_vel);
                fmt::print("hand command force:\n {}\n",
                           hand.command().get<robocop::JointForce>());
            }
            // sending command to the simulator
            sim.write();
            if (has_to_pause) {
                sim.pause();
                if (not manual_stepping) {
                    has_to_pause = false;
                }
            }
        } else {
            std::this_thread::sleep_for(10ms);
        }
    }
}

What we added:

  • we added a little delay (100 iterations, i.e. 100 ms) at the beginning of the execution to let the simulator converge to its initial state. This is only to avoid instabilities later in the example. Once this time has elapsed, the controller starts generating commands.
  • the two lines model.forward_kinematics() and model.forward_velocity() are used to update the state (respectively the bodies’ poses and velocities) using the model library. The model object reads the position/velocity of all joints of the World, computes the current bodies’ poses/twists, then writes this information into the World. After this the World state is up to date.
  • controller.compute() tells the controller to compute the next joint command. As you can see there may be no solution (or only a partial solution) to the control problem, and this is reported on the console. If a (possibly partial) solution has been found, the JointVelocity and JointForce properties of the command container of all controlled joints of the World are updated. The controlled joints are those targeted by the joint_group entry of the controller configuration in my_sim_panda_controller_app/config.yaml: it is set to the all_joints group so all joints of the World are controlled. If no solution has been found then the command vector is not updated.
  • we need to correctly set the arm’s target position. Indeed, the controller generates velocity and force commands but not the position commands needed by the simulator to control the arm. We use the update function and pass a lambda that updates the joint position by integrating the generated velocity (read using arm.command().get<robocop::JointVelocity>()).
  • we print the command outputs every 100 control cycles (10 prints per second).
  • finally, after the command vector is computed, sim.write() reads the adequate values of the joint properties from the World and applies them in the simulation.

Please note that we added the MARK 2 comment tag: this is where we will dynamically change the controller configuration, meaning that we will change the active tasks and constraints. We will explain this a bit later.

Now build and run again:

pid build
./build/release/apps/my-robocop-test-application_my_sim_panda_controller_app

The result appears to be exactly the same as previously. Indeed if you look at your console you should see No solution: printed, meaning the controller found no solution. This is perfectly normal as there is no active task, so there is basically no control problem to solve. Since the controller does not update commands when no solution has been found, the initial zero velocity command is never changed in the World and so everything keeps working as previously: the position does not change.

Try to move the robot a bit by generating forces on the robot's segments in the simulation. To exert a force on a segment, in the MuJoCo interface:

  • left click on a segment to select it.
  • hold the Ctrl key and move the mouse in a direction while holding the right mouse button.

You should see the robot moving back to its initial position. This is normal because the arm's joints are position controlled in the simulator, meaning that the simulator implements a position regulation feedback loop. Since for now our controller generates no change in the arm's target position, the robot goes back to its initial target position.

Now we are ready to work with a controller, as explained in the next section.

Using the controller

Now comes the interesting part: defining tasks and constraints that will then be used to control the robot.

About tasks and constraints

General definition:

  • tasks are used to define the behavior of the robot: what it has to achieve.
  • constraints define invariants that the global robot behavior must respect.

Tasks and constraints can be activated or deactivated during program execution. Their parameters and configuration can also be changed online. Among these parameters, task targets will surely change during program execution. A target is the objective the task has to fulfill and depends on the task type.

Tasks and constraints exist in three flavors:

  • joint: acts on a specific joint group. The target is a vector.
  • body: acts on a specific body.
  • generic: acts on something different from a single joint group or a single body (e.g. system total kinetic energy, all controlled bodies for collision avoidance).

Each controller provides a set of constraint types that belong to one of the three previous categories and that implement specific behaviors. For instance the robocop::qp::KinematicTreeController provides constraints such as:

  • joint position, joint velocity, joint force constraints.
  • body velocity, body acceleration constraints.

Each controller provides a set of task types that belong to one of the three previous categories and that implement specific behaviors. For instance the robocop::qp::KinematicTreeController provides tasks such as:

  • joint velocity control, joint position control.
  • body velocity control, body position control, body external force control.

Tasks all embed an interpolator:

interpolator

  • The goal is the target given to the task by an external program (hard coded or more sophisticated code). For a position task, for instance, it is the target position.
  • the interpolator is executed at each time step of the controller (i.e. each time the compute() function is called and the task is active). It generates the next target (output) for a given time step. By default the identity interpolator is used on any task, so the goal is simply kept as the output target at each time step. Interpolators can be used anytime you need, for instance, to generate a trajectory.
  • Users can configure the interpolator of any task, if required.

Furthermore, there are specializations of tasks called TaskWithFeedBack that have the following pattern:

feedback

These tasks implement a feedback loop: an error between the output target (e.g. target joint position) and the corresponding state (current joint position) is computed and forwarded to the feedback loop algorithm, which in turn generates a target (internal target) that is sent to a subtask (internal task) in charge of executing it. This type of task is useful to implement tasks that would otherwise be difficult or impossible to write within the given controller. For instance all position tasks (either joint or body) of robocop::qp::KinematicTreeController are implemented using TaskWithFeedBack. Their subtask is the corresponding velocity task type, so the internal target is a velocity (JointVelocity or BodyVelocity).

The direct consequence is that the user has to define which algorithm is used to implement the feedback loop.

That’s enough for concepts, all of this will be demonstrated in the next subsections.

First constraint and task

We will now see how to define a simple joint position constraint and a joint position task.

As a first remark, please remember that tasks and constraints are defined in code at the MARK1 position, while (de)activation and task target modifications are performed at the MARK2 position. To keep the example code simple we will not repeat the whole code but just show the code put at the MARK1 and MARK2 positions.

So let’s start by defining the joint position constraint:

// MARK1
controller.set_auto_enable(true);

// joint position constraint
auto& pos_cst_param =
    controller
        .add_constraint<robocop::qp::kt::JointPositionConstraint>(
            "positin_constraint", controller.controlled_joints())
        .parameters();
pos_cst_param.min_position = controller.controlled_joints()
                                  .limits()
                                  .lower()
                                  .get<robocop::JointPosition>();
pos_cst_param.max_position = controller.controlled_joints()
                                  .limits()
                                  .upper()
                                  .get<robocop::JointPosition>();

First notice the call to controller.set_auto_enable(true): this instructs the controller to automatically enable (i.e. activate) tasks and constraints when they are created. A call to controller.set_auto_enable(false) would have done the reverse: they would be inactive when created.

A constraint is defined by calling add_constraint function of the controller with the desired type (here robocop::qp::kt::JointPositionConstraint). The first argument is the name of the constraint ("position_constraint") and the second is the target joint group. Here we want to specify position limits for all controlled joints so we use controller.controlled_joints() to easily get this joint group.

We then get access to the constraint parameters (pos_cst_param) in order to set them. The parameters of the constraint allow setting the min and max position of the joints. We simply set them to the limits defined in the robot model by reading the corresponding joint property (e.g. controller.controlled_joints().limits().lower().get<robocop::JointPosition>()).

This constraint ensures that the control solution is kept within a physically feasible range. We could have decided to restrict this range further by setting a specific vector of values.

Now it’s time to define our first task (simply write the following code just after the previous lines):

// just after previous lines

// joint position task
auto& arm_joint_pos_task = controller.add_task<robocop::JointPositionTask>(
    "arm_joint_position_task", arm);

arm_joint_pos_task.target()
    .interpolator()
    .set<robocop::CubicConstrainedInterpolator<robocop::JointPosition>>(
        robocop::JointVelocity::constant(arm.dofs(), 1),
        controller.time_step());

auto& arm_joint_fb = arm_joint_pos_task.feedback_loop()
                          .set_algorithm<robocop::ProportionalFeedback>();
arm_joint_fb.gain()->resize(arm.dofs());
arm_joint_fb.gain()->setConstant(5);

A task is defined by calling the add_task function of the controller with the desired type (here robocop::JointPositionTask). The first argument is the name of the task ("arm_joint_position_task") and the second is the target joint group (here arm). By specifying the arm joint group we limit the action of this task to arm’s joints, so hand’s joints will not be influenced by this task.

Then, as explained in the introduction subsection, we define an interpolator for this task (using the target().interpolator().set() function). The interpolator used is provided by default in the robocop-core library: it is a cubic polynomial interpolator managing maximum velocity constraints. We therefore use the type robocop::CubicConstrainedInterpolator<robocop::JointPosition> as the template argument of the set() function. This type simply tells that the interpolation applies to a robocop::JointPosition (a vector of joint positions). Due to the use of this type we need to pass two arguments to the set() function:

  • the first is the maximum velocity, a robocop::JointVelocity (a vector of joint velocities). We set the maximum velocity used for interpolation to 1 radian per second.
  • the second is the time step used for generating intermediary output targets at each time step. We simply use the same time step as the controller, but we could have used a greater one.

The following part consists in defining the algorithm used for the feedback loop (using the feedback_loop().set_algorithm() function). The robocop-core library provides a set of basic feedback algorithms; here we simply use a proportional feedback (robocop::ProportionalFeedback) passed as the template argument of the set_algorithm() function. We can then set the proportional gain of the feedback loop to a constant value of 5 for all joints, meaning that the computed feedback error is multiplied by 5 to generate a resulting JointVelocity. Please note that since this is a joint task, the value of all quantities (including gains) is a dynamic vector: we first need to allocate the vector memory (using the resize function with as many dofs (degrees of freedom) as provided by the target joint group) prior to setting it.

Then we need to correctly activate the task. So we also need to edit the code at MARK 2:

// MARK2
switch (state) {
  case 0:
    arm_joint_pos_task.target() = arm_position;
    arm_joint_pos_task.target().reset_interpolator();

    state = 1;
    break;
}

This code implements a simple state machine used to activate tasks. It simply tells that, just after the simulation initialization steps, we set the task target (the goal in the task-with-feedback diagram), which is a JointPosition (a vector of positions). We set it to the same target position as the one used in the preliminary steps. We also need to call reset_interpolator() on the task target: this sets the initial position of the interpolator to the actual joint position and resets the interpolation process to restart from this new initial position.

That’s enough for now, build and execute the code:

pid build
./build/release/apps/my-robocop-test-application_my_sim_panda_controller_app

Everything appears similar to previously in the simulator: the robot does not move. This is perfectly normal as the target arm position is the same. The only apparent difference is that the controller now computes valid control solutions: there are no more No solution outputs in the terminal.

But in fact things are not really similar: if you exert some forces on the arm's segments in the simulation, it now oscillates around the target position.

There is a big problem

First you have to consider that since the target and initial position are the same, the interpolator's job is immediately finished and it simply forwards the goal position (i.e. no interpolation is performed). Then, on the next cycles, the interpolator is never reset so it continues to forward the same goal position. What happens when you generate forces is that the actual position changes and so the position error between the actual position and the goal may become big. Since the feedback loop is a simple proportional one, the generated target velocity (the target for the internal velocity task) becomes proportionally bigger. In the end the controller generates a command velocity that can become high, and so the next command position may be far away from the actual position. This is the cause of the oscillation.

Why is this problem happening? The answer can be found in the simulator implementation. Indeed, dynamics simulators like MuJoCo simulate forces in the environment. When we configured the simulator to control the arm joints in position, it means that there is an algorithm, like a PID regulator (or a cascade of regulators), that generates forces to make the joints reach the given command position (the one we generate in our control loop). Depending on the PID gains, convergence to the command position can take more or less time, which also depends on the error between the actual position and the command position. In the meantime, since the simulated robot's actual position does not converge quickly to the target position, the task feedback loop generates higher velocities, which in turn generate greater changes in command positions. Those commands are sent to the simulator, which in turn cannot track them correctly, and so on…

Tuning the gains of the regulation algorithms used in the simulator is theoretically possible but it would be a pain. We would have to do that for every particular robot. Fortunately there is a simple solution: directly controlling the robot in force. Doing so we bypass all the regulation algorithms and directly control the simulated force.

To do that we first need to modify the application configuration file:

models:
  - include: robocop-franka-description/models/franka_panda_with_hand.yaml

# data entry is no longer useful

processors:
  model:
    #same as previously
  simulator:
    type: robocop-sim-mujoco/processors/sim-mujoco
    options:
      gui: true
      mode: real_time
      target_simulation_speed: 1
      joints:
        - group: arm
          command_mode: force # change position to force
          gravity_compensation: true
      #remaining part is the same as previously
  controller:
    #same as previously

We no longer need to add a JointPosition command to the arm joint group, so the data entry is simply deleted. Then we change the command mode from position to force.

Then we need to modify the code to take into account the modifications.

//same includes

int main(int argc, const char* argv[]) {
    //beginning same as previously

    auto arm_position = robocop::JointPosition{ {0, 0, 0, -2.1, 0, 2.1, 0.785} };

    auto& arm = world.joint_groups().get("arm");
    arm.state().set(arm_position);
    arm.command().set(robocop::JointForce{phyq::zero, arm.dofs()});

    auto hand_position = robocop::JointPosition{ {0.04, 0.04} };

    auto& hand = world.joint_groups().get("hand");
    hand.state().set(hand_position);
    hand.command().set(robocop::JointForce{ {0.1, 0.2} });
//... same as previously
}

So at the beginning we simply change the initial arm command. It is no longer a JointPosition but a JointForce, so we changed the code with this line:

arm.command().set(robocop::JointForce{phyq::zero, arm.dofs()});

That simply sets a zero torque command for all joints, so the arm will not move. Please note that, as we set the option `gravity_compensation: true` in the simulator configuration options, we let the simulator automatically add the torques required to compensate gravitational effects.

Then, in the last lines of the code:

  //... previous lines same as previously
  iter++;
  // reading the velocity command computed by the controller
  auto cmd_vel = arm.command().get<robocop::JointVelocity>();
  // removed update of the position
  if (iter % 100 == 0) {
      fmt::print("--------------------------------------\n");
      fmt::print("state: {}\n", state);
      fmt::print("arm current position:\n {}\n", curr_pos);
      // removed print of the arm position command
      fmt::print("task target position:\n {}\n",
                  arm_joint_pos_task.target().input());
      fmt::print("interpolator output position:\n {}\n",
                  arm_joint_pos_task.target()
                      .interpolator()
                      .interpolator()
                      .output());
      // add print of the arm force command
      fmt::print("arm command force:\n {}\n",
                  arm.command().get<robocop::JointForce>());
      fmt::print("arm command velocity:\n {}\n",cmd_vel);
      fmt::print("hand command force:\n {}\n",
                  hand.command().get<robocop::JointForce>());
  }
  // sending command to the simulator
  sim.write();
  //... following lines same as previously

So what we did is simply remove everything related to the JointPosition command (the update() function and prints) and add a print of the JointForce command. In the end the code is even simpler than the previous version.

That’s it, build and execute the code:

pid build
./build/release/apps/my-robocop-test-application_my_sim_panda_controller_app

Now the robot behaves correctly. You can exert forces and it goes back quickly to the initial position without oscillations.

There is still something missing

If you exert forces on segments of the arm you should see the hand's fingers moving a bit in one direction or the other depending on the force direction, before finally stabilizing. This phenomenon is normal.

So why is this happening? This is quite simple: no task is set to control the fingers, so any control solution lying within the fingers' min and max position limits is valid. Since the fingers are force controlled (see the configuration of the simulator in the application configuration file), the controller can generate any torque that keeps them within the position limits, so if the force you exert or the gravity force is higher than the commanded force, the fingers move.

It is of course never a good idea to leave joints uncontrolled, so we correct this immediately by defining a task to control the fingers:

// just after previous lines of MARK1

// joint position task for HAND !!
auto& hand_joint_pos_task = controller.add_task<robocop::JointPositionTask>(
    "hand_joint_pos_task", hand);

auto& hand_joint_fb = hand_joint_pos_task.feedback_loop()
                          .set_algorithm<robocop::ProportionalFeedback>();
hand_joint_fb.gain()->resize(hand.dofs());
hand_joint_fb.gain()->setConstant(3);

The code is similar to the arm joint position task. We just removed the interpolator, as we do not really care about smoothing the trajectory of the fingers, and reduced a bit the gains of the feedback loop to avoid too high velocities of the fingers.

And we change the previous state of the state machine:

// MARK2
switch (state) {
  case 0:
    arm_joint_pos_task.target() = arm_position;
    arm_joint_pos_task.target().reset_interpolator();
    hand_joint_pos_task.target() = hand_position;
    state = 1;
    break;
}

Rebuild and execute:

pid build
./build/release/apps/my-robocop-test-application_my_sim_panda_controller_app

Now you can try again moving segments of the arm: the fingers keep their target position.

For the last part of this subsection we will finally make the robot move a bit !

Changing the target dynamically

To do this we simply want to change the arm target position, so we modify again the state machine:

// MARK2
switch (state) {
case 0:
  arm_joint_pos_task.target() = arm_position;
  arm_joint_pos_task.target().reset_interpolator();

  hand_joint_pos_task.target() = hand_position;
  state = 1;
  break;
case 1:
  if (iter == 5000) {
      // after 5 seconds change the target
      arm_position << 0_rad, 0_rad, 0_rad, 0_rad, 0_rad,
          1.57_rad, 0_rad;
      fmt::print("changing arm position to: {}\n",
                  arm_position);
      arm_joint_pos_task.target() = arm_position;
      arm_joint_pos_task.target().reset_interpolator();
      state = 2;
  }
  break;
 case 2:
  double current_residual =
      (arm_joint_pos_task.target().input() -
        arm_joint_pos_task.state()
            .get<robocop::JointPosition>())
          ->norm();

  if (current_residual < 0.3) {
      arm_joint_pos_task.target() = curr_pos;
      arm_joint_pos_task.target().reset_interpolator();
      state = 3;
  }
  break;
}

Rebuild and launch. After 5 seconds of execution during which the robot stays in its initial position, we change the target to a new position and regenerate the trajectory using reset_interpolator() (going into state 2). Once done, the code waits until the error between the target (goal) and the state (current joint position) is below a threshold to stop the motion.

If you exert forces using simulator interface the arm will go back to this final position.

Body constraints and task

In the previous subsection you learned the basics of using a robocop controller. But obviously, what one expects from an advanced controller is not simply to control a robot at the joint level (even if it is often useful to implement some phases of a robotic demonstration).

Now what we want is to control bodies of the simulated arm. To do that, we will define a body task and a body constraint.

So the first step consists in defining the new task and constraint:

// just after previous lines of MARK1

// add a body velocity constraint on end effector
controller.set_auto_enable(false);
auto& end_effector = world.body("panda_link8");

auto& ee_vel_cstr =
    controller.add_constraint<robocop::qp::kt::BodyVelocityConstraint>(
        "velocity_constraint", end_effector,
        robocop::ReferenceBody{world.world()});
ee_vel_cstr.parameters().max_velocity.set_constant(1);

// add a body position task on end effector
auto& ee_pos_task = controller.add_task<robocop::qp::kt::BodyPositionTask>(
    "end_effector_position_task", end_effector,
    robocop::ReferenceBody{world.world()});

auto& ee_fb = ee_pos_task.feedback_loop()
                  .set_algorithm<robocop::ProportionalFeedback>();
ee_fb.gain().set_constant(2);
auto& ee_pos_task_target = ee_pos_task.target();
ee_pos_task_target.interpolator()
    .set<robocop::LinearTimedInterpolator<robocop::SpatialPosition>>(
        phyq::Duration<>{2s}, time_step);

//will be used in state machine
auto init_time = std::chrono::steady_clock::now();

We use controller.set_auto_enable(false); to avoid those tasks being active at the beginning of the program. Indeed we want to enable them in a second phase of the execution.

When designing a body task we need both a target body (the one being controlled) and a reference body (the one defining the frame of the task's target spatial values). We want to target the panda arm end effector, which is done using the world accessors on bodies: auto& end_effector = world.body("panda_link8");. For the reference body we keep it simple: we simply use the world “virtual body” representing the world frame (using world.world()). Please note that the type of the reference body is robocop::ReferenceBody: this is just a wrapper type whose only utility is to discriminate function arguments in a clean way.

Then we can define a constraint using add_constraint as usual but this time the type passed as parameter is robocop::qp::kt::BodyVelocityConstraint. For body constraints arguments are:

  • the name of the constraint, here "velocity_constraint".
  • the target body, here end_effector.
  • the reference body, here robocop::ReferenceBody{world.world()}.

Then we can set the parameters of the constraint. BodyVelocityConstraint is the simplest constraint, with only one parameter specifying the maximum velocity, max_velocity. Please notice that in this case the max velocity is a spatial quantity, which means it has linear and angular parts. Spatial quantities generally have 3 components (x, y, z) for the linear part and 3 components (rx, ry, rz) for the angular part. So in the example the expression max_velocity.set_constant(1) sets the max velocity to 1 meter per second on each component of the linear part and 1 radian per second on each component of the angular part.

After the constraint we define (using add_task) a body task of type robocop::qp::kt::BodyPositionTask. For body tasks the minimum argument set is the same as for constraints. Please note that depending on the controller some optional parameters can be provided. For instance the robocop::qp::KinematicTreeController provides an optional fourth argument allowing to define the root body. It is used to tell what the root of the kinematic chain is and can be useful in many situations, but this is an advanced technique and is not discussed in this simple example.

ee_pos_task is a body position task that acts on the panda arm end effector and whose targets are expressed in the world frame (robocop::ReferenceBody{world.world()}). As for joint position tasks, we define an interpolator (robocop::LinearTimedInterpolator<robocop::SpatialPosition>) and a feedback loop (robocop::ProportionalFeedback). The LinearTimedInterpolator requires a duration and a time step as arguments, so they are passed when calling the set function of the interpolator: the trajectory should take at least 2 seconds to be executed.

Finally we create a variable init_time that will be used to count time.

Then we modify again the state machine, to make it use the body task and constraint:

// MARK2
switch (state) {
case 0:
  arm_joint_pos_task.target() = arm_position;
  arm_joint_pos_task.target().reset_interpolator();

  hand_joint_pos_task.target() = hand_position;
  state = 1;
  break;
case 1:
  if (iter == 5000) {
      // after 5 seconds change the target
      arm_position << 0_rad, 0_rad, 0_rad, 0_rad, 0_rad,
          1.57_rad, 0_rad;
      fmt::print("changing arm position to: {}\n",
                  arm_position);
      arm_joint_pos_task.target() = arm_position;
      arm_joint_pos_task.target().reset_interpolator();
      state = 2;
  }
  break;
 case 2:
  double current_residual =
      (arm_joint_pos_task.target().input() -
        arm_joint_pos_task.state()
            .get<robocop::JointPosition>())
          ->norm();

  if (current_residual < 0.3) {
      arm_joint_pos_task.target() = curr_pos;
      arm_joint_pos_task.target().reset_interpolator();
      init_time = std::chrono::steady_clock::now();//start counting time
      state = 3;
  }
  break;
case 3://new state
  // wait a bit to stabilize
  if (std::chrono::duration_cast<std::chrono::seconds>(
          std::chrono::steady_clock::now() - init_time) >
      2s) {
      state = 4;
      arm_joint_pos_task.disable();
      ee_vel_cstr.enable();
      ee_pos_task.enable();

      // setting position to actual -35 cm on Z and +35 cm on Y
      ee_pos_task_target =
          model.get_body_position(end_effector.name());
      ee_pos_task_target->linear().z() += -35_cm;
      ee_pos_task_target->linear().y() += 35_cm;
      ee_pos_task_target.reset_interpolator();
  }
  break;
}

We reset the time counting during the transition from state 2 to 3. We use time counting in state 3 to wait 2 seconds for the robot to stabilize in the previous joint position. Then we disable the joint position task and enable the body position task and the body velocity constraint. We need to set the task target as usual: we set it to the actual pose with a 35 cm linear offset on the z and y axes. Please note we use the arrow operator (->) to implicitly access the target input value. The input is also explicitly accessible using the input() function. We finally reset the interpolator to make it start from the current position.

That’s it, build and execute !

You should see the robot executing a smooth trajectory toward the target pose, but then moving fast to the target pose and finally oscillating a bit around the target before stabilizing.

What happens exactly? To better understand, we first add prints (in the same section as the other prints):

fmt::print("task target: {:r{euler}}\n", ee_pos_task_target.input());
fmt::print("task interpolated: {:r{euler}}\n", ee_pos_task_target.output());
fmt::print("task state: {:r{euler}}\n",ee_pos_task.feedback_state());
fmt::print("task error: {}\n", ee_pos_task.feedback_error());

Note: The fmt decorator :r{euler} allows printing poses (spatial positions) with a human readable representation.

What you need to understand is that:

  • the interpolator generates a trajectory in Cartesian space, without knowledge of the robot kinematics.
  • the feedback loop of the BodyPositionTask tries its best to track the current target output spatial position at each time step, by generating spatial velocities that are in turn managed by its internal velocity task. Again, the feedback loop has no knowledge of the robot kinematics.
  • all of this makes the trajectory following theoretical, because you cannot ensure it will be perfectly tracked. Indeed kinematic constraints (such as joint velocity or acceleration) or other constraints may apply (for instance the BodyVelocityConstraint), and so the controller solution may not perfectly track this spatial trajectory.

So in the end, you will need to find a good configuration of your task in order to make it correctly track a position or a trajectory. That is what we will now try to do.

So let’s try to correct the problem.

If you look at the task interpolated and task state outputs on two or three successive frames, it is clear that the current target output (i.e. the output of the interpolator) is not so well tracked. Ideally we would like the next pose to be very close to the previous task interpolated target output. One way to reduce the error would be to increase the gains. We can do this directly in state 3 of the state machine by increasing the gains of the feedback loop:

ee_fb.gain().set_constant(5);//previously it was 2

It will generate higher values for the target of the internal velocity task used by the position task. After rebuilding and launching again you should see that there is no improvement; the robot is even oscillating more at the end. This is perfectly normal: as we increased the gains, the regulator correction is higher and overcorrects the position. It then takes more time to stabilize. We can see that the position is better tracked now (compare successive task interpolated and task state outputs), but only at the beginning. In the second half of the execution the error increases. If the trajectory were perfectly tracked, the error output should be more or less “constant” and close to zero. We increase the gains again (let's say to a value of 10) to see the result after rebuilding and relaunching. We can see that nothing really changed and that the oscillation is even worse.

What we can observe is that the trajectory tracking ends “quickly” (when task interpolated converges to the same value as task target) while the error is still quite high. So what we can do is give the LinearTimedInterpolator more time to converge (let's say 5 seconds instead of 2). This can be achieved by changing the value passed when the set function of the interpolator is called. We can also put the gains back to their original value. Let's rebuild/relaunch and see.

If you look at the task outputs it is now clear that the y component is not so badly tracked, while the error on the z component keeps increasing. This explains the behavior: once the trajectory has finished playing (the interpolator output equals the interpolator input), the error has become quite big on the z axis (more than 20 cm of error). At this moment only the feedback loop is influencing the behavior of the position task. The correction being proportionally as big as the error, the robot finally converges very fast to the target position (the error is close to zero when the robot stabilizes).

Even if the behavior we see in simulation can be explained, we still do not understand why it happens. Indeed the tracking does not seem so bad and the trajectory is well generated.

It is always a bit difficult to understand and explain, but here is a possible explanation:

  • if you look at the first steps of the trajectory execution, the first axis rotates in one direction and its value gets near its lower limit.
  • then, once the trajectory play is finished, the system seems to “hesitate”, then the first axis quickly goes in the reverse direction, all other axes adapting their position consequently.

The most probable explanation is that the controller was trapped in a local minimum: it chose to converge to a joint configuration that was locally good at first but that made it reach a state where it cannot correct the z axis correctly due to its limits. The error on z continues to increase and the controller finally finds a solution that consists in rotating the first axis in the reverse direction. This is a drawback of QP controllers, which only find immediate local solutions without looking at long term goals (i.e. the target input). Generally, keep in mind that this kind of situation becomes more probable as the distance between the state and the input target increases.

Let’s validate this by simply reducing the distance between the state and the target. In state 3, we simply reduce the offset of the target relative to the current position:

case 3:
  // wait a bit to stabilize
  if (std::chrono::duration_cast<std::chrono::seconds>(
          std::chrono::steady_clock::now() - init_time) >
      2s) {
      state = 4;
      arm_joint_pos_task.disable();
      ee_vel_cstr.enable();
      ee_pos_task.enable();

      // offset the target by -10 cm on Z and +15 cm on Y
      ee_pos_task_target =
          model.get_body_position(end_effector.name());
      ee_pos_task_target->linear().z() += -10_cm;
      ee_pos_task_target->linear().y() += 15_cm;
      ee_fb.gain().set_constant(2);
      ee_pos_task_target.reset_interpolator();
  }
  break;
case 4:
  if (phyq::abs(ee_pos_task.feedback_state().linear().y() -
                ee_pos_task_target.input().linear().y()) <
      1_cm) {
      ee_pos_task_target->linear().y() += 20_cm;
      ee_pos_task_target.reset_interpolator();
      state = 5;
  }
  break;
case 5:
  if (phyq::abs(ee_pos_task.feedback_state().linear().y() -
                ee_pos_task_target.input().linear().y()) <
      1_cm) {
      ee_pos_task_target->linear().z() += -20_cm;
      ee_pos_task_target.reset_interpolator();
      state = 6;
  }
  break;

Indeed, convergence is better in states 4 and 5, but once state 6 is active the same behavior finally appears.

There is no simple way to overcome this situation. The best option would be to use a kinematics-based trajectory planner (e.g. MoveIt) that generates a full joint-space trajectory to reach a goal in Cartesian space. Then you can use trajectory following interpolators plugged into a joint position task (this already exists in robocop). Native robocop interfaces to MoveIt-like algorithms are not implemented yet, but one can imagine connecting robocop code to a MoveIt ROS node that would do the job.

Another way would be to develop a model predictive controller that takes into account a long enough time window, but this is not implemented in robocop yet either and is far beyond the scope of this tutorial.

So the best way to overcome this situation is to change the robot’s starting configuration, and maybe also the task targets, finding values that work by incremental testing.

For instance with this state machine:

switch (state) {
case 0:
    arm_joint_pos_task.target() = arm_position;
    arm_joint_pos_task.target().reset_interpolator();

    hand_joint_pos_task.target() = hand_position;
    state = 1;
    break;
case 1:
    if (iter == 2000) {
        // after 5 seconds change the target
        arm_position << 1.57_rad, 0_rad, 1.57_rad, -1.57_rad,
            0_rad, 1.57_rad, 0_rad;
        arm_joint_pos_task.target() = arm_position;
        arm_joint_pos_task.target().reset_interpolator();
        state = 2;
    }
    break;
case 2: {
    double current_residual =
        (arm_joint_pos_task.target().input() -
          arm_joint_pos_task.state()
              .get<robocop::JointPosition>())
            ->norm();

    if (current_residual < 0.3) {
        arm_joint_pos_task.target() = curr_pos;
        arm_joint_pos_task.target().reset_interpolator();
        state = 3;
        init_time = std::chrono::steady_clock::now();
    }
} break;
case 3:
    // wait a bit to stabilize
    if (std::chrono::duration_cast<std::chrono::seconds>(
            std::chrono::steady_clock::now() - init_time) >
        2s) {
        state = 4;
        arm_joint_pos_task.disable();
        ee_vel_cstr.enable();
        ee_pos_task.enable();

        // offset the target by -25 cm on Z and +25 cm on Y
        ee_pos_task_target =
            model.get_body_position(end_effector.name());
        ee_pos_task_target->linear().z() += -25_cm;
        ee_pos_task_target->linear().y() += 25_cm;
        ee_fb.gain().set_constant(2);
        ee_pos_task_target.reset_interpolator();
    }
    break;
//more to come...
}

We changed the robot’s initial target joint position and slightly modified the task targets: now you should see everything working as expected.

Modifying the initial joint configuration before starting body tasks can change the behavior a lot. This is particularly true for kinematically redundant robots like 7+ DoF robotic arms.

Multiple body tasks and controlled DoFs selection

In this last part we show another commonly used feature: the selection of the degrees of freedom controlled by tasks. It is particularly important to understand how to use this feature in order to combine tasks and produce more complex behaviors.

The basic idea is that some degrees of freedom of a body can be controlled with one specific behavior while the other degrees of freedom follow other behaviors. In robocop terminology, we want to combine two or more tasks to implement the behavior of a given body.

We define a new task used to control the end effector in velocity:

// just after previous lines of MARK1


// add a body velocity task on end effector
auto& ee_vel_task = controller.add_task<robocop::qp::kt::BodyVelocityTask>(
    "end_effector_velocity_task", end_effector,
    robocop::ReferenceBody{world.world()});

And here is the final state machine code (added just after code of the preceding section):

case 4: {
  auto task_error_norm = (ee_pos_task_target.input() -
                          ee_pos_task.feedback_state())
                              .to_compact_representation()
                              .norm();
  if (task_error_norm < 0.02) {
      state = 5;
      // reconfigure the position task to hold the position on all
      // DoFs except Y
      ee_pos_task_target =
          model.get_body_position(end_effector.name());
      ee_pos_task.selection_matrix().y() = 0;
      ee_pos_task_target.reset_interpolator();

      // control the Y direction in world in velocity
      ee_vel_task.enable();
      ee_vel_task.target()->set_zero();
      ee_vel_task.target()->linear().y() = -0.1_mps;
      ee_vel_task.selection_matrix().clear_all();
      ee_vel_task.selection_matrix().y() = 1;
      init_time = std::chrono::steady_clock::now();
  }
} break;
case 5:
  // wait a bit to stabilize
  if (std::chrono::duration_cast<std::chrono::seconds>(
          std::chrono::steady_clock::now() - init_time) >
      10s) {
      state = 6;
      ee_vel_task.disable();
      ee_pos_task_target =
          model.get_body_position(end_effector.name());
      ee_pos_task.selection_matrix().set_all();
      ee_pos_task_target.reset_interpolator();
  }
  break;

While in state 4 we check that the position target has been reached (i.e. ee_pos_task_target.input() - ee_pos_task.feedback_state() is under a threshold). Note that we use to_compact_representation(), which transforms a SpatialPosition into its equivalent 6-DoF vector representation. This is a better representation to compute the norm of the position error: otherwise the norm would be computed on a vector AND a 3x3 rotation matrix, which leads to hard to interpret error values for rotations.

Once state 4 is completed, we first change the position task:

  • as usual we reset the interpolator.
  • we set the target spatial position to the current spatial position, which simply means “do not move”.
  • we also deactivate the control of the linear translation along the Y axis (in the reference frame) by modifying the selection matrix (ee_pos_task.selection_matrix().y() = 0). By default all DoFs of a task are controlled, so all values of the selection matrix are 1.

Then we enable the velocity task, but only for linear translation along Y axis:

  • all target velocities are set to zero, except the one for the Y axis which is set to -0.1 meter per second.
  • we unselect all components controlled by the task (ee_vel_task.selection_matrix().clear_all()) before selecting again the Y linear axis (ee_vel_task.selection_matrix().y() = 1).

We start counting time and after 10 seconds we deactivate the velocity task (ee_vel_task.disable()), ask the position task to control again all degrees of freedom (ee_pos_task.selection_matrix().set_all()) and ensure the robot will not move (the target equals the current position).

After building and launching you can see the result.

During state 5 the robot moves only along the Y axis of the world frame for 10 seconds, while its orientation and its position along the X and Z axes do not change.

By playing with selection matrices we can thus combine behaviors in a nice way.

Conclusion

That’s it for this tutorial, you can access the final code of this example here.

There is of course much more to learn. You can ask for new tutorials on more specific topics, but the fastest way to progress is to read the other tutorials and have a look at the source code of the robocop packages.