Application development
Application development in RoboCoP follows a set of rules you have to understand in order to know what needs to be done. This tutorial gives general guidelines for application development. When we talk about application development we mean “complex application development”, not just small library usage examples.
As a story arc we take as reference the package robocop-sophia-app, a real demo implementing a complex scenario using a composite robot with arms and head and a sophisticated QP-based controller. You can deploy it in your workspace to follow what this tutorial is describing.
World description
As mentioned in the methodology overview, describing the world is the first mandatory step. For complex applications it can be a really tedious task if everything has to be described. The good news is that once you have correctly described parts of your system it is easy to compose and reuse them across different applications.
At the beginning of application development the only thing you should care about is the description of your robot. You can have a look at share/resource/robocop-sophia-app/common/world.yaml to get the complete final description of the robot, but consider that we started with a rather simple description:
models:
#robot description (without camera)
- include: robocop-bazar-description/models/fixed_bazar.yaml
namespace: bazar
- path: robocop-sophia-app/common/models/hankamp_tool.urdf
namespace: bazar_left
joint:
parent: bazar_left_tool_adapter_tool_side
- path: robocop-sophia-app/common/models/hankamp_tool.urdf
namespace: bazar_right
joint:
parent: bazar_right_tool_adapter_tool_side
Since we started from scratch, we first had to describe the BAZAR robot from its individual elements (arms, head, sensors, etc.), which took most of the description time. From these descriptions we wrote driver libraries for the individual elements and finally, once the base robot description had been composed from these elements, we created a driver group that manages all the individual drivers at once, to get an easy-to-use robot driver. Please refer to the robocop-bazar-description and robocop-bazar-driver packages for examples. The precise process is described in the writing descriptions and creating drivers tutorials.
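The driver-group idea can be illustrated with a plain composite pattern. Note that the interface below is purely hypothetical, for illustration only; it is not the actual robocop-bazar-driver API. A group forwards connect/read/write calls to every driver it owns, so the whole robot behaves as one driver.

```cpp
#include <memory>
#include <vector>

// Hypothetical minimal driver interface, for illustration only.
struct Driver {
    virtual ~Driver() = default;
    virtual bool connect() = 0;
    virtual bool read() = 0;
    virtual bool write() = 0;
};

// A driver group manages several drivers as one: each operation is
// forwarded to every member and succeeds only if all members succeed.
class DriverGroup : public Driver {
public:
    void add(std::unique_ptr<Driver> d) {
        drivers_.push_back(std::move(d));
    }
    bool connect() override {
        return for_all([](Driver& d) { return d.connect(); });
    }
    bool read() override {
        return for_all([](Driver& d) { return d.read(); });
    }
    bool write() override {
        return for_all([](Driver& d) { return d.write(); });
    }

private:
    template <typename F>
    bool for_all(F&& op) {
        bool ok = true;
        for (auto& d : drivers_) {
            ok = op(*d) and ok; // still call every driver, remember failures
        }
        return ok;
    }
    std::vector<std::unique_ptr<Driver>> drivers_;
};
```

This is only a sketch of the composition principle; the real driver group also has to deal with synchronization and per-driver configuration.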
Then, by iterations, we added elements that complement the robot description, typically application-specific tooling when needed, described using models that are probably local to the application package. In the example below we instantiate a tool description twice and attach it to our robot end effectors (XXX_tool_adapter_tool_side):
models:
#robot description (without camera)
...
- path: robocop-sophia-app/common/models/hankamp_tool.urdf
namespace: bazar_left
joint:
parent: bazar_left_tool_adapter_tool_side
- path: robocop-sophia-app/common/models/hankamp_tool.urdf
namespace: bazar_right
joint:
parent: bazar_right_tool_adapter_tool_side
Then we iteratively added model elements to the world to manage environment objects (workbench, manipulated object), to deal with collision avoidance or visualization of known objects, but also to represent virtual or concrete elements that ease controller description. Here is a simplified view of what has been added:
models:
...
#environment
- path: robocop-sophia-app/common/models/workbench.urdf
joint:
position:
linear: [0.7, 0, 0.5]
#virtual links for tasks description
...
- path: robocop-core/point.urdf
namespace: pickup
- path: robocop-core/point.urdf
namespace: drop
...
The workbench object is used for visualization and collision avoidance while pickup_point and drop_point are virtual links used for task description.
The RoboCoP core library offers default URDF files for describing those virtual links: point.urdf and body.urdf. They are usually namespaced to obtain meaningful names.
Finally, if we want to adapt the camera used for the same setup, we can customize the application description at the very last moment. For instance in the file share/resource/robocop-sophia-app/hankamp_debug/config.yaml we attach the kinect2 camera to the pan-tilt head of the robot (connecting the camera to the body bazar_ptu_mount_link):
models:
- include: robocop-sophia-app/common/world.yaml
- path: robocop-camera-description/models/kinect2.urdf
namespace: bazar
joint:
parent: bazar_ptu_mount_link
...
Note that virtual and environment elements do not necessarily have to be part of the description: they can be added (and even removed) directly using code, so in the end this is a design decision.
That’s it for description, there is not much more to say. Simply remember to start simple and to grow your application’s complexity step by step.
Processors configuration, code generation and processors instantiation
During description you should also iteratively add the processors needed for your application. We advise beginning, even without any driver, with a “virtual” application either based on a simulator or, even simpler, using a visualization tool like rviz. The robocop-sophia-app package provides a “debug” application that uses rviz to first debug the controller. Its configuration file share/resource/robocop-sophia-app/hankamp_debug/config.yaml looks like:
...
processors:
robot_state_publisher:
type: robocop-ros2-utils/processors/robot_state_async_publisher
options:
name: bazar
topic: robot_description
period: 0.1
model:
type: robocop-model-ktm/processors/model-ktm
options:
implementation: rbdyn
input: state
forward_kinematics: true
forward_velocity: true
whole_body_controller:
type: robocop-qp-controller/processors/kinematic-tree-qp-controller
options:
joint_group: bazar_upper_joints
velocity_output: true
force_output: false
include_bias_force_in_command: false
solver: qld
hierarchy: strict
...
These are the three base elements you will almost always need to start implementing your application:
- a model library used to compute forward kinematics and update the robot state from the joint state. The model library is used everywhere so you can consider always using it. Simply select one implementation (rbdyn or pinocchio are available for now).
- a simulator, or in this case the robot_state_async_publisher that generates the ROS URDF description and related TF topics. This allows you to visualize, using RViz, how the robot is behaving. Visualization is simpler than simulation but you will have to write more code, for instance to set the robot state from its command or to generate fake forces. This way you have better control over the scenario than with a simulator, but of course with less realism.
- a controller. In the application we use the kinematic-tree-qp-controller provided by the robocop-qp-controller package.
All processors have various specific options that we cannot detail here; you should refer to each individual package’s documentation and examples to understand their meaning and effects.
Once you get a minimal set of processors you have to:
- import the corresponding description and processor packages (see the robocop-sophia-app CMakeLists.txt file):
#since the application publishes the robot state and description it is also a ROS package
check_PID_Environment(TOOL ros2_build)
import_ROS2(PACKAGES rclcpp)
# common
PID_Dependency(robocop-core VERSION 1.0)
PID_Dependency(yaml-cpp FROM VERSION 0.6.2)
# description
PID_Dependency(robocop-bazar-description VERSION 1.0)
# interfacing with RVIZ
PID_Dependency(robocop-ros2-utils VERSION 1.0)
# model
PID_Dependency(robocop-model-rbdyn VERSION 1.0)
# controller
PID_Dependency(robocop-qp-controller VERSION 1.0)
An important thing to do first is to configure your PID workspace to use ROS2:
#first source the ROS2 install (or overlayed workspace) in the terminal !!!
source /opt/ros/humble/setup.bash
#then configure the PID workspace to use ROS2 (version of ROS2 must match !!)
pid workspace profiles cmd=add env=ros2_build[version=humble]
The process can take time.
- and use the libraries they contain to generate a model that contains the data needed by processors. For instance in apps/CMakeLists.txt we define an application named hankamp-debug:
PID_Component(hankamp-debug
CXX_STANDARD 17
WARNING_LEVEL ALL
RUNTIME_RESOURCES
robocop-sophia-app
DEPEND
robocop/core
robocop/bazar-description
robocop/ros2-utils
robocop/model-rbdyn
robocop/kinematic-tree-qp-controller
)
ROS2_Component(hankamp-debug)
Robocop_Generate(
hankamp-debug
robocop-sophia-app/hankamp_debug/config.yaml
)
The call to the ROS2_Component CMake function is specific to ROS2 usage and declares the component as an executable container of ROS2 nodes (mandatory here because the robot state publisher runs inside the application).
- write a basic source code for your application, something like (see the file hankamp_debug.cpp):
#include <rclcpp/rclcpp.hpp>
#include "robocop/world.h"
int main(int argc, char* argv[]) {
// initialize the ROS2 system
rclcpp::init(argc, argv);
// instantiate the world
robocop::World world;
// TODO: instantiate processors
rclcpp::shutdown();
return 0;
}
Here the specific part is bound to ROS2, which needs to be initialized.
Your application’s world is ready to be generated:
pid build
The RoboCoP code generator will report errors in the description or in the processors’ configurations, allowing you to solve application configuration issues early. If code generation succeeds, you can start writing the processors instantiation code.
The first part of the main() function is purely declarative: the code simply instantiates all the processors used. For the hankamp_debug application, the code could look like:
#include <rclcpp/rclcpp.hpp>
#include "robocop/world.h"
//includes for the 3 processors
#include <robocop/utils/ros2_utils.h>
#include <robocop/model/rbdyn.h>
#include <robocop/controllers/kinematic-tree-qp-controller/qp.h>
int main(int argc, char* argv[]) {
// initialize the ROS2 system
rclcpp::init(argc, argv);
using namespace phyq::literals;
using namespace std::chrono_literals;
constexpr auto time_step = phyq::Period{5ms};
// Declarative section: instantiating world and processors
// create robocop processors and configure them using generated
// configuration
robocop::World world;
auto model = robocop::ModelKTM{world, "model"};
auto controller = robocop::qp::KinematicTreeController{
world, model, time_step, "whole_body_controller"};
auto state_publisher = robocop::ros2::RobotStateAsyncPublisher(
world, model, "robot_state_publisher");
...
// application behavior
return 0;
}
The headers for the processors are included, then the processors are instantiated at the very beginning of the main function: this is the application instantiation step. Notice that the last argument passed to each processor’s constructor is the name of the corresponding entry in the application configuration file. Across the development iterations the number of processors you instantiate will grow: for instance you will add drivers for robots and sensors, observer processors to compute or estimate the state of the system, etc.
Runtime behavior
After instantiating the world(s) and processors comes the code defining the application behavior. There is no precise guideline for writing the behavior code right now, but it should generally follow this basic rule: a periodic controller loop implements a pipeline that reads the state and perceptions, decides which tasks to execute depending on the applicative context, computes the control, and applies the command to the robot.
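This read → decide → compute → apply pipeline boils down to a fixed-rate loop. Here is a generic skeleton using only the C++ standard library (the real application uses pid::Period instead); the pipeline steps are placeholder comments and `max_cycles` stands in for a real stop condition:

```cpp
#include <chrono>
#include <thread>

// Generic fixed-rate control loop skeleton. Returns the number of
// executed cycles; the four pipeline steps are placeholders.
inline int run_control_loop(std::chrono::milliseconds time_step,
                            int max_cycles) {
    using clock = std::chrono::steady_clock;
    int cycles = 0;
    auto next_wakeup = clock::now();
    while (cycles < max_cycles) {
        // 1. read state and perceptions (driver.read() in a real app)
        // 2. decide which tasks to enable from the applicative context
        // 3. compute the control (controller.compute())
        // 4. apply the command to the robot (driver.write())
        ++cycles;
        // sleep until the next period to keep a fixed control rate;
        // advancing next_wakeup avoids drift accumulating over cycles
        next_wakeup += time_step;
        std::this_thread::sleep_until(next_wakeup);
    }
    return cycles;
}
```

Using sleep_until on an absolute deadline, rather than sleep_for on a relative duration, keeps the average period stable even when one cycle takes longer than expected.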
In the hankamp_debug application the simplified main controller loop looks like:
...
pid::Period periodic_loop{time_step.as_std_duration()};
(void)state_publisher.connect();
auto& cmd = controller.command().get<robocop::JointVelocity>();
while (not end) {
// update the cmd values
controller.command().read_from_world<robocop::JointVelocity>();
// integrate previous command to get current state
// real application code should call driver.read() instead
controller.state().update(
[&](robocop::JointPosition& state) { state += cmd * time_step; });
// controller part
model.forward_kinematics();
(void)state_publisher.write();
...
// decide what to do from application execution context
// configure tasks and constraints of controller
controller.compute();//update command on joints
periodic_loop.sleep();
}
while the simplified real application code looks like:
...
(void)driver.connect();
(void)state_publisher.connect();
while (not end){
if (not driver.sync_one_arm() or not driver.read()) {
break;
}
// controller part
model.forward_kinematics();
(void)state_publisher.write();
...
// decide what to do from application execution context
// configure tasks and constraints of controller
controller.compute();//update command on joints
// applying commands and publishing state
if (not driver.write()) {
break;
}
}
About threading
The main difference in the real application code is that a robot driver is used to read the state and write the command, but fundamentally the behavior is the same. From a technical point of view there are simply additional driver threads that synchronize with the controller thread.
One important aspect to keep in mind is the underlying threading model:
- each world can be accessed only from a single thread. Each of these threads is a controller thread as presented before, so each controller acts on its own dedicated world. Multiple worlds may be needed when one wants to implement cascading controller architectures with multiple threads (running at different frequencies). There is no specific tool in RoboCoP for controller thread creation, you can use any kind of thread model (basic std::thread, ROS2 multithreaded executors, pid::loops, etc.). When there is only one controller thread you can directly put it in the main thread, like in the robocop-sophia-app package.
- all other threads (e.g. driver threads) must use their own buffered data as explained in the create driver tutorial. The driver thread acts on a buffer built from world information extracted in the controller thread.
- a driver’s read()/write() functions must be called from one controller thread only. More specifically, only the controller thread corresponding to the world that defines the joints used to configure the driver (i.e. the joints controlled by the driver) can call these functions. If multiple controller threads need the same data updated from a driver (e.g. the joint state) then the controller thread interacting with the driver thread must do the copy: it takes the data from its world and copies it to the other worlds that need it.
- to implement synchronization between threads RoboCoP provides two classes, AsyncWorldReader and AsyncWorldWriter, that define a base pattern for thread synchronization based on buffer copies. An example of AsyncWorldReader is the ControlModeManager in the create driver tutorial.
To get a better understanding on how to deal with multiple worlds, please read this explanation page.
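At its core, this buffer-copy synchronization is a mutex-protected snapshot exchanged between the controller thread and a driver thread. The sketch below illustrates only that generic pattern with the standard library; it does not use, and makes no claim about, the actual AsyncWorldReader/AsyncWorldWriter API:

```cpp
#include <mutex>

// Minimal snapshot buffer: the controller thread publishes a copy of
// some world data, the driver thread takes the latest snapshot.
// Neither thread ever touches the other's data directly.
template <typename Data>
class SharedBuffer {
public:
    // Called from the controller thread: copy world data into the buffer.
    void publish(const Data& data) {
        std::scoped_lock lock{mutex_};
        buffer_ = data;
        fresh_ = true;
    }

    // Called from the driver thread: retrieve the latest snapshot.
    // Returns false if nothing new was published since the last take().
    bool take(Data& out) {
        std::scoped_lock lock{mutex_};
        if (not fresh_) {
            return false;
        }
        out = buffer_;
        fresh_ = false;
        return true;
    }

private:
    std::mutex mutex_;
    Data buffer_{};
    bool fresh_{false};
};
```

Each thread only ever holds the lock long enough to copy the snapshot, which keeps the controller loop's timing independent from the driver's I/O latency.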
Using the controller
The last thing to talk about is the way the controller is used to achieve the desired application goals. This consists in:
- Defining tasks, constraints and configurations of the controller. This should be done at initialization time, after the instantiation of processors.
- Enabling/disabling tasks, constraints and configurations depending on the runtime context. This is typically achieved using some kind of state machine in order to give the robot a desired behavior in an estimated situation.
Defining tasks and constraints
The definition is performed in a way similar to:
...
int main(int argc, char* argv[]) {
...
// Declarative section: instanciating world and processors
robocop::World world;
auto model = robocop::ModelKTM{world, "model"};
...
// defining all possible tasks and constraints
controller_.set_auto_enable(true); // all following constraints are enabled by default
// joint kinematic constraint
auto& kinematic_constraint =
controller_.add_constraint<robocop::qp::kt::JointKinematicConstraint>(
"kinematic_constraint", controller_.controlled_joints());
kinematic_constraint.parameters().min_position =
controller_.controlled_joints().limits().lower().get<JointPosition>();
kinematic_constraint.parameters().max_position =
controller_.controlled_joints().limits().upper().get<JointPosition>();
kinematic_constraint.parameters().max_velocity.set_constant(1);
kinematic_constraint.parameters().max_acceleration.set_constant(5);
// body limits constraints
auto& left_vel_limit =
controller_.add_constraint<robocop::BodyVelocityConstraint>(
"left_velocity_limits",
world.body("bazar_left_hankamp_tool_object_side"),
robocop::ReferenceBody{world.world()});
left_vel_limit.parameters().max_velocity.value() << 0.5, 0.5, 0.5, 1, 1, 1;
...
//tasks
controller_.set_auto_enable(false);
// arms position task
auto& jg = world.joint_group("bazar_arms");
arms_joint_position_task =
&controller_.add_task<robocop::JointPositionTask>("arms_joint_position_task", jg);
reflexxes = &arms_joint_position_task->target()
.interpolator().set<robocop::ReflexxesOTG<robocop::JointPosition>>(jg.dofs(), controller_.time_step());
reflexxes->max_first_derivative().set_constant(1.0);
reflexxes->max_second_derivative().set_constant(1.0);
reflexxes->target_first_derivative().set_constant(0.0);
arms_joint_pos_fb = &arms_joint_position_task->feedback_loop().set_algorithm<robocop::ProportionalFeedback>();
arms_joint_pos_fb->gain().resize(jg.dofs());
arms_joint_pos_fb->gain().set_constant(1);
...
// application behavior
...
return 0;
}
As a first remark, the set_auto_enable function controls whether the tasks or constraints subsequently added to the controller are enabled by default. For constraints, most of the time we want the controller to always take them into account, so all constraints are enabled by default. On the contrary, tasks are used or not depending on what needs to be done at runtime, so all tasks are disabled by default.
The following lines declare a joint group constraint:
auto& kinematic_constraint =
controller_.add_constraint<robocop::qp::kt::JointKinematicConstraint>("kinematic_constraint", controller_.controlled_joints());
kinematic_constraint.parameters().min_position = controller_.controlled_joints().limits().lower().get<JointPosition>();
- Add a constraint of type JointKinematicConstraint into the controller, with the unique name "kinematic_constraint". This constraint applies to all joints controlled by the controller (using the joint group returned by controller_.controlled_joints()) but it could have applied to a more limited set of joints.
- Return a reference to it so that the user can set its parameters using the parameters() accessor.
Definition of a body constraint is a bit different:
auto& left_vel_limit =
controller_.add_constraint<robocop::BodyVelocityConstraint>(
"left_velocity_limits",
world.body("bazar_left_hankamp_tool_object_side"),
robocop::ReferenceBody{world.world()});
left_vel_limit.parameters().max_velocity.value() << 0.5, 0.5, 0.5, 1, 1, 1;
For a body constraint, the target body (the one subject to the constraint) must be passed as an argument (world.body("bazar_left_hankamp_tool_object_side")) as well as the reference body (robocop::ReferenceBody{world.world()}) used to define the constraint data. In the example the velocity of the body "bazar_left_hankamp_tool_object_side" is limited to 0.5 m/s along the X, Y and Z axes of the world frame.
Tasks are declared mostly the same way. For instance, for a joint task:
auto& jg = world.joint_group("bazar_arms");
arms_joint_position_task =
&controller_.add_task<robocop::JointPositionTask>(
"arms_joint_position_task", jg);
This task is used to control the position of the robot arms’ joints. For construction, this follows the same logic as the joint-level constraint. We can define an interpolator, in this example the Reflexxes interpolator, which we can in turn configure the way we want:
reflexxes = &arms_joint_position_task->target().interpolator()
.set<robocop::ReflexxesOTG<robocop::JointPosition>>(jg.dofs(), controller_.time_step());
reflexxes->max_first_derivative().set_constant(1.0);
reflexxes->max_second_derivative().set_constant(1.0);
reflexxes->target_first_derivative().set_constant(0.0);
Because it is a loop with feedback we also set the algorithm used for the feedback loop control:
arms_joint_pos_fb = &arms_joint_position_task->feedback_loop().set_algorithm<robocop::ProportionalFeedback>();
arms_joint_pos_fb->gain().resize(jg.dofs());
An important point is to call the resize() function on the gains vector before setting the gain values, otherwise the memory of this vector is not allocated, which results in a program crash.
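The same pitfall exists with any dynamically-sized vector: writing elements before allocating them is undefined behavior. Here is the allocate-then-assign pattern illustrated with std::vector (an analogy, not the actual gain vector type):

```cpp
#include <cstddef>
#include <vector>

// Builds a gain vector of the requested size filled with `value`.
// The resize() call must come first: element access on an empty
// vector is undefined behavior, just like setting unallocated gains.
inline std::vector<double> make_gains(std::size_t dofs, double value) {
    std::vector<double> gains;
    gains.resize(dofs);  // allocate memory for every gain first...
    for (auto& g : gains) {
        g = value;       // ...then it is safe to assign each element
    }
    return gains;
}
```

With the real gains vector the logic is the same: resize to the joint group's number of DoFs, then set the values.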
Controlling tasks/constraints execution
The last part consists in controlling the execution of tasks at runtime.
Let’s take the following example code, coming from a state machine implementation:
sm_.add(
"goto_initial_pose",
[this] {//during execution of each cycle
arms_joint_pos_fb->gain().set_constant(10);
},
[this]() {//first time state is executed
arms_joint_position_task->enable();
arms_joint_position_task->target().reset_interpolator();
auto& arms_target = arms_joint_position_task->target();
arms_target->value() << 0, 0.25, 0, -2, 0, 0.7, 0, 0, 0.25, 0, -2, 0, 0.7, 0;
},
[this] {//last time state is executed
arms_joint_position_task->disable();
},
[this]() {//check if exitting state
return reflexxes->is_trajectory_completed();
},
"move_to_pickup");
This example shows the simplified code of the "goto_initial_pose" state. The first lambda is executed every controller cycle, the second and third ones are executed when the state is activated/deactivated, and the last function is used to check the state exit condition.
When the state is activated: the controller starts using the task once its enable() function is called. When the task no longer needs to be executed, its disable() function can be called. It is also a good idea, for tasks using stateful interpolators (this is true for the Reflexxes interpolator), to call their reset_interpolator() function to initialize them correctly.
Then the task target has to be set to the adequate value.
During execution of the state: we can perform various actions, for instance changing the feedback loop gains or even the target. The stop condition can use data coming from the task, for instance the feedback loop’s data (error, state) or specific functions provided by the interpolator (in this example the function is_trajectory_completed()).
When the state is deactivated: the controller stops using the task by calling its disable() function.
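This four-callback structure (per-cycle update, enter, exit, exit condition) can be reproduced with plain std::function members. The sketch below is a generic illustration of the pattern, not the actual sm_ state machine API:

```cpp
#include <functional>
#include <string>

// One state = four callbacks, mirroring the sm_.add() arguments:
// on_update runs every cycle, on_enter/on_exit run once when the
// state is activated/deactivated, exit_condition tells when to leave.
struct State {
    std::string name;
    std::function<void()> on_update;
    std::function<void()> on_enter;
    std::function<void()> on_exit;
    std::function<bool()> exit_condition;
};

// Runs a single state until its exit condition holds (or max_cycles
// is reached as a safety net) and returns the number of cycles run.
inline int run_state(const State& s, int max_cycles) {
    s.on_enter();
    int cycles = 0;
    while (cycles < max_cycles and not s.exit_condition()) {
        s.on_update();
        ++cycles;
    }
    s.on_exit();
    return cycles;
}
```

A full state machine would additionally chain states together (the "move_to_pickup" argument in the example above names the next state), but the per-state lifecycle is exactly this enter/update/exit sequence.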
That’s it for the basic principles of application development.