1 Introduction
Over the last decade, there has been growing interest in the robotics community in developing autonomous humanoid robots. Unlike laboratory settings, everyday environments are highly dynamic and unstructured. Articulated objects like doors, drawers, valves, and tools are multi-link rigid body systems whose parts move relative to one another. Articulation models describe the nature of the joint between two object parts. Hence, for a humanoid robot to operate autonomously in dynamic environments, it has to learn such articulation models. This paper contributes a method for learning articulation models and estimating the topology of articulated objects.
Doors are among the most commonly encountered articulated objects in robotic applications like rescue scenarios, elderly care, and hospitality. The earliest investigations tackling the door-opening problem were carried out in [1] and [2]. The authors of [1] assume a known door model and leverage the combined motion of the manipulator and the autonomous mobile platform to open the door. In contrast, a model-free approach of controlled interactions along the path of least resistance is investigated in [2]. Later, the concept of equilibrium point control (EPC) was evaluated for the specific task of opening novel doors and drawers in [3]. In addition, the authors implemented an articulation model estimation algorithm using the end-effector trajectory, assuming a stable grasp and planar motion of the end-effector. The algorithm returns an estimate of the rotation axis location and the radius; a prismatic joint is estimated as a rotational joint with a large radius. More recently, a model-free adaptive velocity-force/torque controller for simultaneous compliant interaction and estimation of articulation models in objects like doors and drawers with one degree of freedom is proposed in [4], together with a proof of convergence of the articulation model estimates.
On the other hand, the interactive perception paradigm, introduced in [5] and [6], highlights the need to extract task-specific perceptual information by using the manipulation capabilities of a robot to interact with the environment. The authors employ optical-flow-based tracking of features on moving object parts and build a graph from which the articulation models are extracted: a rotational joint is identified by centers of rotation between two subgraphs, and a prismatic joint by shifting movements of subgraphs. They successfully demonstrated the use of interactive perception to extract the kinematic model of various tools, build a Denavit-Hartenberg (DH) parameter model, and then use it to operate a tool. In addition, a symbolic learning-based approach to manipulation is presented in [7], which uses relational representations of kinematic structures grounded through the perceptual and interaction capabilities of a robot. The authors successfully demonstrated learning and generalization of manipulation knowledge to previously unseen objects.
A probabilistic learning framework proposed in [8] uses noisy 3D pose observations of object parts. The authors implemented predefined parametric candidate joint models as well as a non-parametric Gaussian process model, to which observed 3D pose trajectories of object parts are fit in order to find the kinematic structure of kinematic trees. Later, a stereo camera system was used to obtain dense depth images as input [9]. Building on this work, a unified framework with several extensions, such as handling kinematic loops, and an extended set of experiments is presented in [10]. A particle-filter-based approach presented in [11] integrates the idea of interactive perception into a probabilistic framework using visual observations and manipulation feedback from the robot. The authors also presented best-action selection methods based on entropy and information gain that guide the robot toward the most useful interactions with the object, reducing the uncertainty of the articulation model estimates.
The concept of an exploration challenge for robots, where the task is to perform explorative actions and learn the structure of the environment, is presented in [12]. One of their main contributions is a probabilistic belief representation of articulation models, including properties like friction and joint limits. They successfully demonstrated that the behavior emerging from entropy-based exploration is more informative than explorative strategies based on heuristics. An online multi-level recursive estimation algorithm considering task-specific priors, based on the concept of interactive perception, is presented in [13]. The authors use a series of RGB-D images as input to estimate articulation models, including the joint configuration. They later extended their approach [14] by integrating information from vision, force-torque sensing, and proprioception; in addition to the kinematic articulation model, they also generated a dynamic model of the articulated object.
In this paper, we propose an algorithm to estimate the topology of a complex floating base articulated object by leveraging the momentum and interaction wrench information available while manipulating the object. Unlike previous approaches, our method is designed to handle floating base objects and attempts to identify the topology of an articulated system with any number of degrees of freedom. This paper is organized as follows. Section 2 introduces the notation and the problem statement. Section 3 presents our method and algorithm. Section 4 provides the details of the experiments. Section 5 contains the numerical results showing the articulation model estimation, followed by conclusions.
2 Background
Spatial vectors [15] are 6D vectors that have proven to be powerful tools in analyzing rigid body dynamics. Unlike the standard notation of spatial vectors, we use a modified notation: for spatial motion vectors we consider the linear part first, followed by the angular part, and for spatial force vectors we consider the forces first, followed by the moments.
2.1 Notation

- $\mathcal{A}$ denotes the inertial frame, $\mathcal{B}$ denotes a body-fixed frame, and $\mathcal{C}$ denotes a frame associated with the center of mass of a rigid body.
- Let $u$ and $v$ be two $n$-dimensional column vectors of real numbers, i.e. $u, v \in \mathbb{R}^n$; their inner product is denoted as $u^\top v$, with $\top$ the transpose operator.
- $SO(3)$ denotes the set of $3 \times 3$ orthogonal matrices with determinant equal to one.
- Given $u, v \in \mathbb{R}^3$, $S(u)$ denotes the skew-symmetric matrix-valued operator associated with the cross product in $\mathbb{R}^3$, such that $u \times v = S(u)\, v$.
- Given the vector $u = (u_x, u_y, u_z)$, we define the skew-symmetric matrix as
$S(u) = \begin{bmatrix} 0 & -u_z & u_y \\ u_z & 0 & -u_x \\ -u_y & u_x & 0 \end{bmatrix}$
- $\lVert \cdot \rVert$ denotes the Euclidean norm of a vector, $\lVert u \rVert = \sqrt{u^\top u}$.
- $s \in \mathbb{R}^{6 \times n}$ is the motion subspace matrix [16] of a joint that has $n$ degrees of freedom, and $q \in \mathbb{R}^n$ is a column vector that denotes the joint variable.
- ${}^{\mathcal{A}}o_{\mathcal{B}} \in \mathbb{R}^3$ denotes the origin of the frame $\mathcal{B}$, expressed in the inertial frame; ${}^{\mathcal{A}}R_{\mathcal{B}} \in SO(3)$ is the rotation matrix that transforms a 3D vector expressed with the orientation of $\mathcal{B}$ into a 3D vector expressed in frame $\mathcal{A}$.
- ${}^{\mathcal{A}}p_{\mathcal{B}}$ denotes the 3D pose of a rigid body with respect to the inertial frame $\mathcal{A}$, where the orientation of the rigid body is expressed as a quaternion.
- $\omega \in \mathbb{R}^3$ denotes the angular velocity of a rigid body, expressed in the body frame $\mathcal{B}$.
- $v \in \mathbb{R}^6$ denotes the twist of a rigid body, expressed in the body frame $\mathcal{B}$.
- $f \in \mathbb{R}^6$ denotes an external wrench exerted on the body, expressed in the body frame $\mathcal{B}$.
- g denotes the gravitational force vector.
- $M \in \mathbb{R}^{6 \times 6}$ denotes the spatial inertia, expressed in the body frame $\mathcal{B}$.
- $m$ denotes the mass of a rigid body.
- $c \in \mathbb{R}^3$ denotes the center of mass of a rigid body, expressed in the body frame.
- $I \in \mathbb{R}^{3 \times 3}$ denotes the 3D rotational inertia matrix of a rigid body, expressed with the orientation of the body frame and with respect to the origin of the body frame.
- $I_c$ denotes the 3D rotational inertia matrix of a rigid body with respect to the center of mass of the body, where $I = I_c + m\, S(c)\, S(c)^\top$.
- $h \in \mathbb{R}^6$ denotes the spatial momentum of a rigid body with respect to the body frame.
- ${}^{\mathcal{A}}X_{\mathcal{B}}$ denotes the spatial transformation from frame $\mathcal{B}$ to the inertial frame $\mathcal{A}$.
- ${}^{\mathcal{A}}H_{\mathcal{B}}$ denotes the homogeneous transformation from frame $\mathcal{B}$ to the inertial frame $\mathcal{A}$.
- The operator $H(\cdot)$ takes a 3D pose and returns the homogeneous transformation matrix.
- The operator $X(\cdot)$ takes a homogeneous transformation as input and returns a spatial transformation.
- The operator $\mathrm{pos}(\cdot)$ takes a homogeneous transformation matrix as input and returns the position.
- The operator $\mathrm{rot}(\cdot)$ takes a homogeneous transformation matrix as input and returns the rotation matrix.
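The notation above can be made concrete with a small numerical sketch. The following Python snippet is illustrative and not from the paper; `skew` and `motion_transform` are hypothetical helper names. It implements the skew-symmetric operator and the 6x6 spatial motion transform corresponding to a homogeneous transformation, with the linear part of the twist first, as in the modified notation:

```python
import numpy as np

def skew(u):
    """Skew-symmetric matrix S(u) such that S(u) @ v == np.cross(u, v)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def motion_transform(H):
    """6x6 spatial motion transform A_X_B built from a 4x4 homogeneous
    transformation A_H_B, for twists with the linear part first:
    v_A = [R v_B + S(o) R w_B ; R w_B]."""
    R, o = H[:3, :3], H[:3, 3]
    X = np.zeros((6, 6))
    X[:3, :3] = R
    X[:3, 3:] = skew(o) @ R
    X[3:, 3:] = R
    return X
```

For the identity homogeneous transformation this reduces, as expected, to the 6x6 identity.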
2.2 Problem Statement
Consider a floating base articulated object as shown in Fig. 1 with rigid bodies called links. The links are connected to one another by one-degree-of-freedom articulation models. We assume the simplest articulation models: either a revolute joint model or a prismatic joint model. We define the set of joint indices $\mathcal{J}$ and the set of articulation models $\mathcal{M} = \{\text{revolute}, \text{prismatic}\}$. Now, the topology of the articulated object is represented by the set $u$, whose elements are pairs of elements from the sets $\mathcal{J}$ and $\mathcal{M}$, i.e.

$u = \{(j, m) \mid j \in \mathcal{J},\ m \in \mathcal{M}\} \quad (1)$
The articulated object is assumed to have a serial chain kinematic structure. An anthropomorphic robot with two arms manipulates the object by holding the terminal links, which results in interaction wrenches at the arms of the robot. The contacts between the terminal links and the arms of the robot are considered rigid. Now, the problem we are interested in is to leverage the kinematic evolution of the links and the interaction wrenches to estimate the set that represents the true articulation models present in the object.
3 Method
Consider a complex articulated object as shown in Fig. 1. The momentum of the $i$-th rigid body, expressed in its body frame, is given by

$h_i = M_i\, v_i \quad (2)$
In a kinematic tree structure, the twist of the $i$-th rigid body, expressed in its body frame, is given by

$v_i = {}^{i}X_{\lambda(i)}\, v_{\lambda(i)} + v_{J_i} \quad (3)$
where:
- ${}^{i}X_{\lambda(i)}$ is the spatial transformation from the parent link $\lambda(i)$ to the child link $i$;
- $v_{J_i}$ is the twist of the $i$-th joint, connecting link $i$ to its parent, expressed in the body frame of link $i$.
The twist of a joint, expressed in the child link body frame, is given by

$v_{J_i} = s_i\, \dot{q}_i \quad (4)$

Now, the joint twist depends on the nature of the articulation model present between the two links that are connected by the joint and can be written as

$s_i = \begin{cases} \begin{bmatrix} 0_{3\times1} \\ a_i \end{bmatrix} & \text{if } m_i = \text{revolute} \\[4pt] \begin{bmatrix} a_i \\ 0_{3\times1} \end{bmatrix} & \text{if } m_i = \text{prismatic} \end{cases} \quad (5)$

where $a_i$ denotes the joint axis.
Following the relations (4) and (5), we can express the momentum of the $i$-th rigid body in terms of the articulation model present between it and its parent link. In this way, we encode the articulation model information in the momentum of a rigid body:

$h_i = M_i \left( {}^{i}X_{\lambda(i)}\, v_{\lambda(i)} + s_i\, \dot{q}_i \right) \quad (6)$
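As an illustration of relations (3)-(6), a minimal Python sketch is given below, assuming 1-DoF joints and the linear-part-first twist convention of this paper; the function names and arguments (`motion_subspace`, `link_momentum`) are hypothetical:

```python
import numpy as np

def motion_subspace(model, axis):
    """6x1 motion subspace s_i (linear part first): a prismatic joint
    translates along the axis, a revolute joint rotates about it (Eq. 5)."""
    s = np.zeros((6, 1))
    if model == "prismatic":
        s[:3, 0] = axis
    elif model == "revolute":
        s[3:, 0] = axis
    else:
        raise ValueError(f"unknown articulation model: {model}")
    return s

def link_momentum(M_i, X_parent, v_parent, model, axis, qdot):
    """Momentum of link i under a candidate articulation model:
    h_i = M_i (X v_parent + s(model) qdot), cf. Eqs. (3)-(6)."""
    v_i = X_parent @ v_parent + motion_subspace(model, axis)[:, 0] * qdot
    return M_i @ v_i
```

Swapping the candidate `model` string changes which components of the joint twist are excited, which is exactly how the articulation model hypothesis enters the momentum.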
The net wrench acting on any $i$-th rigid body, expressed in the body frame, is the gravitational wrench given by

$f_i^{g} = M_i\, {}^{i}X_{\mathcal{A}}\, g \quad (7)$
In addition, the terminal links experience reaction wrenches $f_{h_1}$ and $f_{h_2}$ from the arms of the robot. So, the total net wrench acting on the articulated object is given by

$f = \sum_i {}^{\mathcal{A}}X_i^{*}\, f_i^{g} + {}^{\mathcal{A}}X_{h_1}^{*}\, f_{h_1} + {}^{\mathcal{A}}X_{h_2}^{*}\, f_{h_2} \quad (8)$

where ${}^{\mathcal{A}}X^{*}$ is the spatial transformation for force vectors with respect to the inertial frame $\mathcal{A}$. The total momentum of the articulated object is equal to the sum of its link momenta, given by

$h(u) = \sum_i {}^{\mathcal{A}}X_i^{*}\, h_i \quad (9)$

where $u$ represents the topology of the articulated object.
According to classical mechanics [17], the net wrench $f$ acting on a rigid body system is equal to the rate of change of its momentum expressed with respect to the inertial frame of reference:

$f = \frac{d}{dt}\, h \quad (10)$
Now, for $n_J$ joints, we will have $2^{n_J}$ candidate sets. The set $u^{*}$ that solves the following optimization represents the true topology of the articulated object:

$u^{*} = \operatorname*{arg\,min}_{u} \left\lVert f - \frac{d}{dt}\, h(u) \right\rVert \quad (11)$
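Equation (11) amounts to comparing, for each candidate topology, the measured net wrench with the time derivative of the total momentum computed under that hypothesis. A minimal sketch of this selection step, with hypothetical names and a finite-difference derivative in place of the analytic one, could look as follows (the per-hypothesis momentum trajectories are assumed to be computed elsewhere):

```python
import numpy as np

def hypothesis_error(f_net, h_traj, dt):
    """Residual of Eq. (10): net wrench minus the numerical rate of change
    of the total momentum, accumulated over the recorded trajectory.
    f_net, h_traj: (T, 6) arrays of wrench and momentum samples."""
    dh = np.gradient(h_traj, dt, axis=0)
    return np.linalg.norm(f_net - dh)

def estimate_topology(f_net, momentum_per_hypothesis, dt):
    """Return the candidate topology u* minimizing Eq. (11).
    momentum_per_hypothesis maps each candidate topology to the (T, 6)
    momentum trajectory computed under that hypothesis."""
    return min(momentum_per_hypothesis,
               key=lambda u: hypothesis_error(f_net, momentum_per_hypothesis[u], dt))
```

The hypothesis whose momentum trajectory best explains the measured wrench wins; an uninformative trajectory (no motion) leaves the candidates indistinguishable.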
4 Experiments
As a proof-of-concept experiment, we considered simple articulated objects as shown in Fig. 2, containing two links connected through either a revolute joint (Fig. 1(b)) or a prismatic joint (Fig. 1(a)). The motivation behind this experimental choice is that many real-life articulated objects like scissors, pliers, and drawers can be represented in this simple form. Accordingly, we modeled two objects in the Gazebo simulation environment using the Simulation Description Format (SDF). The revolute-model articulated object contains two links connected through a revolute joint, and the prismatic-model articulated object contains two links connected through a prismatic joint. The joints are designed with a damping value of and a static friction value of .
We envision an experimental scenario where the humanoid robot iCub [18] [19] will hold the articulated object, as shown in Fig. 3, and perform exploratory actions to estimate and learn the topology of the object.
The black cylindrical elements are the handles of the terminal links. They are designed as virtual links without significant mass or inertia, so they do not contribute to the system dynamics. The yellow rectangular elements are the object links, which are connected to the handles through fixed joints.
Several real-world articulated objects are passive and do not contain any sensors providing information about the motion of the links or the wrenches acting at the terminal links. A vast amount of research has been carried out on tracking rigid bodies using markers, features, or depth information, yet obtaining robust 3D pose values of rigid bodies is still an open challenge in the field of computer vision. As visual perception is not the main goal of this work, we acquire the pose values directly from the simulation environment using a plugin. Also, we assume full knowledge of the link inertial parameters, i.e., mass, inertia, and center of mass. In the case of the iCub robot, external wrenches acting at the hands are estimated using the techniques developed for whole-body control [20]. So, under the assumption of rigid contacts between the terminal links of the articulated object and the arms of the robot, the wrenches acting on the terminal links of the object are simply the reaction forces from the arms of the robot.
In this proof of concept, we primarily want to highlight the articulated motion estimation approach. Operating on an articulated object with a humanoid robot without its true object model poses quite a challenge for the control aspects of the experiment. So, we did not use the iCub robot to manipulate the articulated object. Instead, we embedded a simulated 6-axis force-torque sensor plugin [21] at the handles to measure the external wrenches acting on the object's terminal links. Furthermore, one of the handles is anchored to the world in the Gazebo simulation through a fixed joint, which also anchors the object link attached to that handle. The other link is free to move, and we apply an external sinusoidal exploration wrench of random frequency and amplitude, mimicking the exploratory actions a robot performs while manipulating an object without being certain of its articulation models.
5 Results
The range of motion for the prismatic joint is set to and, for the revolute joint, . The amplitude range of the exploration wrench is and the frequency range is . This choice of ranges for the random sinusoidal exploration wrench is motivated by the motor babbling behavior a robotic end-effector performs while manipulating an articulated object. The exploration wrench is applied for a duration of and, while the object is moving, we record the simulation time, the links' 3D pose values, and the wrench values acting on the terminal links. Currently, our articulation model estimation algorithm (Algorithm 1) is offline, and the recorded trial data is passed as input.
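The random sinusoidal exploration wrench described above can be sketched as follows; the function name and the amplitude and frequency ranges shown here are illustrative placeholders, not the values used in the experiments:

```python
import numpy as np

def exploration_wrench(t, rng, amp_range=(1.0, 5.0), freq_range=(0.5, 2.0)):
    """Sinusoidal exploration wrench with a random amplitude and frequency
    per wrench axis, mimicking motor babbling. t is an array of time
    samples; returns a (len(t), 6) array of wrench samples."""
    amp = rng.uniform(*amp_range, size=6)
    freq = rng.uniform(*freq_range, size=6)
    return amp * np.sin(2.0 * np.pi * freq[:, None] * t).T
```

Drawing fresh amplitudes and frequencies per trial gives the kind of varied excitation needed to distinguish the joint hypotheses, as long as some component acts along the joint's free direction.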
In our modeling, the net wrench acting on the articulated object is given by

$f = \sum_{i=1}^{2} {}^{\mathcal{A}}X_i^{*}\, f_i^{g} + \sum_{k=1}^{2} {}^{\mathcal{A}}X_{h_k}^{*}\, f_{h_k} \quad (12)$
As our simplified model contains only one joint, we have two sets that represent the possible topologies of the articulated object, i.e.

$u_r = \{(1, \text{revolute})\}, \qquad u_p = \{(1, \text{prismatic})\} \quad (13)$
For each trial, we compute the following two model hypothesis error values for each of the objects:
- Revolute Model Hypothesis Error, the value that corresponds to the mismatch between the observed motion and the revolute model hypothesis, given by
$e_r = \left\lVert f - \frac{d}{dt}\, h(u_r) \right\rVert \quad (14)$
- Prismatic Model Hypothesis Error, the value that corresponds to the mismatch between the observed motion and the prismatic model hypothesis, given by
$e_p = \left\lVert f - \frac{d}{dt}\, h(u_p) \right\rVert \quad (15)$
The true topology of the articulated object corresponds to the smallest model hypothesis error value. We ran several trials with random exploration wrenches on the two articulated objects. As our algorithm depends on kinematic evolution data, any wrench applied in a constrained direction produces no motion and hence uninformative hypothesis error values. The model hypothesis error values of the trials in which the exploration wrench acted in the motion direction of the joint are shown in Fig. 4 and Fig. 5. When manipulating the articulated object with a revolute joint, the revolute model hypothesis error is smaller than the prismatic model hypothesis error, as shown in Fig. 4. Similarly, when manipulating the articulated object with a prismatic joint, the prismatic model hypothesis error is smaller than the revolute model hypothesis error, as shown in Fig. 5.
6 Conclusions
In this paper we presented a general algorithm to estimate the topology of a multiple-degrees-of-freedom articulated object. To demonstrate the algorithm with a simple case study, we made certain assumptions about the availability of rigid body poses in simulation and of the inertial parameters. A fixed-timestep simulator like ODE adds considerable numerical error for the stiff mechanical system used in our experiments. Even in such a noisy environment, our proof of concept shows that we can estimate the articulation models using the kinematic evolution and interaction wrench information available during manipulation.
7 Acknowledgements
This work is supported by the PACE project, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642961.
References
 [1] Keiji Nagatani and SI Yuta. An experiment on opening-door behavior by an autonomous mobile robot with a manipulator. In Intelligent Robots and Systems 95, 'Human Robot Interaction and Cooperative Robots', Proceedings, 1995 IEEE/RSJ International Conference on, volume 2, pages 45–50. IEEE, 1995.
 [2] Günter Niemeyer and JJE Slotine. A simple strategy for opening an unknown door. In Robotics and Automation, 1997. Proceedings., 1997 IEEE International Conference on, volume 2, pages 1448–1453. IEEE, 1997.
 [3] Advait Jain and Charles C Kemp. Pulling open novel doors and drawers with equilibrium point control. In Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on, pages 498–505. IEEE, 2009.
 [4] Yiannis Karayiannidis, Christian Smith, Francisco Eli Vina Barrientos, Petter Ögren, and Danica Kragic. An adaptive control approach for opening doors and drawers under uncertainties. IEEE Transactions on Robotics, 32(1):161–175, 2016.
 [5] Dov Katz and Oliver Brock. Interactive perception: Closing the gap between action and perception. In ICRA 2007 Workshop: From features to actions - Unifying perspectives in computational and robot vision, 2007.
 [6] Dov Katz and Oliver Brock. Manipulating articulated objects with interactive perception. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages 272–277. IEEE, 2008.
 [7] Dov Katz, Yuri Pyuro, and Oliver Brock. Learning to manipulate articulated objects in unstructured environments using a grounded relational representation. Robotics: Science and Systems IV, page 254, 2009.
 [8] Jürgen Sturm, Vijay Pradeep, Cyrill Stachniss, Christian Plagemann, Kurt Konolige, and Wolfram Burgard. Learning kinematic models for articulated objects. In IJCAI, pages 1851–1856, 2009.
 [9] Jürgen Sturm, Kurt Konolige, Cyrill Stachniss, and Wolfram Burgard. 3d pose estimation, tracking and model learning of articulated objects from dense depth video using projected texture stereo. In RGB-D: Advanced Reasoning with Depth Cameras Workshop, RSS, 2010.

 [10] Jürgen Sturm, Cyrill Stachniss, and Wolfram Burgard. A probabilistic framework for learning kinematic models of articulated objects. Journal of Artificial Intelligence Research, 41:477–526, 2011.
 [11] Karol Hausman, Scott Niekum, Sarah Osentoski, and Gaurav S Sukhatme. Active articulation model estimation through interactive perception. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 3305–3312. IEEE, 2015.
 [12] Stefan Otte, Johannes Kulick, Marc Toussaint, and Oliver Brock. Entropy-based strategies for physical exploration of the environment's degrees of freedom. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 615–622. IEEE, 2014.
 [13] Roberto Martín-Martín and Oliver Brock. Online interactive perception of articulated objects with multi-level recursive estimation based on task-specific priors. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 2494–2501. IEEE, 2014.
 [14] Roberto Martín-Martín and Oliver Brock. Building kinematic and dynamic models of articulated objects with multimodal interactive perception. In AAAI Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents. AAAI, 2017.
 [15] Roy Featherstone. Rigid body dynamics algorithms. Springer, 2014.
 [16] Roy Featherstone. Rigid body dynamics algorithms, chapter 3, pages 49–50. Springer, 2014.
 [17] Roy Featherstone. Rigid body dynamics algorithms, chapter 2, pages 35–36. Springer, 2014.
 [18] Giorgio Metta, Lorenzo Natale, Francesco Nori, Giulio Sandini, David Vernon, Luciano Fadiga, Claes Von Hofsten, Kerstin Rosander, Manuel Lopes, José Santos-Victor, et al. The icub humanoid robot: An open-systems platform for research in cognitive development. Neural Networks, 23(8-9):1125–1134, 2010.
 [19] Lorenzo Natale, Chiara Bartolozzi, Daniele Pucci, Agnieszka Wykowska, and Giorgio Metta. icub: The not-yet-finished story of building a robot child. Science Robotics, 2(13), 2017.
 [20] Francesco Nori, Silvio Traversaro, Jorhabib Eljaik, Francesco Romano, Andrea Del Prete, and Daniele Pucci. iCub whole-body control through force regulation on rigid non-coplanar contacts. Frontiers in Robotics and AI, 2:6, 2015.
 [21] Enrico Mingo Hoffman, Silvio Traversaro, Alessio Rocchi, Mirko Ferrati, Alessandro Settimi, Francesco Romano, Lorenzo Natale, Antonio Bicchi, Francesco Nori, and Nikos G Tsagarakis. YARP-based plugins for the Gazebo simulator. In International Workshop on Modelling and Simulation for Autonomous Systems, pages 333–346. Springer, 2014.