We are developing EICA (the Embodied Interactive Control Architecture) to realize a learning robot that can eventually exhibit human-like interactions as a result of autonomous learning (Fig.1). In this framework, the learner watches the external behavior of its partners and learns action and command models together with the communication protocol. Learning in EICA proceeds in three stages. First, a set of recurrent patterns (motifs) is discovered in the interaction dimensions by applying the Robust Singular Spectrum Transform (RSST) and then solving the resulting constrained motif discovery problem. Second, the hierarchy of dynamical systems is built using the Interaction Structure Learning algorithm. Third, and optionally, the robot starts to engage in human-robot interactions and refines its models with the interaction adaptation algorithm. Navigation behavior and listener behavior have been acquired with this architecture. We have also developed a method for evaluating the naturalness of human-robot interaction by interpreting signals from physiological sensors.
Fig.1: EICA (Embodied Interactive Control Architecture).
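As a rough illustration of the first stage only, the sketch below implements a basic Singular Spectrum Transform change score (RSST, as used in EICA, is a robust refinement of this basic idea) and a brute-force, distance-based stand-in for the constrained motif discovery step that restricts its search to high-change regions. All function names, parameters, and the test signal are illustrative assumptions, not part of EICA itself.

```python
import numpy as np

def sst_change_scores(x, w=10, k=3):
    """Basic Singular Spectrum Transform change score for a 1-D series.

    At each time t, build Hankel matrices of w-length windows drawn from
    the past and the future of t, and score how far the dominant future
    singular vector falls outside the span of the top-k past singular
    vectors. Scores lie in [0, 1]: 0 = unchanged dynamics, 1 = novel.
    """
    n = len(x)
    m = 2 * w                      # number of window columns per side
    scores = np.zeros(n)
    for t in range(w + m, n - w - m):
        past = np.column_stack([x[t - w - i:t - i] for i in range(1, m + 1)])
        future = np.column_stack([x[t + i:t + w + i] for i in range(1, m + 1)])
        Up, _, _ = np.linalg.svd(past, full_matrices=False)
        Uf, _, _ = np.linalg.svd(future, full_matrices=False)
        beta = Uf[:, 0]                          # dominant future direction
        proj = Up[:, :k] @ (Up[:, :k].T @ beta)  # its shadow in the past subspace
        scores[t] = 1.0 - np.linalg.norm(proj)
    return scores

def constrained_motif(x, scores, w=10, top=40):
    """Brute-force stand-in for constrained motif discovery: find the
    closest pair of w-length windows among the positions with the
    highest change scores, skipping trivially overlapping matches."""
    cands = [c for c in np.argsort(scores)[-top:] if c + w <= len(x)]
    best = (None, None, np.inf)
    for i, a in enumerate(cands):
        for b in cands[i + 1:]:
            if abs(a - b) < w:     # overlapping windows: trivial match
                continue
            d = np.linalg.norm(x[a:a + w] - x[b:b + w])
            if d < best[2]:
                best = (int(a), int(b), float(d))
    return best
```

In a full pipeline the candidate set would come from RSST scores over all interaction dimensions rather than from this plain SST score, but the structure of the computation is the same: locate likely change points first, then search for motifs only around them.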
 Yasser F. O. Mohammad, Toyoaki Nishida: Learning interaction protocols using Augmented Bayesian Networks applied to guided navigation. IROS 2010: 4119-4126.
 Yasser F. O. Mohammad, Toyoaki Nishida, Shogo Okada: Unsupervised simultaneous learning of gestures, actions and their associations for Human-Robot Interaction. IROS 2009: 2537-2544.
 Yasser F. O. Mohammad, Toyoaki Nishida: Mining Causal Relationships in Multidimensional Time Series. Smart Information and Knowledge Management 2010: 309-338.
 Yasser F. O. Mohammad, Toyoaki Nishida: Controlling gaze with an embodied interactive control architecture. Appl. Intell. 32(2): 148-163 (2010)
 Yasser F. O. Mohammad, Toyoaki Nishida: Using physiological signals to detect natural interactive behavior. Appl. Intell. 33(1): 79-92 (2010)
 Yasser F. O. Mohammad, Toyoaki Nishida: Constrained Motif Discovery in Time Series. New Generation Comput. 27(4): 319-346 (2009)