Internal representation of the environment in cognitive robotics
Novel theories have recently been proposed to explain higher cognitive functions as internal representations of action and perception in an embodied autonomous agent situated in an environment. Using neural evolutionary robotics, a new collaborative control architecture allows a behaviour-based system to be constructed through interactions between the control system and both the external and internal environments. The full separation achieved between the inner world of the autonomous agent and the real external world gives some insight into how a comprehensive understanding of robot sensing and learning can be obtained. Two experiments, the first on the generation of walking gaits for the Aibo robot and the second on a two-sensor, two-motor simulated robot orbiting around an object, illustrate the performance of the proposed paradigm and lead to a discussion of the concepts in the robot's inner world that emerge from interaction with the environment while completing a task.
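The neural evolutionary robotics approach mentioned above can be sketched as a minimal neuroevolution loop: a small sensor-to-motor controller whose weights are evolved against a task-derived fitness. Everything below is an illustrative assumption for a two-sensor, two-motor agent that must hold a fixed orbiting radius around an object; it is not the paper's actual controller, simulator, or fitness function.

```python
# Illustrative neuroevolution sketch (NOT the paper's implementation):
# a linear tanh controller mapping 2 sensors to 2 motors, evolved with a
# simple elitist genetic algorithm against a "stay at orbit radius" fitness.
import math
import random

def controller(weights, sensors):
    # Linear 2-sensor -> 2-motor mapping with bias, squashed by tanh.
    s1, s2 = sensors
    w = weights
    left = math.tanh(w[0] * s1 + w[1] * s2 + w[2])
    right = math.tanh(w[3] * s1 + w[4] * s2 + w[5])
    return left, right

def simulate(weights, steps=200, target=1.0):
    # Minimal 2-D world: an object sits at the origin; the agent senses its
    # distance error and bearing to the object, and drives two motors.
    x, y, heading = 2.0, 0.0, 0.0
    error = 0.0
    for _ in range(steps):
        d = math.hypot(x, y)
        bearing = math.atan2(-y, -x) - heading
        sensors = (d - target, math.sin(bearing))
        left, right = controller(weights, sensors)
        heading += 0.2 * (right - left)      # differential steering
        speed = 0.05 * (left + right)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        error += abs(d - target)
    return -error  # higher fitness = distance stays nearer the orbit radius

def evolve(pop_size=30, generations=40, seed=0):
    # Elitist GA: keep the top fifth, refill with Gaussian-mutated copies.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate, reverse=True)
        elite = scored[: pop_size // 5]
        pop = elite + [
            [w + rng.gauss(0, 0.1) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=simulate)
```

Here the "internal environment" of the abstract would correspond to state the controller maintains about the task (omitted in this stateless sketch); the point of the sketch is only the behaviour-based loop in which fitness is defined by interaction with the simulated external world.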