RT Dissertation/Thesis T1 Multi-camera intelligent space for a robust, fast and easy deployment of proactive robots in complex and dynamic environments A1 Canedo Rodríguez, Adrián A2 Universidade de Santiago de Compostela. Facultade de Física. E.T.S. de Enxeñaría. Departamento de Electrónica e Computación. Centro Singular de Investigación en Tecnoloxías da Información (CITIUS). K1 Networked robotic systems K1 Ubiquitous computing K1 Intelligent spaces K1 Robot architecture AB One of the current challenges in robotics is the integration of robots in everyday environments. However, it is difficult to achieve this with stand-alone robots that use only the information provided by their own (on-board) sensors. In this thesis, we will explore the use of intelligent spaces (i.e. spaces where many sensors and intelligent devices are distributed and provide information to the robot) to get robots operating in complex environments in a short period of time. Our proposal is to build an intelligent space that allows an easy, fast, and robust deployment of robots in different environments. This solution must allow robots to move and operate efficiently in unknown environments, and it must scale with the number of robots and other elements. Our intelligent space will consist of a distributed network of intelligent cameras and autonomous robots. The cameras will detect situations that might require the presence of the robots, inform them about these situations, and also support their movement in the environment. The robots, on the other hand, will navigate safely within this space towards the areas where these situations happen. With this proposal, our robots are able to react not only to events that occur in their surroundings, but to events that occur anywhere. As a consequence, the robots can react to the needs of the users regardless of where the users are, making the robots appear more intelligent, more useful, and more proactive.
In addition, the network of cameras will support the robots in their tasks and enrich their environment models. This will result in a faster, easier, and more robust robot deployment and operation. In this thesis, we will explore two alternatives regarding how intelligence is distributed among the agents: collective intelligence and centralised intelligence. Under the collective intelligence paradigm, intelligence is evenly distributed among robots and cameras. Global intelligence arises from the interaction among individual agents, and there is no central agent that handles most of the decision making. This is similar to the self-organisation processes often observed in nature, where there is neither hierarchy nor centralisation. In this case, we assume that it is possible to get robots operating in a priori unknown environments when their behaviour emerges from the interaction amongst an ensemble of independent agents (cameras) that any user can place at different locations in the environment. These agents, initially identical, will be able to observe human and robot behaviour, learn in parallel, adapt, and specialise in the control of the robots. To this end, our cameras will be able to detect and track robots and humans robustly, to discover their neighbouring cameras, and to guide robot navigation along routes formed by these cameras. Meanwhile, the robots must only follow the instructions of the cameras and negotiate obstacles in order to avoid collisions. On the other hand, under the centralised intelligence paradigm, one type of agent is assigned much more intelligence than the rest. This agent therefore handles most of the decision making and coordination, and its performance has a greater impact than that of the other agents. To explore this paradigm, in this thesis the role of central agent is played by the robot, and most of this intelligence is devoted to the tasks of self-localisation and navigation.
In this regard, we have performed an experimental study of the strengths and weaknesses of different information sources for the task of robot localisation. The study has shown that no single source performs well in every situation, but the combination of complementary sensors may lead to more robust localisation algorithms. Therefore, we have developed a robot localisation algorithm that combines the information from multiple sensors. This algorithm is able to provide robust and precise localisation estimates even in situations where single-sensor localisation techniques usually fail. It can fuse the information of an arbitrary number of sensors, even if they are not synchronised, work at different data rates, or if some of them stop working. We have tested our algorithm with the following sensors: a 2D laser range finder, a magnetic compass, a WiFi reception card, a radio reception card (433 MHz band), the network of external cameras, and a camera mounted on the robot. We have also designed wireless transmitters (motes), and we have studied the performance of our positioning algorithm when the transmitters are able to vary their transmission power. Through an experimental study, we have demonstrated that this ability tends to improve the performance of a wireless positioning system. This opens the door to future improvements along the line of active localisation: under this paradigm, the robot would be able to modify the transmission power of the transmitters in order to discard localisation hypotheses proactively. Our proposal is a generic solution that can be applied to many different service robot applications. As a specific example of application, we have integrated our intelligent space with a general-purpose guide robot that we developed in the past. This robot is aimed at operating in different social environments, such as museums, conferences, or robotics demonstrations in research centres.
Our robot is able to detect and track people around it, follow an instructor around the environment, learn routes of interest from the instructor, and reproduce them for the visitors of the event. Moreover, the robot is able to interact with humans using gesture recognition techniques and an augmented reality interface. YR 2015 FD 2015-10-06 LK http://hdl.handle.net/10347/13632 UL http://hdl.handle.net/10347/13632 LA eng DS Minerva RD 3 May 2026