Sensor Networks

CAMERA NETWORKS
Camera networks are one of the main research activities within the SPARCS research group. They constitute a fertile, multidisciplinary research territory that offers the opportunity to study and fuse strategies and methodological tools coming from different scientific and engineering disciplines such as Computer Vision, Control Theory, Computer Science, Robotics, Data Mining, Machine Learning, and Information and Communication Theory.
The pervasive real-world applicability of Visual Sensor Networks, combined with their flexibility, makes them a stimulating research field whose main topics are briefly described in the following.

Automatic Calibration: automatic calibration is the capability of a camera to exploit images in order to estimate its own intrinsic and extrinsic parameters. It is of fundamental importance because manual calibration is tedious and error prone, especially for large networks. Moreover, there are scenarios in which calibration needs to be performed continuously, due to the dynamics of the network. The upcoming scenario of smart cities, in which mobile and heterogeneous devices interact and cooperate, has recently stimulated researchers to find algorithms for efficient real-time automatic calibration of visual sensor networks involving non-static nodes with different characteristics.
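As a concrete (if simplified) reference point, the snippet below sketches single-camera intrinsic calibration from checkerboard images using OpenCV; the image folder and board geometry are placeholder assumptions, and the real-time, distributed, multi-camera algorithms studied by the group go well beyond this basic building block.

```python
# Minimal intrinsic calibration sketch (OpenCV, checkerboard pattern).
# Paths and board geometry are illustrative placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row/column (assumed)
SQUARE = 0.025          # square side in metres (assumed)

# 3D corner coordinates of the board in its own reference frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):   # hypothetical folder
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsics (camera matrix K, distortion) and per-view extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("Reprojection error:", ret)
print("Camera matrix:\n", K)
```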

Cooperative Task Assignment, Reconfiguration and Decision Making: one of the most fascinating aspects of smart networks is the possibility to design systems in which neighboring nodes communicate in order to take decisions that improve the overall system performance. Effective strategies to cope with this kind of task are drawn from different areas of study such as Machine Learning (model prediction, Deep Learning, Reinforcement Learning), Game Theory, Graph Theory, Distributed Optimization and Network Science.
Automatic decision making makes it possible to tackle two important tasks: cooperative task assignment and self-reconfiguration. The former consists in designing coordination strategies to fulfill a set of global requirements within specified constraints. This is usually achieved by associating a sub-problem to each node or by organizing collaborative actions among the agents; both approaches reduce the burden placed on the individual nodes. Self-reconfiguration, instead, derives from the self-organizing capability that makes cameras “smart”: these devices autonomously monitor their state, communicate with neighbors and learn from experience in order to adapt their PTZ parameters to environmental changes and new task requirements, or to reduce the overall resource consumption. This is a very challenging area, since it is not always clear how to define optimality criteria that account for all the specifications, especially when the reconfiguration has to fulfill multiple, sometimes conflicting, requirements and constraints.
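As a toy illustration of the task-assignment side of the problem (a centralized sketch only, not the group's distributed strategies), the snippet below assigns cameras to tasks by minimizing a total cost with the Hungarian algorithm; the cost matrix is purely hypothetical.

```python
# Toy camera-to-task assignment: minimize the total cost (e.g., pan/tilt effort).
# The cost matrix below is purely illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost for camera i to take care of task j (assumed values)
cost = np.array([
    [4.0, 1.5, 9.0],
    [2.0, 6.0, 3.5],
    [7.5, 2.5, 1.0],
])

cam_idx, task_idx = linear_sum_assignment(cost)
for cam, task in zip(cam_idx, task_idx):
    print(f"camera {cam} -> task {task} (cost {cost[cam, task]})")
print("total cost:", cost[cam_idx, task_idx].sum())
```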

Perimeter Patrolling: the first surveillance systems were deployed to accomplish security tasks related to perimeter patrolling and area monitoring. In many surveillance applications it is sufficient to deploy a certain number of fixed cameras; nevertheless, when wide areas need to be supervised, PTZ cameras are more suitable, since they span a larger region by changing their parameters. Earlier systems based on this technology were manually and remotely controlled by human operators or, alternatively, were programmed to continuously carry out a default task. Both strategies have evident drawbacks: a human operator can easily understand the events under surveillance, but the event detection capability decreases with crowded and chaotic images, and the attention and efficiency of a human operator are not constant in time. At the same time, a default monitoring strategy is not always efficient, because it cannot adapt to the specific situation. The benefits introduced by smart cameras thus become clear: they are able to adapt to changing situations and new tasks, and their efficiency far exceeds that of systems based on human decisions. For these reasons, the SPARCS group is focusing on algorithms for optimal sensor placement of mobile cameras, efficient intruder detection and resource-aware patrolling strategies.
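As a simple illustration of a resource-aware patrolling idea (a sketch under assumed numbers, not the group's actual algorithms), the snippet below partitions a one-dimensional perimeter among cameras proportionally to their sweep speeds, so that each camera patrols its own segment with the same worst-case revisit time.

```python
# Sketch: partition a 1D perimeter among cameras proportionally to speed,
# so that every camera sweeps its segment with the same worst-case revisit time.
# Perimeter length and camera speeds are illustrative assumptions.

PERIMETER = 120.0                 # metres (assumed)
speeds = [1.0, 2.5, 1.5]          # sweep speed of each camera (m/s, assumed)

total_speed = sum(speeds)
segments, start = [], 0.0
for v in speeds:
    length = PERIMETER * v / total_speed
    segments.append((start, start + length))
    start += length

for i, (a, b) in enumerate(segments):
    # Back-and-forth sweep: revisit time = 2 * segment length / speed,
    # identical for all cameras by construction.
    print(f"camera {i}: [{a:.1f}, {b:.1f}] m, revisit time "
          f"{2 * (b - a) / speeds[i]:.1f} s")
```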

Target Tracking: target tracking is one of the principal tasks introduced with smart cameras, since it usually requires strong computational capabilities and very good sensing properties. In practical applications, target tracking is often performed within a network of collaborative and spatially distributed cameras. The interaction among the nodes increases the effectiveness of the task, since information fusion leads to more precise target localization; in addition, by estimating the time of arrival of the target in the field of view of a neighboring camera, it is possible to guarantee a certain level of tracking continuity. One of the main challenges in multi-camera tracking is that real-world applications require covering wide areas without deploying too many cameras (due to cost constraints). For this reason, SPARCS group researchers are studying solutions for multi-camera tracking with disjoint fields of view. The most stimulating aspect is that, when the target exits the field of view of one camera, no more visual information can be gathered about it; it is therefore necessary to rely on statistical estimation methods in order to infer the target's most probable directions.
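A standard statistical tool for this kind of inference is a Kalman filter with a constant-velocity motion model: while the target is visible the filter is corrected with measurements, and once the target leaves the field of view the filter keeps predicting, with growing uncertainty, where it should reappear. The snippet below is a minimal, self-contained sketch of this idea; the noise levels and the simulated trajectory are assumptions.

```python
# Constant-velocity Kalman filter: update while the target is in view,
# then predict-only once it leaves the field of view.
# Noise levels and the simulated trajectory are illustrative assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],       # only the position is measured
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)              # process noise (assumed)
R = 0.25 * np.eye(2)              # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])  # initial state estimate
P = np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Target observed for 10 steps, then it leaves the field of view.
for k in range(10):
    x, P = predict(x, P)
    z = np.array([k * 1.0, k * 0.5]) + np.random.randn(2) * 0.5
    x, P = update(x, P, z)

for k in range(5):                 # predict-only phase (target not visible)
    x, P = predict(x, P)
    print(f"step {10 + k}: predicted position {x[:2].round(2)}, "
          f"position std {np.sqrt(np.diag(P)[:2]).round(2)}")
```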

Traffic Monitoring: modern cities suffer from heavy traffic congestion that deteriorates both the quality of citizens' life and the quality of the air. SPARCS group researchers are convinced that ICT methods can be used to design efficient solutions that help public administrations manage traffic flows. For this reason, the SPARCS group is studying methods and algorithms inspired by the Internet of Everything and Big Data analysis in order to efficiently exploit the large amount of urban traffic data collected by camera networks. The main purpose is to design a system that prevents traffic congestion by regulating the traffic lights according to the current traffic status; moreover, the system should speed up rescue operations by detecting accidents and other anomalies that may compromise the flow of traffic.
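As a toy example of the kind of feedback loop envisaged (a sketch under assumed numbers, not the actual system), green times at an intersection could be allocated proportionally to the queue lengths estimated from the camera data.

```python
# Toy green-time allocation: split a fixed signal cycle among approaches
# proportionally to camera-estimated queue lengths (illustrative values).

CYCLE = 90.0          # total cycle length in seconds (assumed)
MIN_GREEN = 10.0      # minimum green per approach (assumed)

queues = {"north": 12, "south": 4, "east": 20, "west": 8}  # vehicles (assumed)

free = CYCLE - MIN_GREEN * len(queues)
total = sum(queues.values())
green = {a: MIN_GREEN + free * q / total for a, q in queues.items()}

for approach, g in green.items():
    print(f"{approach}: {g:.1f} s of green")
```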

WIRELESS SENSOR NETWORKS (WSNs)
Wireless sensors are battery-powered devices that do not require any wiring infrastructure to work. Thanks to their high scalability and to their fast and economical deployment, wireless sensors have become a propulsive technology in various applications, including home and industrial automation, environmental monitoring, precision agriculture, security and surveillance, smart buildings, and healthcare.
Research on Wireless Sensor Networks (WSNs) represents a continuously evolving technological domain, fostered by the emerging Internet of Things (IoT) paradigm and by recent advances in machine learning, distributed optimization, and automatic control.

The SPARCS research group is currently active in the following WSNs-related areas.
– Large Scale Networks: thanks to their cost-effective and easily deployable nature, wireless sensors are often employed in large scale networks; these require the design of efficient information fusion and decision making strategies to manage the sensors’ energy consumption, as well as the communication and processing workload over the network.

– Multi-modal WSNs: heterogeneous perception capabilities (i.e., multi-modality) are an increasingly prevalent property of cyber-physical systems. Multiple sensing modalities provide inherent robustness and complementarity (i.e., different properties of the environment can be perceived), while the aggregated multi-modal data allow inferences that are not possible with uni-modal measurements. Nevertheless, heterogeneous data sources suffer from different types of noise, they might produce conflicting features, and they require different calibration procedures, each of which might involve extensive human intervention.

– Mobile WSNs: the evolution of embedded systems has enabled the introduction of mobile networks, composed of smart sensors (i.e., devices with onboard communication, processing and sensing capabilities). Swarms of sensors have the ability to simultaneously gather information from disjoint locations, enlarging the spatial and temporal coverage of the network. Moreover, multiple sensing platforms open up new perspectives in scene perception, by enabling parallelisation and specialisation.

– Probabilistic Active Sensing (i.e., autonomous perception): active sensing (AS) consists in the control of the perception process, either in mobile robotic systems or in static data acquisition processes. AS enables autonomous perception by automating the perception process and maximizing its efficiency. Probabilistic AS (PAS) exploits the incoming data to build a belief map that encodes the knowledge gathered during the sensing mission and guides the platform's decision process. Probabilistic approaches account for realistic perception uncertainties; hence, they are suitable for real-world (noisy) scenarios, unmodeled dynamics, and sensing nuisances. For these reasons, PAS promises to be a flourishing research area with direct applications in environmental mapping and exploration, autonomous source term estimation, search and rescue, and collaborative mobile robotics.
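To make the belief-map idea concrete, the snippet below is a minimal sketch, under an assumed distance-dependent binary detection model, of a Bayesian grid belief over a source location that is updated from noisy detections while the sensing platform greedily moves towards the current belief maximum; it illustrates the probabilistic mechanism only, not the group's PAS algorithms.

```python
# Sketch of probabilistic active sensing on a grid:
# - belief[i, j] = probability that the source lies in cell (i, j)
# - a noisy binary detector updates the belief via Bayes' rule
# - the platform greedily moves towards the current belief maximum.
# Sensor model, grid size and true source position are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 20
source = np.array([14, 5])               # hidden true source cell (assumed)
belief = np.full((N, N), 1.0 / N**2)     # uniform prior over the source location

ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

def p_detect_given_source(sensor):
    """P(detection | source in cell (i, j)) for a sensor at `sensor`.
    Detection probability decays with distance; parameters are assumed."""
    d2 = (ii - sensor[0]) ** 2 + (jj - sensor[1]) ** 2
    return 0.9 * np.exp(-d2 / 20.0) + 0.05   # 5% false-alarm floor

sensor = np.array([0, 0])
for step in range(40):
    likelihood = p_detect_given_source(sensor)
    # Simulate the noisy measurement using the true (hidden) source position.
    z = rng.random() < likelihood[source[0], source[1]]
    # Bayes update of the belief map.
    belief *= likelihood if z else (1.0 - likelihood)
    belief /= belief.sum()
    # Greedy active-sensing move: one step towards the belief maximum.
    target = np.unravel_index(belief.argmax(), belief.shape)
    sensor = sensor + np.sign(np.array(target) - sensor)

print("belief peak at", np.unravel_index(belief.argmax(), belief.shape),
      "true source at", tuple(source))
```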