Sensor limitations, natural disasters, and adversarial users pose unique challenges during the design of networked robotic systems interacting with the environment. Traditional models of uncertainty for these cyber-physical systems utilize stochasticity, game theory, or worst-case analysis, but the integration of such models into the design and validation process of cyber-physical systems has languished due to computational and theoretical limitations. This workshop will cover current work on integrating sensors and humans with cyber-physical systems and its applications to infrastructure and automation systems.
This full day workshop will take place as part of the 2014 Robotics: Science & Systems Conference at UC Berkeley, on July 13, 2014. For more information about the venue, registration, accommodations, and transportation, please see the conference website at http://www.roboticsconference.org.
- Raj Rajkumar (CMU) - Humans and Self-Driving Vehicles
This talk will discuss the role that humans play in "self-driving" vehicles. Vehicles are not going to start driving themselves as an overnight change. Vehicles will become increasingly autonomous over time, capable of driving themselves in specific scenarios. A responsible human can be expected to be in the driver's seat for the foreseeable future. What should the human do and not do? Conversely, what can the vehicle do and not do? The speaker will voice some of his opinions regarding these questions.
- George Pappas and Nikolay Atanasov (UPenn) - Distributed Information Acquisition with Mobile Sensors
The remarkable advances in sensing and mobility for robots allow us to address some important information acquisition problems such as active object recognition, source seeking, active localization and mapping, and environmental monitoring. In the first part of the talk, we will zoom into the problem of source seeking, in which a team of mobile sensors is tasked to localize the source of a noisy signal of interest such as chemical concentration, magnetic force, heat, or wireless radio signal. A model-free and a model-based scenario will be considered. In the former, the robots receive measurements without knowledge of the signal formation process, while in the latter, the robots have an accurate signal model which can be exploited to localize the source potentially faster and with higher accuracy. Fully-distributed algorithms based on stochastic gradient descent will be discussed.
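As a rough illustration of the model-free setting, the sketch below simulates a single robot climbing a made-up quadratic signal field using finite-difference stochastic gradient ascent. The signal model, step sizes, and single-robot setup are illustrative assumptions only; the fully distributed, multi-robot algorithms discussed in the talk are considerably more sophisticated.

```python
import math
import random

SOURCE = (5.0, 5.0)  # unknown to the robot; used only by the simulator

def measure(x, y, noise=0.05):
    # Noisy scalar field that peaks at the source (e.g. chemical concentration).
    d2 = (x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2
    return 10.0 - 0.1 * d2 + random.gauss(0.0, noise)

def seek(start, steps=300, delta=0.5, step_size=0.5):
    """Model-free source seeking: the robot never sees the signal model,
    only noisy measurements, and climbs a finite-difference gradient estimate."""
    x, y = start
    for _ in range(steps):
        gx = (measure(x + delta, y) - measure(x - delta, y)) / (2 * delta)
        gy = (measure(x, y + delta) - measure(x, y - delta)) / (2 * delta)
        x += step_size * gx
        y += step_size * gy
    return x, y
```

In the model-based scenario, the finite-difference estimate would be replaced by a gradient computed from the known signal model, typically yielding faster and more accurate localization.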
In the second part of the talk, we will formulate a precise problem statement, which captures the common characteristics of the aforementioned information acquisition scenarios. Using the intuition from the source seeking problem, the aim will be to devise an efficient distributed strategy for active information acquisition with a team of mobile sensors. Notably, under linear Gaussian assumptions on the sensing models, we will prove that the celebrated separation principle holds for the information acquisition problem. The problem reduces to deterministic optimal control and can be solved off-line. A nonmyopic algorithm with performance guarantees will be designed in a centralized setting. Coupled with linearization and model predictive control, the algorithm can be used to generate adaptive policies for mobile sensors with non-linear sensing models. To distribute the algorithm, we will rely on the submodularity of the objective function which quantifies the acquired information.
- Radha Poovendran, Linda Bushnell, and Andrew Clark (University of Washington) - Leader Selection for Multi-Agent Systems
Multi-agent systems consist of distributed, networked nodes that coordinate to perform shared tasks such as navigation, formation control, and target tracking. A widely studied and implemented approach to controlling multi-agent systems is to designate a subset of nodes as leaders, which act as external inputs and steer the remaining (follower) nodes via local interactions. The choice of leader nodes is known to affect performance and control properties of the overall system, including robustness to noise, rate of convergence to a desired state, and controllability. The number of possible leader sets, however, is exponential in the network size, and hence additional problem structure must be identified and exploited in order to select a leader set for a large network.
In this talk, we will present a submodular optimization framework for leader selection. Submodularity is a diminishing returns property of set functions, analogous to concavity of continuous functions. We will first discuss selecting a leader set in order to minimize the error due to link noise and prove the submodular structure of the error due to link noise by establishing connections to random walks. We will then introduce submodular optimization techniques for leader selection to ensure joint performance and controllability. We will derive computationally efficient algorithms with provable optimality guarantees for solving both problems. Applications to control of biological networks will be discussed.
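To give a flavor of how submodularity is exploited, the sketch below runs the standard greedy algorithm for monotone submodular maximization, which achieves a (1 - 1/e) approximation guarantee. The coverage objective and the small example network are stand-in assumptions for illustration; the actual leader-selection objectives in the talk (error due to link noise, controllability metrics) are different set functions.

```python
def greedy_select(candidates, k, gain):
    """Greedily pick k elements by marginal gain; for monotone
    submodular objectives this is a (1 - 1/e) approximation."""
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen | {c}) - gain(chosen))
        chosen.add(best)
    return chosen

# Illustrative submodular objective: coverage of node neighborhoods.
neighbors = {
    0: {0, 1, 2}, 1: {1, 3}, 2: {2, 4, 5},
    3: {3, 5}, 4: {4, 0}, 5: {5, 1, 3},
}

def coverage(S):
    return len(set().union(*(neighbors[v] for v in S))) if S else 0

leaders = greedy_select(list(neighbors), 2, coverage)
```

The greedy loop sidesteps the exponential number of candidate leader sets mentioned above: it evaluates only O(kn) marginal gains rather than enumerating all subsets.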
- Hadas Kress-Gazit (Cornell) - Synthesis and Analysis of High-Level Controllers for Robots with Imperfect Sensing and Actuation
In the past several years, there has been a lot of work on synthesizing provably correct robot controllers from high-level specifications; if the robot sensing and actuation is perfect, then the desired robot behavior is guaranteed. This talk will discuss how one can address the problem of noisy sensors and imperfect actuators by considering probabilistic guarantees. Specifically, the talk will discuss a framework for synthesizing and analyzing controllers and automatically suggesting changes to the specification that will make the robot more likely to succeed.
- Rahul Jain (USC) - Decentralized Learning for Multi-Player Systems
Multi-armed bandits are an elegant model of learning in an unknown and uncertain environment. Such models are relevant in many scenarios and have recently received increased attention due to various problems of distributed control that have arisen in wireless networks, pricing models on the internet, etc. We consider the non-Bayesian multi-armed bandit setting proposed by Lai & Robbins in the mid-1980s. There are multiple arms, each of which generates an i.i.d. reward from an unknown distribution. There are multiple players, each of whom chooses which arm to play. If two or more players choose the same arm, they all get zero reward. The problem is to design a learning algorithm to be used by the players that results in an orthogonal matching of players to arms (e.g., users to channels in wireless networks) and, moreover, minimizes the expected regret.
We first consider this as an online bipartite matching problem. We model this combinatorial problem as a classical multi-armed bandit problem but with dependent arms, and propose an index-based learning algorithm that achieves logarithmic regret. From prior results, it is known that this is order-optimal. We then consider the distributed problem, where players do not communicate or coordinate in any way. We propose an index-based algorithm that uses Bertsekas' auction mechanism to determine the bipartite matching. We show that the algorithm has expected regret at most near-log-squared. This is the first distributed multi-armed bandit learning algorithm in such a general setting.
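The index-based idea underlying these algorithms can be illustrated with the classical single-player UCB1 policy, which plays the arm maximizing an "empirical mean plus exploration bonus" index and achieves logarithmic regret. This is only the basic building block, not the matching or auction-based algorithms from the talk; the Bernoulli arms and horizon below are illustrative assumptions.

```python
import math
import random

def ucb1(means, horizon, rng):
    """UCB1 index policy on Bernoulli arms: play the arm maximizing
    empirical mean + sqrt(2 ln t / n_i). Regret grows as O(log T)."""
    k = len(means)
    n = [0] * k        # pull counts
    s = [0.0] * k      # cumulative rewards
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1  # initialization: play each arm once
        else:
            a = max(range(k),
                    key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = 1.0 if rng.random() < means[a] else 0.0
        n[a] += 1
        s[a] += r
        total += r
    return total, n
```

In the multi-player setting of the talk, each player maintains such indices but must also resolve collisions, which is where the auction mechanism enters.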
- Marco Pavone (Stanford) and Pratik Chaudhari (MIT) - On the Societal and Engineering Impact of Autonomous Cars
This talk gives an overview of our recent results on design and control of robotic, self-driving vehicles in an urban mobility-on-demand scenario.
The first part of the talk focuses on the system-wide analysis, design, and control of autonomous mobility-on-demand systems, where robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the entire network. Specifically, we discuss analytical models capturing the dynamic and stochastic features of customer demand, we present system-wide coordination algorithms aimed at throughput maximization, and we apply our results on taxi data from New York City and Singapore. Collectively, our results shed light on fleet sizing and financial benefits for large-scale urban mobility-on-demand systems.
The second part of the talk focuses on the problem of controlling the individual self-driving vehicles, i.e., the problem of generating trajectories with formal guarantees about safety and optimality. Specifically, we present an algorithmic approach to synthesize control strategies for self-driving vehicles interacting with external agents in urban environments, e.g., other autonomous or human-driven cars. This approach leverages ideas from linear temporal logic, sampling-based motion planning, model checking, and differential games, and makes it possible to synthesize control algorithms with provable guarantees of completeness, optimality, and convergence to well-defined game-theoretic notions of equilibria.
- Edgar Lobaton (NCSU) - Robust Mapping of Unknown Environments using Stochastic Agents
Mapping of an unknown environment is an essential task in a variety of applications including search and rescue for emergency response, surveillance for security applications, and environmental monitoring for health purposes. These tasks become extremely challenging when localization information is not available (e.g., agents are indoors, underground, or do not have the necessary hardware or power requirements to implement traditional localization schemes). In this work, we explore how stochastic motion models and weak encounter information can be exploited to learn topological information about an unknown environment. As a case study, we consider systems of agents that follow the probabilistic motion model of insects (in particular, cyborg-insect networks are studied). We employ tools from computational topology to extract spatial information of the environment based on neighbor-to-neighbor interactions among the agents with no need for localization data. This information is used to build a map of persistent topological features of the space.
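A minimal sketch of the "weak encounter information" idea: from pairwise encounter events alone, with no positions, one can already recover a coarse topological summary of the environment, namely how many disconnected regions the agents occupy. The union-find summary below is an illustrative simplification; the talk's persistent-homology machinery extracts much richer features (e.g., holes) from the same kind of data.

```python
def encounter_components(agents, encounters):
    """Count connected components of the encounter graph using union-find.
    Agents confined to the same region of the environment eventually
    collapse into a single component as encounters accumulate."""
    parent = {a: a for a in agents}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for a, b in encounters:
        parent[find(a)] = find(b)

    return len({find(a) for a in agents})
```

For example, six agents with encounters (0,1), (1,2), and (3,4) yield three components, suggesting at least three mutually unreachable regions given enough observation time.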
- 07:50-08:00AM — Introduction by Anil Aswani and Ram Vasudevan
- 08:00-09:00AM — Talk by Nikolay Atanasov
- 09:00-10:00AM — Talk by Hadas Kress-Gazit
- 10:00-10:30AM — Coffee Break
- 10:30-11:30AM — Talk by Marco Pavone and Pratik Chaudhari
- 11:30AM-01:30PM — Lunch Break
- 01:30-02:30PM — Talk by Rahul Jain
- 02:30-03:30PM — Talk by Edgar Lobaton
- 03:30-04:30PM — Talk by Raj Rajkumar
- 04:30-05:00PM — Coffee Break
- 05:00-06:00PM — Talk by Andrew Clark