Coordinated Path Following of Multiple Quadrotors
A new flight experiment with two AR.Drone quadrotors was performed at the CAVR lab at NPS! In these flight tests the quadrotors are tasked to follow predefined paths (computed off-line), while coordinating their position and attitude according to the scenario requirements. At the lower control level, the developed path-following controller makes each quadrotor converge to and follow its own path, independently of the temporal assignments of the scenario. The algorithm relies on the implementation of a virtual vehicle running along the path, the rate of progression of which can be controlled at will. At the higher control level, the quadrotors exchange coordination information over a supporting communications network to synchronize their position along the path as well as their attitude; the heading of each quadrotor is also a degree of freedom that can be controlled independently. The coordination and path-following algorithms have been implemented in Simulink in real time, and use feedback data from a Vicon motion-tracking system to produce the control commands. The effectiveness of the cooperative control framework has been evaluated by leveraging convenient features and the API of the Parrot AR.Drone quadrotor.
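The virtual-vehicle idea above can be illustrated with a minimal kinematic sketch (this is an illustrative toy, not the controller flown in the experiments): a virtual target advances along a predefined path at a commandable rate, and a simple proportional law drives the vehicle toward it. The circular path, gains, and rates are all assumptions chosen for the example.

```python
import numpy as np

def path(gamma):
    """Predefined path, parameterized by gamma (a unit circle here)."""
    theta = 2.0 * np.pi * gamma
    return np.array([np.cos(theta), np.sin(theta)])

def step(pos, gamma, gamma_rate, dt=0.02, k=2.0):
    """One integration step: advance the virtual target, then chase it."""
    gamma = gamma + gamma_rate * dt           # progression along the path (free input)
    target = path(gamma)
    vel = k * (target - pos)                  # proportional convergence law
    return pos + vel * dt, gamma

# Vehicle starts off the path; the virtual target progresses slowly.
pos, gamma = np.array([2.0, 0.0]), 0.0
for _ in range(2000):
    pos, gamma = step(pos, gamma, gamma_rate=0.05)

print(np.linalg.norm(pos - path(gamma)))      # small residual tracking error
```

Because the progression rate `gamma_rate` is a free input rather than fixed by time, a higher-level coordination loop can speed the virtual target up or slow it down without disturbing path convergence.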
Time-Critical Cooperative Path Following
The objective of this work is to develop, implement, and test robust decentralized strategies for path-following control and time-coordination of a fleet of multiple autonomous vehicles supported by an inter-vehicle dynamic communications network.
*Conceptual architecture of the cooperative control framework adopted.*
The methodology for time-critical cooperative path-following control developed at NPS (in collaboration with UIUC and IST) can be summarized in three basic steps:
1. Initially, each vehicle is assigned a feasible path with a desired speed profile that together satisfy the mission requirements and the vehicle dynamic constraints, while ensuring collision-free maneuvers.
2. Then, a path-following algorithm ensures that every vehicle follows its own path independently of the temporal assignments of the mission.
3. Finally, the vehicles coordinate their position along the path with the remaining vehicles engaged in the mission by exchanging coordination information over the supporting communications network.
These three steps are accomplished by judiciously decoupling space and time in the formulation of the trajectory-generation, path-following, and time-coordination problems. Moreover, the approach adopted applies to teams of heterogeneous vehicles and does not necessarily lead to swarming behavior, which is unsuitable for many of the mission scenarios envisioned in this project.
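The time-coordination step can be sketched as a consensus update on normalized along-path positions. This is an illustrative assumption about the mechanism, not the project's implementation: each vehicle nudges its desired progression rate using only the coordination states received from its network neighbors, so the fleet synchronizes while continuing to progress.

```python
import numpy as np

def coordination_step(gamma, adjacency, v_des=0.1, k=1.0, dt=0.05):
    """One consensus update of along-path positions over the comms graph."""
    n = len(gamma)
    rates = np.full(n, v_des)                 # nominal progression rate
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                rates[i] += k * (gamma[j] - gamma[i])   # consensus correction
    return gamma + rates * dt

# Three vehicles on a fully connected network, starting out of sync.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
gamma = np.array([0.0, 0.2, 0.5])
for _ in range(500):
    gamma = coordination_step(gamma, A)

print(gamma.max() - gamma.min())              # disagreement shrinks toward zero
```

Because the correction terms cancel pairwise over a symmetric network, the fleet's average progression is unchanged: the vehicles agree on *where* along their paths to be without the mission stalling.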
*Two small tactical UAVs equipped with complementary vision sensors try to detect and follow improvised targets along a pre-specified road.*
The efficacy of the developed multi-vehicle cooperative control framework has been demonstrated in a cooperative road-search mission scenario involving multiple unmanned aerial vehicles. The mission is initiated by a minimally trained user who specifies a road of interest on a digital map. The coordinates of this road are then transmitted over the network to a fleet of small tactical UAVs equipped with complementary visual sensors. Decentralized optimization algorithms autonomously generate feasible flight trajectories that maximize road coverage and account for sensor capabilities (field of view, resolution, and gimbal constraints) as well as inter-vehicle and ground-to-air communications limitations. The fleet of UAVs then starts the cooperative road search. During this phase, the information obtained from the sensors mounted onboard the UAVs is shared over the network and retrieved by remote users in near real time. Target detection can thus be done remotely on the ground, based on in-situ imagery data delivered over the network.
*VIDEO: Full-motion video delivered over the network and retrieved by remote users in near real time.*
In this particular mission scenario, our cooperative control strategies improve mission performance and provide reliable target discrimination by effectively combining the capabilities of the onboard sensors. In fact, flying in a coordinated fashion is what makes it possible, for example, to maximize the overlap of the fields of view of multiple sensors and to take full advantage of complementary sensors. Details about this project, including the flight test experiments conducted by NPS, can be found in:
1. Xargay, Dobrokhodov, Kaminer, Pascoal, Hovakimyan, and Cao, *“Time-Critical Cooperative Control for Multiple Autonomous Vehicles,”* to appear in *IEEE Control Systems Magazine*, 2012.
2. Xargay, Kaminer, Pascoal, Hovakimyan, Dobrokhodov, Cichella, Aguiar, and Ghabcheloo, *“Time-Critical Cooperative Path Following of Multiple UAVs over Time-Varying Networks,”* submitted to *Journal of Guidance, Control, and Dynamics*, 2011.
Quadrotor Force Augmentation: Utility without burden
As UAV technologies mature and decision and perception approaches become established, there is a need to focus on advanced autonomy to put unmanned systems to work. A key limitation of current systems is the unfavorable tooth-to-tail ratio: many unmanned systems require multiple operators. Ideally, a single operator should be able to control multiple (heterogeneous) assets. With this in mind, we are working on the notion of force augmentation: providing utility to a force without requiring continuous command and control of the platform. Specifically, we aim to provide utility to a forward tactical force with a quadrotor (e.g., in urban operations). See http://wiki.nps.edu/display/~nedutoit/Quadrotor+Force+Augmentation for more details.
ScanEagle Autonomy Extension
The ScanEagle platform is extensively used in theater, but the utility of the platform is limited due to semi-autonomous operations: a dedicated pilot commands the vehicle remotely, with the option of a few basic autonomous behaviors (such as loitering). For this platform to be utilized in advanced applications, the platform must be able to adapt its behavior as the mission evolves. This adaptation is accomplished through onboard sensing and decision-making. This project focuses on extending the autonomy capability of the ScanEagle UAV platform by developing and implementing a secondary-autopilot architecture (or backseat driver). See http://wiki.nps.edu/display/~nedutoit/ScanEagle+Autonomy+Extension for more details.
Helmsman Assist Graphical Interface
NPS and Virginia Tech (VT) are developing a “helmsman assist” system for riverine forces that will provide enhanced situational awareness to a coxswain operating in constrained waterways. VT is designing a sonar mount to allow installation of the USV sensor package on a Special Operations Craft-Riverine (SOC-R). NPS is developing a graphical interface to display the surface/subsurface maps and path recommendations generated by the USV autonomy package. The helmsman assist system is capable of providing vessel operators with 3D views of the waterway and obstacle maps. In September 2012, NPS and VT integrated this system onto a SOC-R and conducted a nighttime demonstration for Naval Special Warfare personnel on the Pearl River near Stennis Space Center in Mississippi.
ANT Glider With Acoustic Vector Sensor
This project employs autonomous gliders that measure the ocean environment and utilize acoustic vector sensors for studying properties of the acoustic field (vertical/horizontal noise directionality, tracking, etc.) as potential systems for the development of ad-hoc acoustic ranges.
The project develops and demonstrates tools to characterize the effectiveness of multiple autonomous platforms utilizing incoherent and coherent processing for target detection, localization, and tracking. Performance enhancements achievable through optimal sensor placement and adaptive vector-sensor processing will also be determined.
Collaborative Robotic Diver Assistant
Diver operations are inherently dangerous. Physiological effects limit dive duration and frequency and necessitate a large support crew, increasing operational costs. The sensory-deprived underwater environment makes navigation, communication, and documentation challenging. A robotic diver assistant system can provide autonomous support to diver teams, which has the potential to significantly enhance underwater operations. The project is aimed at providing utility to the diver team (e.g., illumination, improved situational awareness, etc.) without burdening the team with vehicle command and control, thereby augmenting the diver team and allowing more effective, efficient, and safer operations. This project seeks to go beyond co-inhabitance of man and machine; our aim is to fundamentally enable the transformative capability of robots as underwater co-workers. See http://wiki.nps.edu/display/~nedutoit/Collaborative+Robotic+Diver+Assistant for more details.
REMUS AUV Docking Station
The Naval Postgraduate School (NPS) has begun a project to deploy a seafloor docking station for its REMUS Autonomous Underwater Vehicles (AUVs) in Monterey Bay. Engineers from the NPS Center for Autonomous Vehicle Research (CAVR) and the Department of Oceanography are developing an interface so that the Monterey Inner Shelf Observatory (MISO), located in 16m deep water 600 meters offshore from NPS, can host a REMUS docking station developed by Woods Hole Oceanographic Institution (WHOI).
LCM 3D Viewer