Abstract:
Autonomous vehicles, e.g., cars, aircraft, or ships, will need to accept some degree of human control for years to come. Consequently, a method of controlling autonomous systems (ASs) that integrates control inputs from humans and machines is critical. We describe a framework for blended autonomy, in which humans and ASs interact with varying degrees of control to achieve a task safely. We empirically compare collaborative control tasks in which the human and the AS have identical or conflicting objectives, under three main control frameworks: (1) leader-follower control (based on Stackelberg games); (2) blended control; and (3) switching control. We validate our results on a car steering control model in the presence of communication delays, noise, and varying levels of collaboration.
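
As a rough illustration of the blending idea only (this formulation is not taken from the abstract; the symbols $u_H$, $u_A$, and $\alpha$ are introduced here purely for exposition), blended control can be sketched as a convex combination of the human and autonomous inputs,

$$u(t) = \alpha\, u_H(t) + (1-\alpha)\, u_A(t), \qquad \alpha \in [0,1],$$

where $\alpha = 1$ corresponds to pure human control, $\alpha = 0$ to pure autonomy, and intermediate or time-varying $\alpha$ to different collaboration levels; under this reading, switching control restricts $\alpha(t)$ to $\{0,1\}$.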