Everhart Lecture Series: John Lathrop
- Public Event
Enhancing Planning and Decision Making for Robotic Autonomy
Enabling robots to plan complex behaviors in real time, rather than relying on predesigned or offline-learned routines, reduces the need for specialized algorithms and large amounts of task-specific training data. At the same time, learning dynamics models through interaction improves planning accuracy, allowing agents to better enforce constraints and predict future behavior. In this talk, I present recent work at the intersection of reinforcement learning, optimization, and robotic autonomy, demonstrated on ground vehicles, aerial systems, and spacecraft. I introduce a new sampling-based planning algorithm with optimality guarantees for continuous, deterministic, differentiable MDPs, encompassing underactuated nonlinear dynamics and nonconvex rewards. I also present stability guarantees for a coupled dynamics learning and policy optimization framework, validated in the Indy Autonomous Challenge, a high-performance, safety-critical autonomous racing setting.
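To give a flavor of what "sampling-based planning" means in a continuous, deterministic MDP, the following is a minimal generic sketch of random-shooting planning on a toy 1-D double integrator. This is an illustrative example only, not the algorithm presented in the talk; the dynamics, reward, and all names here are invented for demonstration.

```python
import random

# Toy deterministic dynamics: 1-D double integrator (position, velocity).
def step(state, action, dt=0.1):
    pos, vel = state
    return (pos + vel * dt, vel + action * dt)

# Illustrative reward: reach pos = 1.0 while penalizing control effort.
def reward(state, action):
    pos, _ = state
    return -((pos - 1.0) ** 2) - 0.01 * action ** 2

# Accumulate reward along a deterministic rollout of an action sequence.
def rollout(state, actions):
    total = 0.0
    for a in actions:
        total += reward(state, a)
        state = step(state, a)
    return total

# Random shooting: sample many open-loop action sequences, keep the best.
def random_shooting_plan(state, horizon=20, samples=500, seed=0):
    rng = random.Random(seed)
    best_actions, best_value = None, float("-inf")
    for _ in range(samples):
        actions = [rng.uniform(-2.0, 2.0) for _ in range(horizon)]
        value = rollout(state, actions)
        if value > best_value:
            best_actions, best_value = actions, value
    return best_actions, best_value

plan, value = random_shooting_plan((0.0, 0.0))
```

Plain random shooting carries no optimality guarantees; algorithms of the kind described in the talk refine such sampling schemes, exploiting determinism and differentiability of the MDP to certify solution quality.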
