Abstract:
In this paper, we develop a reinforcement learning (RL) based controller capable of stabilizing the swirling pendulum, a novel under-actuated two-degree-of-freedom system with several underlying peculiarities, including non-planar inertial coupling, loss of relative degree, and multiple stable and unstable equilibria. These properties make the control and stabilization of the swirling pendulum challenging, especially at the unstable equilibrium points in the upper hemisphere. To the best of our knowledge, stabilization of the swirling pendulum at these upper unstable equilibrium points has not previously been achieved using linear, modern, or nonlinear control methods. We present a novel controller, based on the Actor-Critic algorithm, that stabilizes the swirling pendulum at each of these unstable equilibrium points. We also present a comparative analysis between RL and traditional control methods for stabilizing the swirling pendulum at the unstable equilibrium point in the lower hemisphere. Our results demonstrate that the RL-based approach outperforms conventional controllers such as PID and lead compensators in stabilizing the swirling pendulum at this equilibrium point.