Learning to stabilize: comparative analysis of reinforcement learning and traditional methods for swirling pendulum control


dc.contributor.author Dalal, Dwip
dc.contributor.author Riswadkar, Shubhankar
dc.contributor.author Palanthandalam-Madapusi, Harish J.
dc.contributor.other 9th Indian Control Conference (ICC 2023)
dc.coverage.spatial India
dc.date.accessioned 2024-03-20T14:30:48Z
dc.date.available 2024-03-20T14:30:48Z
dc.date.issued 2023-12-18
dc.identifier.citation Dalal, Dwip; Riswadkar, Shubhankar and Palanthandalam-Madapusi, Harish J., "Learning to stabilize: comparative analysis of reinforcement learning and traditional methods for swirling pendulum control", in the 9th Indian Control Conference (ICC 2023), Visakhapatnam, IN, Dec. 18-20, 2023.
dc.identifier.uri https://ieeexplore.ieee.org/document/10442669
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/9879
dc.description.abstract In this paper, we develop a reinforcement learning (RL) based controller capable of stabilizing the swirling pendulum, a novel under-actuated two-degree-of-freedom system with a number of underlying peculiarities, such as non-planar inertial coupling, loss of relative degree, and multiple stable and unstable equilibria. These properties make the control and stabilization of the swirling pendulum challenging, especially at the unstable equilibrium points in the upper hemisphere. To the best of our knowledge, stabilization of the swirling pendulum at the upper unstable equilibrium points has not yet been achieved using linear, modern, or nonlinear control methods. We present a novel controller, based on the Actor-Critic algorithm, that stabilizes the swirling pendulum at each of these unstable equilibrium points. We also present a comparative analysis between RL and traditional control methods for stabilizing the swirling pendulum at the unstable equilibrium point in the lower hemisphere. Our results demonstrate that the RL-based approach outperforms conventional controllers, such as PID and lead compensator, in stabilizing the swirling pendulum at these equilibrium points.
dc.description.statementofresponsibility by Dwip Dalal, Shubhankar Riswadkar and Harish J. Palanthandalam-Madapusi
dc.language.iso en_US
dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
dc.subject Couplings
dc.subject 2-DOF
dc.subject Reinforcement learning
dc.subject Lead
dc.title Learning to stabilize: comparative analysis of reinforcement learning and traditional methods for swirling pendulum control
dc.type Conference Paper

