Abstract
Expert and intelligent systems are being developed to control many technological systems, including mobile robots. However, the PID (Proportional-Integral-Derivative) controller remains a fast, low-level control strategy widely used in control engineering tasks. Classical control theory has contributed various tuning methods for obtaining the gains of PID controllers under specific operating conditions. Nevertheless, when the system is not fully known and the operating conditions are variable and not known in advance, classical techniques are not entirely suitable for PID tuning. To overcome these drawbacks, many adaptive approaches have arisen, mainly from the field of artificial intelligence. In this work, we propose an incremental Q-learning strategy for adaptive PID control. To improve learning efficiency, we introduce a temporal memory into the learning process. While the memory remains invariant, a non-uniform specialization process is carried out, generating new limited subspaces of learning. An implementation on a real mobile robot demonstrates the applicability of the proposed approach for real-time simultaneous tuning of multiple adaptive PID controllers on a real system operating under variable conditions in a real environment.
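The abstract combines two standard ingredients: a discrete PID control law and tabular Q-learning that adapts the controller's gains online. The sketch below illustrates that general combination only; it is not the paper's incremental algorithm (the temporal memory and non-uniform subspace specialization are not reproduced). The first-order plant, the error discretization, and the gain-nudging action set are all illustrative assumptions.

```python
import random

random.seed(0)  # reproducible exploration for this illustration


class PID:
    """Discrete-time PID controller whose proportional gain is tuned online."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


class QTuner:
    """Generic tabular Q-learning agent; each action nudges the kp gain."""

    ACTIONS = (0.9, 1.0, 1.1)  # multiplicative adjustments to kp (assumed)

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.2):
        self.q = {}  # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, error):
        return round(error, 1)  # coarse, uniform discretization of the error

    def choose(self, s):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.randrange(len(self.ACTIONS))
        return max(range(len(self.ACTIONS)),
                   key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, reward, s2):
        # standard one-step Q-learning backup
        best = max(self.q.get((s2, b), 0.0) for b in range(len(self.ACTIONS)))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best - old)


# Hypothetical first-order plant y' = (u - y) / tau standing in for one
# actuator of the robot; the real system in the paper is more complex.
dt, tau, setpoint = 0.1, 0.5, 1.0
pid = PID(kp=1.0, ki=0.5, kd=0.05, dt=dt)
tuner = QTuner()

y, errors = 0.0, []
for step in range(300):
    error = setpoint - y
    errors.append(error)
    s = tuner.choose_state = tuner.state(error)
    a = tuner.choose(s)
    pid.kp = min(5.0, max(0.5, pid.kp * tuner.ACTIONS[a]))  # keep gain bounded
    u = pid.control(error)
    y += dt * (u - y) / tau                     # plant update
    s2 = tuner.state(setpoint - y)
    tuner.update(s, a, -abs(setpoint - y), s2)  # reward: small tracking error

print(f"final error: {setpoint - y:.4f}, tuned kp: {pid.kp:.2f}")
```

Because the plant is first-order and the gain is clipped to a stable range, the tracking error shrinks regardless of the exploration noise; the Q-table gradually learns which gain adjustments are rewarded in each error bucket.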
Original language | English |
---|---|
Pages (from-to) | 183-199 |
Number of pages | 17 |
Journal | Expert Systems with Applications |
Volume | 80 |
DOIs | |
Publication status | Published - 1 Sept 2017 |
Keywords
- Incremental Q-learning
- Mobile robots
- Non-linear control
- PID
- Reinforcement learning
ASJC Scopus subject areas
- General Engineering
- Computer Science Applications
- Artificial Intelligence