Listing by author "Yamac, Fatma"
Showing 1 - 2 of 2
Item: Force control of electro-active polymer actuators using model-free intelligent control (Sage Publications Ltd, 2021) Sancak, Caner; Yamac, Fatma; Itik, Mehmet; Alici, Gursel

In this paper, a model-free control framework is proposed to control the tip force of a cantilevered trilayer CPA and similar cantilevered smart actuators. The proposed control method eliminates the need to model the CPAs in controller design for each application, and it is based on online local estimation of the actuator dynamics. Because the controller has few parameters to tune, this control method provides a relatively easy design and implementation process for CPAs compared with other model-free controllers. Although not essential, a meta-heuristic particle swarm optimization (PSO) algorithm, which utilizes an initial baseline model that approximates the CPA dynamics, is used to optimize the controller performance. The performance of the optimized controller is investigated in simulation and experimentally. The proposed controller achieves successful results in terms of control performance, robustness, and repeatability compared with a conventional optimized PI controller.

Item: Position control of a planar cable-driven parallel robot using reinforcement learning (Cambridge Univ Press, 2022) Sancak, Caner; Yamac, Fatma; Itik, Mehmet

This study proposes a method based on reinforcement learning (RL) for point-to-point and dynamic reference position tracking control of a planar cable-driven parallel robot, which is a multi-input multi-output (MIMO) system. The method eliminates the use of a tension distribution algorithm in controlling the system's dynamics and inherently optimizes the cable tensions based on the reward function during the learning process. The deep deterministic policy gradient algorithm is utilized for training the RL agents in point-to-point and dynamic reference tracking tasks.
The performance of each agent is tested on the task for which it was specifically trained. Moreover, the agent trained on point-to-point tasks is also applied to dynamic reference tracking, and vice versa. The performance of the RL agents is compared with that of a classical PD controller. The results show that RL can perform quite well without requiring a different controller design for each task, provided the system's dynamics are learned well.
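The first abstract describes tuning a controller's few parameters with particle swarm optimization against a baseline model that approximates the actuator dynamics. The sketch below illustrates that general idea only: the first-order plant, the PI gain structure, the integral-squared-error cost, and all bounds and PSO coefficients are illustrative assumptions, not the paper's actual model or controller.

```python
import numpy as np

def simulate_pi(gains, setpoint=1.0, steps=200, dt=0.01):
    """Cost of a PI controller on an assumed first-order baseline plant
    (a hypothetical stand-in for the approximate actuator dynamics)."""
    kp, ki = gains
    tau = 0.1                      # assumed plant time constant
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x           # tracking error
        integ += e * dt
        u = kp * e + ki * integ    # PI control law
        x += dt * (-x + u) / tau   # first-order plant update
        cost += e * e * dt         # integral squared error
    return cost

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer over box-bounded parameters."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social terms (coefficients are typical defaults)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < gbest_val:
            gbest_val = vals.min()
            gbest = pos[vals.argmin()].copy()
    return gbest, gbest_val

best_gains, best_cost = pso(simulate_pi, bounds=[(0.0, 15.0), (0.0, 50.0)])
```

The optimized gains found against the baseline model would then serve as a starting point for the controller on the real actuator, which is the role the abstract assigns to the PSO step.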
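The second abstract states that cable tensions are optimized implicitly through the reward function rather than by a separate tension distribution algorithm. The paper's actual reward is not reproduced here; the function below is a hypothetical quadratic shape showing how a single reward can trade off tracking error against tension effort, with illustrative weights.

```python
import numpy as np

def reward(pos, target, tensions, w_err=1.0, w_tau=0.01):
    """Hypothetical RL reward for a cable-driven parallel robot:
    penalize position tracking error and, with a smaller weight,
    the squared cable tensions, so the learned policy keeps
    tensions low without an explicit distribution algorithm."""
    err = np.linalg.norm(np.asarray(pos) - np.asarray(target))
    effort = np.sum(np.square(tensions))
    return -(w_err * err ** 2 + w_tau * effort)
```

An RL agent (e.g. trained with DDPG, as in the abstract) maximizing such a reward would drive the end-effector to the target while implicitly balancing the redundant cable tensions.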