
PD Control with Auto-Tuned Gains Using RBF Networks for Enhanced Trajectory Tracking in Manipulator Robots

Open Access | Dec 2025

Figures & Tables

Fig. 1.

(a) Two-DOF manipulator diagram; (b) Desired positions in the workspace for RBF training

Fig. 2.

Interpolation of kp1(qd1, qd2) and kp2(qd1, qd2)

Fig. 3.

Responses for qd1 = –40° and qd2 = 120°. (a) Position errors. (b) Comparison of ℒ2 Norms

Fig. 4.

Responses for qd1 = –40° and qd2 = 120°. (a) Torque responses. (b) Angular velocities

Fig. 5.

Monte Carlo robustness of PD, Tanh, and PDN controllers under parametric uncertainty and (qd1, qd2) = (–40°, 120°)

Fig. 6.

Boxplots of robustness metrics (Ts, e(Ts), ISE, and the ℒ2 norm ∥e∥2) for PD, Tanh, and PDN controllers with (qd1, qd2) = (–40°, 120°), ±15% parametric uncertainty, and 200 Monte Carlo trials

Fig. 7.

Disturbance rejection for a 6 Nm, 0.15 s torque pulse at joint 2 (t = 3 s): q1 and q2 position errors for PD, Tanh, and PDN

Fig. 8.

Point-to-point "Owl" trajectory tracking. (a) Ideal trajectory. (b) Trajectory with the PDN controller. (c) Actuator speed with the Tanh controller. (d) Actuator speed with the PDN controller. (e) Comparison of ℒ2 norms. (f) Comparison of energy consumption.

Parameters of the simulated anthropomorphic arm manipulator

| Parameter | Value |
| --- | --- |
| $l_1$, $l_2$ (link lengths) | 0.45 m |
| $\tau_1^{\max}$ (shoulder), $\tau_2^{\max}$ (elbow) | 150 Nm, 15 Nm |
| $k_{g1}(q)$, $k_{g2}(q)$ | 40.28 Nm, 1.81 Nm |
| Inertia matrix | $M(q) = \begin{bmatrix} 2.351 + 0.167\cos(q_2) & 0.102 + 0.083\cos(q_2) \\ 0.102 + 0.083\cos(q_2) & 0.102 \end{bmatrix}$ |
| Coriolis matrix | $C(q,\dot q) = \begin{bmatrix} -0.1676\sin(q_2)\,\dot q_2 & -0.083\sin(q_2)\,\dot q_2 \\ 0.084\sin(q_2)\,\dot q_1 & 0 \end{bmatrix}$ |
| Gravitational torque vector | $g(q) = 9.81\begin{bmatrix} 3.92\sin(q_1) + 0.186\sin(q_1 + q_2) \\ 0.186\sin(q_1 + q_2) \end{bmatrix}$ |
| Friction coefficient matrix | $B = \begin{bmatrix} 2.288 & 0 \\ 0 & 0.175 \end{bmatrix}$ |

Comparative table of studies on PD controllers with variable gains

| Study | Controller Type | Variable Gain Structure | Stability Analysis Method | Validation Approach |
| --- | --- | --- | --- | --- |
| [4] | PD-like with variable gains | Variable; state-, position-, and velocity-dependent; smooth functions (e.g., cos²(tanh(error + velocity))) | Lyapunov theory; global asymptotic stability; gravity compensation required | Simulation; two-DOF direct-drive robot; joint regulation; ℒ2 norm |
| [15] | PD iterative neural-network learning (PDISN) | Likely variable/adaptive; neural network and iterative learning | Extended Lyapunov theories; stability type not specified | Simulation; manipulator characteristics not specified; scenario not specified |
| [16] | Proportional-derivative (PD) | Variable; tuned by self-organizing fuzzy algorithm | Not analyzed (no details) | Simulation; manipulator characteristics not specified; tracking control; position error metric |
| [17] | Self-tuning PD | Bounded, time-varying; neurofuzzy recurrent scheme | Lyapunov theory; semi-global exponential stability | Simulation; manipulator characteristics not specified; trajectory tracking |
| [18] | Nonlinear PID with fuzzy self-tuned PD gains | Variable, position-dependent; fuzzy logic | Not mentioned; global asymptotic stability; no gravity compensation | Experiments; type not specified; scenario and metrics not specified |
| [19] | Adaptive PD | Adaptive to gravity parameters | Not mentioned; global convergence | Simulation; three-DOF manipulator; point-to-point and tracking |
| [20] | PD-type robust | Variable, error-varying; parameterized by perturbing parameter | Singular perturbation theory; stability type not explicit | Physical experiment; planar two-DOF direct-drive robot; trajectory tracking |
| [21] | Adaptive iterative learning control (ILC)-PD | Variable; iterative learning, two iterative variables | Lyapunov theory; asymptotic convergence | Simulation; two-DOF manipulator; trajectory tracking |
| [22] | PD-type | Variable, state-dependent | Not mentioned; global asymptotic stability claimed | Physical experiment; two-DOF direct-drive arm; scenario not specified |
| [23] | Linear and nonlinear PD-type | Nonlinear functions of system states | Not mentioned; global asymptotic stability claimed | Simulation; single-link and two-DOF robots; trajectory tracking |
| This work | PD-like with variable gains | Variable; desired-position-dependent proportional gains with RBF interpolation networks trained offline | Lyapunov theory; global asymptotic stability; gravity compensation required | Simulation; two-DOF direct-drive robot; joint regulation; ℒ2 norm; point-to-point tracking; regulation performance evaluated with parametric uncertainties and external perturbations |
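The last row describes proportional gains interpolated over the desired position by offline-trained RBF networks, as illustrated in Fig. 2. A minimal sketch of that technique is shown below: a Gaussian RBF network is fit to sample gain values on a grid of desired positions and then queried at an arbitrary setpoint. The grid, the kernel width, and the toy gain surface are illustrative assumptions, not the paper's training data.

```python
import numpy as np

# Gaussian RBF interpolation of a desired-position-dependent gain
# kp(qd1, qd2). Training grid and gain values are illustrative
# placeholders, not taken from the paper.

def rbf_fit(centers, values, sigma):
    """Solve for weights so the RBF network interpolates the samples."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))   # Gram matrix of Gaussian kernels
    return np.linalg.solve(Phi, values)

def rbf_eval(x, centers, weights, sigma):
    """Evaluate the trained RBF network at a single query point x."""
    d2 = ((x - centers) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ weights

# 5x5 grid of desired positions (rad) with a smooth toy gain surface
grid = np.array([[a, b] for a in np.linspace(-1.5, 1.5, 5)
                        for b in np.linspace(-1.5, 1.5, 5)])
kp_samples = 150.0 + 40.0 * np.cos(grid[:, 0]) * np.cos(grid[:, 1])

w = rbf_fit(grid, kp_samples, sigma=0.5)
kp_query = rbf_eval(np.array([0.2, -0.4]), grid, w, sigma=0.5)
```

Because the network is trained offline, the controller only pays the cost of one kernel evaluation per joint at each setpoint change, keeping the online PD law as cheap as a fixed-gain one.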
DOI: https://doi.org/10.2478/ama-2025-0070 | Journal eISSN: 2300-5319 | Journal ISSN: 1898-4088
Language: English
Page range: 617 - 625
Submitted on: Sep 21, 2024 | Accepted on: Oct 5, 2025 | Published on: Dec 19, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Carlos MUÑIZ-MONTERO, Luis A. SÁNCHEZ-GASPARIANO, Javier LEMUS-LÓPEZ, published by Bialystok University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.