
Application of second-order sensitivity analysis to stabilisation of unstable continuous multi-degree-of-freedom parametric periodic systems

Open Access | Sep 2025


1 Introduction

Parametric vibrations are oscillatory movements that occur in mechanical systems or structures because of time-dependent (usually periodic) variation of parameters such as inertia or stiffness. From a mathematical point of view, parametric systems are described by differential equations with time-varying coefficients. Time is thus a parameter on which these coefficients depend, which is why both the equations and the vibrations are called parametric. The dependence on time is explicit, which means that an external source of energy exists. An external source of energy means that it is possible to accumulate that energy, which, in turn, means that unstable forms of vibration may occur, with amplitudes increasing over time. This phenomenon is called parametric resonance. Resonance regions appear at certain values of the ratio of the parameters, which determine the magnitude and manner of parametric excitation, to the frequency of this excitation. These vibrations occur in a direction different from the direction of action of the parametric load exciting the vibrations (usually perpendicular to it). Increasing the damping in the parametric system decreases the instability regions. Parametric resonance is the result of the instability of the differential equations describing these vibrations. The mathematical basis is the theory of Lyapunov, discussed, for example, in the works of Demidowicz [1], Skalmierski and Tylikowski [2], La Salle and Lefschetz [3], Leipholz [4], Yakubovitch and Starzhinski [5] and others.

In this sense, a nonlinear dynamic system is always a parametric system. It can be said that the dependence of stiffness (or inertia) on time is a natural result of their dependence on deformations that change over time. However, the dependence on time is not explicit, and therefore, these vibrations are generally not classified as parametric.

Linear parametric systems exhibit some characteristics of nonlinear systems and are therefore classified between linear systems with constant coefficients and nonlinear systems. The similarity to nonlinear systems is seen when the phenomenon of subharmonic resonance occurs in a linear parametric system. This effect does not occur in linear systems with constant coefficients.

The problem of parametric vibrations has been particularly intensively developed since the 1950s, when several works were published, for example Bolotin [6], Pipes [7], Evan-Ivanowski [8], Hsu [9, 10, 11], Hsu and Cheng [12], Benedetti [13], McWhannell [14], Paidoussis and Sundararajan [15], Ibrahim and Barr [16], and among Polish authors, Kamiński and Osiński [17], Osiński [18] and others.

In the early 1990s, Tondl [19, 20] published his works, in which he showed that not only external excitation, but also parametric or self-excitation of a basic system can be the source of autoparametric excitation for an additional system. An exhaustive review of physical models in which the phenomenon of autoparametric resonances occurs was made by Tondl and Nabergoj in [21]. The book [22] by Tondl, Ruijgrok, Verhulst and Nabergoj, describing examples of systems in which autoparametric resonances appear, is also of interest.

The 1990s also witnessed the publication of the first papers presenting methods that allowed not only detecting the dangers resulting from the possibility of parametric resonance, but also actively reducing the effects of such resonance. Works by Tylikowski [23], Yang and Tsao [24] and Osiński [25, 26] can be mentioned here. Such a direction of research, in turn, forced the development of theories in the field of parametric vibration sensitivity analysis.

From a mathematical point of view, sensitivity analysis is carried out by calculating the derivatives of the solutions of the differential equations describing the vibrations with respect to parameters, that is, quantities appearing in the description of the equation but not being the independent variable themselves. If the first derivatives are used, one speaks of first-order sensitivity; if the second derivatives are also used, it is second-order sensitivity, and so on. The literature on this subject is abundant, but two works are fundamental in this regard: a monograph that Frank [27] published in 1978 and a monograph that Haug, Choi, and Komkov [28] published in 1986. Other chronologically presented examples of literature in this field are: Rudisill and Bhatia [29], Watari and Iwamoto [30], Ray, Pister and Polak [31], Arora and Haug [32], Haug and Roysselet [33], Haug and Ehle [34], Van Belle [35], Dems and Mróz [36, 37], Hsien and Arora [38, 39], Adelman and Haftka [40], Wicher [41], Wicher and Nałęcz [42], Nałęcz and Wicher [43], Chen and Ku [44], Godoy [45], Mróz and Piekarski [46] and Park, Kapania and Kim [47]. The author’s works in this field are: Wójcicki and Chrobok [48], Wójcicki and Grosel [49, 50], Wójcicki [51, 52, 53, 54] and Ruta and Wójcicki [55].

In parametric systems, there are basically no solutions in analytical form. For this reason, studying sensitivity is complicated, and often the sensitivity analysis of parametric systems is itself the subject of research work. The problem of the sensitivity of parametric systems has been intensively developed since the turn of the century, as evidenced by the works of Lu and Murthy [56], Gu, Chen and Wang [57], Seyranian, Solem and Pedersen [58], Wójcicki [52, 54], and, in stochastic terms, for example, the works of Hien and Kleiber [59], Śniady, Sieniawska and Żukowski [60] and Mazur-Śniady and Śniady [61].

In the general case, in addition to parametric excitation, the system may be affected by a non-parametric excitation (constant or so-called forcing) with a direction similar to the direction of vibration. Resonant vibration amplification phenomena may then arise, which are caused not by the instability of the equations, but by the coupling of vibrations caused by these two types of excitation. The first (fundamental) subharmonic region of instability is usually the most important. The resulting forced vibrations can also induce resonant vibrations of further areas of instability. This issue was dealt with in the works of Hsu and Cheng [12], Kamiński and Osiński [17, 62, 18] and Klasztorny and Wójcicki [63].

Since the turn of the century, papers have also been published related to the analysis of nonlinear parametric systems. Although parametric nonlinear systems have already been analysed, for example, in the work of Bolotin [6], in view of the constant progress of computational techniques, the number of papers on this subject has clearly increased. Works by authors such as Shmidt and Tondl [64], Osiński [65], Szabelski and Warmiński [66], Shiau and Wu [67], Esmailzadeh and Nakhaie-Jazar [68, 69], Sinha and Butcher [70], Deolasi and Datta [71], Yu and Huseyin [72], Zhang and Peil [73] and Kamiński [74] should be noted here.

In the case of vibration instability of a parametric system, it is generally necessary to change the parameters (if possible) to get out of the region of solution instability. To achieve this goal, it is extremely useful to analyse the sensitivity of the eigenproblem of the monodromy matrix. Relatively new works on the sensitivity of the eigenproblem are Lee, Kim and Jung [75, 76], Scarpa and Curti [77], Lallemand, Level, Duveau and Mahieux [78] and Murthy, Lin and O’Hara [79].

Another way to remove the effects of vibration instability in dynamic systems is to use vibration eliminators. The dynamic vibration eliminator invented by Frahm in 1909 is generally a mass component. However, there are also other solutions, for example, active damping with the use of layers of piezoelectric foils (Tylikowski [23], Osiński [26]), in which the so-called intelligent materials are used (Dosch and Inman [80], Weiss and Carlson [81]). A liquid flowing from one part of the structure to another is also used, so that the resultant of the forces of gravity (or inertia) of the liquid counteracts the vibrations (Den Hartog [82], Gao, Kwok and Samali [83]). The so-called active methods of resonance avoidance are also used (Holnicki-Szulc [84], Mikhlin and Zhupiew [85], Glabisz [86, 87]).

The mass of a classic dynamic vibration eliminator is usually about 5%–20% of the mass of the structure whose vibration it is supposed to reduce. Such a mass is connected to the structure by means of an elastic or elastic-damping bond.

The problem of optimal tuning of the eliminator in a system with one dynamic degree of freedom and constant coefficients was formulated and solved by Den Hartog in [82] (see also Harris [88]). The two control parameters used in the tuning process are the stiffness and damping of the eliminator (at its assumed mass). In the paper [82], one can also find an analysis of the problem of vibration reduction by additional mass connected to the basic system by means of only an elastic constraint (without damping). Den Hartog showed analytically that if harmonic excitation is the only factor that excites the basic system, then such a dynamic vibration eliminator can effectively reduce the amplitudes of resonant vibrations near the natural frequencies of the system. In other cases, the vibration eliminator must also have damping properties.

Dynamic vibration eliminators and absorbers are increasingly used in construction and civil engineering. Examples of such technical applications are shown in Klasztorny [89] and Pakos [90], which present, respectively, methods of designing absorbers and of controlling the tension of cables to reduce the vibrations of a cable-stayed bridge.

The use of absorbers to dampen vibrations of tall buildings under wind load is presented, for example, in the work of Xu [91] and Majcher [92, 93]. An analysis of the possibility of active control of the characteristics of the dynamic vibration eliminator in the machine support to reduce the impact of the vibrating ground is presented, for example, by Yamaguchi, Yashime and Hirayama in their work [94].

This paper presents a method of automatic stabilisation of linear, unstable, time-continuous parametric periodic systems. The procedure is based on the concept of sensitivity analysis first presented in paper [54] and then in [95, 96, 97, 98, 99]. In paper [95], first-order sensitivity analysis was used. The method presented in this paper is a direct continuation and improvement of the method presented in that paper: in addition to the first-order sensitivity analysis, second-order sensitivity analysis is performed. The method is an alternative approach to that proposed in [56] and [57] and is similar to that presented in paper [58]. Second-order sensitivity analysis is a better tool; unfortunately, however, the algorithm becomes much more complicated. Using this method makes it possible to better determine those parameters of the system whose influence on the stabilisation procedure of such systems could be the greatest.

The method is also improved in that it allows the correct determination of the second derivatives of the characteristic exponents with respect to those system parameters on which the parametric excitation period depends, or when the period is itself a design parameter.

In addition, as in paper [95], this method is based on sensitivity analysis of absolute values of multipliers. From a mathematical point of view, the sensitivity analysis of multipliers is the calculation of their eigenderivatives with the use of derivatives of the monodromy matrix. The second-order sensitivity analysis of multipliers differs in that the second derivatives of monodromy matrix and multipliers, among others, are also used.

Eigenderivatives are extremely useful for determining the sensitivities of the dynamic response to variations of the system parameters. The innovation of the method is the idea of obtaining the non-homogeneous parametric sensitivity equation by analytically evaluating the derivative of the parametric homogeneous equation of motion with respect to a design parameter. The next step of the method is solving the sensitivity equation obtained in this way in order to evaluate the first and second derivatives of the monodromy matrix and, finally, the first and second derivatives of the multipliers. Ultimately, the method is based on the sensitivity analysis of the absolute values of the multipliers. Then, finally, the stabilisation procedure can be performed. Based on the concept of the directional derivative [100], a procedure similar to that described in work [9] was used. A gradient vector has been determined for the fastest decrease of the absolute value of the complex multipliers. This gradient is used to calculate the change in the design parameters needed to make the system vibrations stable. The method was tested on the same examples that were presented in paper [95], similar to those used in paper [54]. Generally, this work continues the topics developed by the author, among others, in the mentioned works [95, 96, 97, 98, 99], and is an attempt to summarise the author’s research and achievements in the field of parametric vibration stability analysis.

Since this work is a continuation of the work [95], there are naturally some repetitions. These repetitions are useful, as they allow the reader to analyse the presented method comfortably. This paper will, therefore, present only those elements of the work [95] and of the theory presented therein that are necessary to achieve the goal of this work without referring to the content of the cited works.

2 First-order sensitivity analysis

The theory of linear parametric systems is also discussed in the earlier paper [95], of which this paper is a continuation. A more complete description can be found in the papers [1, 99].

A linear non-homogeneous periodic parametric system of n linear second-order differential equations of motion can be written as a first-order non-homogeneous periodic-coefficient system
(1)
$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t) + \mathbf{f}(t), \qquad \mathbf{A}(t) = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{B}^{-1}(t)\mathbf{K}(t) & -\mathbf{B}^{-1}(t)\mathbf{C}(t) \end{bmatrix}, \qquad \mathbf{f}(t) = \begin{bmatrix} \mathbf{0} \\ \mathbf{B}^{-1}(t)\mathbf{F}(t) \end{bmatrix}$$
where the 2n × 2n system matrix A(t) is periodic with period T, the 2n-dimensional vectors x(t) and f(t) are the vector of state variables and the external excitation, respectively, B(t), C(t), K(t) are square n × n real matrices of inertia, damping and stiffness, periodic with period T, and F(t) is an n-dimensional excitation column vector.
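As a purely computational illustration of Eq. (1) (not part of the original derivation), the sketch below in Python with NumPy shows how the state-space matrices may be assembled from given periodic inertia, damping and stiffness matrices; the callables B, C, K and F are placeholders to be supplied by the user (the paper itself used the Mathematica system for such calculations).

```python
import numpy as np

def state_space(B, C, K, F=None):
    """Build A(t) and f(t) of Eq. (1) from callables returning the n x n
    periodic matrices B(t), C(t), K(t) and the excitation vector F(t)."""
    def A(t):
        n = B(t).shape[0]
        Binv = np.linalg.inv(B(t))
        top = np.hstack([np.zeros((n, n)), np.eye(n)])
        bottom = np.hstack([-Binv @ K(t), -Binv @ C(t)])
        return np.vstack([top, bottom])

    def f(t):
        n = B(t).shape[0]
        if F is None:
            return np.zeros(2 * n)
        return np.concatenate([np.zeros(n), np.linalg.inv(B(t)) @ F(t)])

    return A, f
```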

From the point of view of the second-order sensitivity analysis carried out in this paper, the formula contained in the paper [95] will be useful later:
(2)
$$\{\boldsymbol{\rho}'_p\} = \mathbf{L}^{\mathrm{T}}\mathbf{D}'_p\mathbf{R} + \mathbf{L}^{\mathrm{T}}\mathbf{D}\,\mathbf{R}'_p - \mathbf{L}^{\mathrm{T}}\mathbf{R}'_p\{\boldsymbol{\rho}\}$$
where R (det R ≠ 0) and L^T (det L^T ≠ 0) are the right-side and left-side modal matrices, respectively, and the multipliers {ρ} are the eigenvalues of the monodromy matrix D.

For a single multiplier, Eq. (2) can be written in the form
(3)
$$\left(\mathbf{D}'_p - \rho'_p\mathbf{I}\right)\mathbf{r} = -\left(\mathbf{D} - \rho\mathbf{I}\right)\mathbf{r}'_p$$
from which, as proved in the paper [95], follows the simpler equation for the eigenderivatives of multipliers
(4)
$$\rho'_p = \mathbf{l}^{\mathrm{T}}\mathbf{D}'_p\,\mathbf{r} \quad \text{or} \quad \{\boldsymbol{\rho}'_p\} = \mathrm{diag}\left(\mathbf{L}^{\mathrm{T}}\mathbf{D}'_p\,\mathbf{R}\right)$$
One can see that to calculate the derivatives of the multipliers of the monodromy matrix D, there is no need to know (as in Eq. (2)) the derivative of the right-hand modal matrix R with respect to the design parameter p, that is, $\mathbf{R}'_p = \partial\mathbf{R}/\partial p$, but only the derivative of the monodromy matrix, that is, $\mathbf{D}'_p = \partial\mathbf{D}/\partial p$.

It was also proved in [95] that the first derivative of the monodromy matrix with respect to parameter p can be found in accordance with the formula
(5)
$$\mathbf{D}'_p = \mathbf{D}\int_0^T \mathbf{X}^{-1}(\tau)\,\mathbf{A}'_p(\tau)\,\mathbf{X}(\tau)\,d\tau$$
This formula can be evaluated analytically, when the analytical form of the monodromy matrix D exists, or numerically in any other case, and the result may be used to calculate the derivatives of the multipliers in accordance with Eq. (4).
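A minimal numerical sketch of Eqs (4) and (5) is given below (Python with NumPy/SciPy, an illustration rather than the author's implementation). It integrates the fundamental matrix, evaluates the integral of Eq. (5) by the trapezoidal rule and takes the left modal matrix as the inverse of the right one, which presumes distinct multipliers.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(A, T, dim):
    """Fundamental matrix X(t) (dense interpolant) and monodromy matrix D = X(T).

    A   -- callable returning the (dim x dim) system matrix A(t) of Eq. (1)
    dim -- state dimension (2n in the notation of the paper)
    """
    rhs = lambda t, y: (A(t) @ y.reshape(dim, dim)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(dim).ravel(),
                    dense_output=True, rtol=1e-9, atol=1e-12)
    X = lambda t: sol.sol(t).reshape(dim, dim)
    return X, X(T)

def multiplier_derivatives(A, dA_dp, T, dim, steps=2000):
    """Eq. (5) for D'_p and Eq. (4) for the multiplier derivatives."""
    X, D = monodromy(A, T, dim)
    ts = np.linspace(0.0, T, steps + 1)
    G = [np.linalg.solve(X(t), dA_dp(t) @ X(t)) for t in ts]   # X^-1 A'_p X
    integral = sum(0.5 * (G[i] + G[i + 1]) * (ts[i + 1] - ts[i])
                   for i in range(steps))                      # trapezoidal rule
    Dp = D @ integral                                          # Eq. (5)
    rho, R = np.linalg.eig(D)        # multipliers and right modal matrix
    LT = np.linalg.inv(R)            # left modal matrix normalised so L^T R = I
    rho_p = np.diag(LT @ Dp @ R)     # Eq. (4)
    return rho, rho_p, D, Dp
```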

In the case when the period of the parametric excitation T depends on the design parameter p, the first derivative of the monodromy matrix can be written as [95]
(6)
$$\tilde{\mathbf{D}}'_p = \mathbf{D}'_p + \bar{\mathbf{D}}'_p$$
where
(7)
$$\mathbf{D}'_p = \int_0^{T(p)} \left[\mathbf{A}'_p(t,p)\,\mathbf{X}(t,p) + \mathbf{A}(t,p)\,\mathbf{X}'_p(t,p)\right]dt$$
and
(8)
$$\bar{\mathbf{D}}'_p = \mathbf{A}(T)\,\mathbf{D}\,T'_p$$
The following notation was adopted in Eqs (6-8):
$$T'_p = \frac{dT}{dp},\qquad \mathbf{A}'_p = \frac{\partial\mathbf{A}(t,p)}{\partial p},\qquad \mathbf{X}'_p = \frac{\partial\mathbf{X}(t,p)}{\partial p}.$$
The matrix $\mathbf{X}'_p(t,p)$ in Eq. (7) is the derivative of the fundamental matrix of solutions with respect to the design parameter p. This matrix can be found [95] in accordance with the formula
(9)
$$\mathbf{X}'_p(t,p) = \mathbf{X}(t,p)\int_0^t \mathbf{X}^{-1}(\tau,p)\,\mathbf{A}'_p(\tau,p)\,\mathbf{X}(\tau,p)\,d\tau$$
and can be obtained analytically (when the analytical form of the fundamental matrix X(t) exists) or numerically by solving a non-homogeneous sensitivity equation with zero initial conditions.
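When the excitation period depends on the design parameter, Eqs (6)-(8) apply. The following sketch (Python, of the same hedged character as above) obtains D'_p of Eq. (7) as the terminal value of the non-homogeneous sensitivity equation with zero initial conditions, which is equivalent to Eq. (9) evaluated at t = T, and then adds the correction of Eq. (8).

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_derivative_with_period(A, dA_dp, T, dT_dp, dim):
    """Sketch of Eqs (6)-(9): integrate X and X'_p together (the sensitivity
    equation with zero initial conditions) and add the period correction of
    Eq. (8) when T depends on the design parameter p."""
    def rhs(t, y):
        X  = y[:dim*dim].reshape(dim, dim)
        Xp = y[dim*dim:].reshape(dim, dim)
        dX  = A(t) @ X                       # fundamental-matrix equation
        dXp = dA_dp(t) @ X + A(t) @ Xp       # non-homogeneous sensitivity equation
        return np.concatenate([dX.ravel(), dXp.ravel()])
    y0 = np.concatenate([np.eye(dim).ravel(), np.zeros(dim*dim)])  # X(0)=I, X'_p(0)=0
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
    D  = sol.y[:dim*dim, -1].reshape(dim, dim)
    Dp = sol.y[dim*dim:, -1].reshape(dim, dim)   # Eq. (7): X'_p(T)
    return D, Dp + A(T) @ D * dT_dp              # Eq. (6) with the term of Eq. (8)
```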

3 Second-order sensitivity analysis

3.1 Second derivative of multipliers with respect to design parameter

Calculating the derivative of both sides of Eq. (3) with respect to parameter $p_j$ and multiplying from the left by the left-hand eigenvector $\mathbf{l}^{\mathrm{T}}$ and from the right by the right-hand eigenvector $\mathbf{r}$ of the monodromy matrix leads to the equation
(10)
$$\mathbf{l}^{\mathrm{T}}\mathbf{D}''_{p_i p_j}\mathbf{r} - \rho''_{p_i p_j}\,\mathbf{l}^{\mathrm{T}}\mathbf{r} + \mathbf{l}^{\mathrm{T}}\mathbf{D}'_{p_i}\mathbf{r}'_{p_j} - \rho'_{p_i}\mathbf{l}^{\mathrm{T}}\mathbf{r}'_{p_j} = -\mathbf{l}^{\mathrm{T}}\mathbf{D}'_{p_j}\mathbf{r}'_{p_i} + \rho'_{p_j}\mathbf{l}^{\mathrm{T}}\mathbf{r}'_{p_i} - \mathbf{l}^{\mathrm{T}}\left(\mathbf{D} - \rho\mathbf{I}\right)\mathbf{r}''_{p_i p_j}$$
Under the assumption that $\mathbf{l}^{\mathrm{T}}$ is a left-side eigenvector of the monodromy matrix D, the last term of Eq. (10) is equal to zero. Moreover, based on the assumption that the scalar product of the eigenvectors is normalised,
(11)
$$\mathbf{l}^{\mathrm{T}}\mathbf{r} = 1$$
and, in addition, that the scalar product of the vectors satisfies the orthogonality condition
(12)
$$\mathbf{l}^{\mathrm{T}}\mathbf{r}'_{p_i} = 0$$
it is possible to obtain from Eq. (10) the formula for the second derivative of the multiplier in a simpler form:
(13)
$$\rho''_{p_i p_j} = \mathbf{l}^{\mathrm{T}}\mathbf{D}''_{p_i p_j}\mathbf{r} + \mathbf{l}^{\mathrm{T}}\mathbf{D}'_{p_i}\mathbf{r}'_{p_j} + \mathbf{l}^{\mathrm{T}}\mathbf{D}'_{p_j}\mathbf{r}'_{p_i}$$
From Eq. (13), it follows that to calculate the second derivative of the multiplier, in addition to the derivatives of the monodromy matrix $\mathbf{D}'_{p_i}$ and $\mathbf{D}'_{p_j}$, one must know the second derivative of the monodromy matrix $\mathbf{D}''_{p_i p_j}$ and the derivatives of the right-side eigenvectors of the monodromy matrix $\mathbf{r}'_{p_i}$ and $\mathbf{r}'_{p_j}$. The derivatives $\mathbf{D}'_{p_i}$ and $\mathbf{D}'_{p_j}$ can be calculated from Eq. (5), or from Eqs (6-8) when the parametric excitation period depends on the design variable or is itself a design variable.

In the following sections, it will be shown how to determine the unknown matrix $\mathbf{D}''_{p_i p_j}$ and the unknown vectors $\mathbf{r}'_{p_i}$ and $\mathbf{r}'_{p_j}$.
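Once those quantities are available, Eq. (13) itself is a one-line computation; the sketch below (Python) merely transcribes it, with the inputs taken from Sections 3.2 and 3.3.

```python
import numpy as np

def second_multiplier_derivative(l, r, D2, Dpi, Dpj, rpi, rpj):
    """Eq. (13): second mixed derivative of a single multiplier.

    l, r     -- left/right eigenvectors of the monodromy matrix, with l @ r = 1
    D2       -- second derivative of the monodromy matrix (Section 3.2)
    Dpi, Dpj -- first derivatives of the monodromy matrix (Eq. (5) or Eqs (6)-(8))
    rpi, rpj -- derivatives of the right eigenvector (Section 3.3, Eq. (30))
    """
    return l @ D2 @ r + l @ Dpi @ rpj + l @ Dpj @ rpi
```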

3.2 Second derivatives of a monodromy matrix with respect to design parameters

The second derivatives of the monodromy matrix can be determined by calculating the derivative of the monodromy matrix defined by Eq. (6), taking into account Eqs (7) and (8), that is,
(14)
$$\tilde{\mathbf{D}}''_{p_i p_j} = \frac{d\tilde{\mathbf{D}}'_{p_i}}{dp_j} = \frac{d^2\tilde{\mathbf{D}}}{dp_i\,dp_j} = \overset{\frown}{\mathbf{D}}''_{p_i p_j} + \overset{\smile}{\mathbf{D}}''_{p_i p_j}$$
The first component on the right side of Eq. (14) can be calculated using Eq. (7):
(15)
$$\overset{\frown}{\mathbf{D}}''_{p_i p_j} = \frac{d\mathbf{D}'_{p_i}}{dp_j} = \int_0^{T(p_j)} \frac{\partial}{\partial p_j}\left[\mathbf{A}'_{p_i}(t,p_j)\,\mathbf{X}(t,p_j) + \mathbf{A}(t,p_j)\,\mathbf{X}'_{p_i}(t,p_j)\right]dt + \left[\mathbf{A}'_{p_i}(T)\,\mathbf{D} + \mathbf{A}(T)\,\mathbf{D}'_{p_i}\right]\frac{dT}{dp_j} = \mathbf{D}''_{p_i p_j} + \left[\mathbf{A}'_{p_i}(T)\,\mathbf{D} + \mathbf{A}(T)\,\mathbf{D}'_{p_i}\right]T'_{p_j}$$
where $\mathbf{X}'_{p_i}(t,p_j)$ is obtained from the expression described by Eq. (9).

The second component on the right side of Eq. (14), that is, the matrix $\overset{\smile}{\mathbf{D}}''_{p_i p_j}$, is determined by calculating the derivative of the second component on the right side of Eq. (6), described by Eq. (8):
(16)
$$\overset{\smile}{\mathbf{D}}''_{p_i p_j} = \frac{d}{dp_j}\left[\mathbf{A}\left(T(p_j),p_j\right)\,\mathbf{D}\left(T(p_j),p_j\right)T'_{p_i}(p_j)\right] = \mathbf{A}'_{p_j}(T)\,\mathbf{D}\,T'_{p_i} + \mathbf{A}(T)\,\tilde{\mathbf{D}}'_{p_j}T'_{p_i} + \mathbf{A}(T)\,\mathbf{D}\,T''_{p_i p_j}$$
Considering that the matrix $\tilde{\mathbf{D}}'_{p_i}$ in Eq. (14) is defined by Eqs (6-8), the matrix $\overset{\smile}{\mathbf{D}}''_{p_i p_j}$ is finally obtained in the form
(17)
$$\overset{\smile}{\mathbf{D}}''_{p_i p_j} = \mathbf{A}'_{p_j}(T)\,\mathbf{D}\,T'_{p_i} + \mathbf{A}(T)\,\mathbf{D}'_{p_j}T'_{p_i} + \mathbf{A}^2(T)\,\mathbf{D}\,T'_{p_j}T'_{p_i} + \mathbf{A}(T)\,\mathbf{D}\,T''_{p_i p_j}$$
Ultimately, the second derivative of the monodromy matrix can be written as a sum of two matrices
(18)
$$\tilde{\mathbf{D}}''_{p_i p_j} = \mathbf{D}''_{p_i p_j} + \bar{\mathbf{D}}''_{p_i p_j}$$
The matrix $\bar{\mathbf{D}}''_{p_i p_j}$, calculated based on Eqs (16) and (17), is
(19)
$$\bar{\mathbf{D}}''_{p_i p_j} = \left[\mathbf{A}'_{p_i}(T)\,\mathbf{D} + \mathbf{A}(T)\,\mathbf{D}'_{p_i}\right]T'_{p_j} + \left[\mathbf{A}'_{p_j}(T)\,\mathbf{D} + \mathbf{A}(T)\,\mathbf{D}'_{p_j}\right]T'_{p_i} + \mathbf{A}^2(T)\,\mathbf{D}\,T'_{p_j}T'_{p_i} + \mathbf{A}(T)\,\mathbf{D}\,T''_{p_i p_j}$$
and contains the complement that extends the algorithm for the second derivative of the monodromy matrix to the cases where the period of parametric excitation is a design parameter or depends on the design parameter with respect to which the sensitivity is analysed.

The matrix $\mathbf{D}''_{p_i p_j}$ in Eq. (18) is defined by
(20)
$$\mathbf{D}''_{p_i p_j} = \int_0^{T} \frac{\partial}{\partial p_j}\left[\mathbf{A}'_{p_i}(t,p_j)\,\mathbf{X}(t,p_j) + \mathbf{A}(t,p_j)\,\mathbf{X}'_{p_i}(t,p_j)\right]dt = \int_0^{T}\left[\mathbf{A}''_{p_i p_j}(t)\,\mathbf{X}(t,p_j) + \mathbf{A}'_{p_i}(t)\,\mathbf{X}'_{p_j}(t) + \mathbf{A}'_{p_j}(t)\,\mathbf{X}'_{p_i}(t) + \mathbf{A}(t)\,\mathbf{X}''_{p_i p_j}(t)\right]dt$$
and denotes the second mixed derivative of the monodromy matrix in cases when the period of the parametric excitation is not a design parameter and does not depend on the design parameter.

To determine practically useful formulas for the calculation of the matrix $\mathbf{D}''_{p_i p_j}$, the second derivative of the monodromy matrix can be derived from the relation
(21)
$$\mathbf{D}''_{p_i p_j} = \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}'_{p_j}(t)\,\mathbf{X}(t)\int_0^{t}\mathbf{X}^{-1}(\tau)\,\mathbf{A}'_{p_i}(\tau)\,\mathbf{X}(\tau)\,d\tau\,dt + \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}'_{p_i}(t)\,\mathbf{X}(t)\int_0^{t}\mathbf{X}^{-1}(\tau)\,\mathbf{A}'_{p_j}(\tau)\,\mathbf{X}(\tau)\,d\tau\,dt + \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}''_{p_i p_j}(t)\,\mathbf{X}(t)\,dt$$
Changing the order of integration and the designation of variables in the second integral of Eq. (21), a new formula is obtained:
(22)
$$\mathbf{D}''_{p_i p_j} = \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}'_{p_j}(t)\,\mathbf{X}(t)\int_0^{t}\mathbf{X}^{-1}(\tau)\,\mathbf{A}'_{p_i}(\tau)\,\mathbf{X}(\tau)\,d\tau\,dt + \mathbf{D}\int_0^{T}\int_t^{T}\mathbf{X}^{-1}(\tau)\,\mathbf{A}'_{p_j}(\tau)\,\mathbf{X}(\tau)\,d\tau\;\mathbf{X}^{-1}(t)\,\mathbf{A}'_{p_i}(t)\,\mathbf{X}(t)\,dt + \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}''_{p_i p_j}(t)\,\mathbf{X}(t)\,dt$$
If the matrix factor $\mathbf{X}^{-1}(\tau)\mathbf{A}'_{p_j}(\tau)\mathbf{X}(\tau)$ commutes with the matrix factor $\mathbf{X}^{-1}(\tau)\mathbf{A}'_{p_i}(\tau)\mathbf{X}(\tau)$, it becomes possible to replace the sum of the first two double integrals over the triangle in Eq. (22) by one double integral over a rectangle. Finally, this double integral can be replaced by a product of two integrals. Eq. (21) then takes the form
(23)
$$\mathbf{D}''_{p_i p_j} = \mathbf{D}\cdot\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}'_{p_i}(t)\,\mathbf{X}(t)\,dt\cdot\int_0^{T}\mathbf{X}^{-1}(\tau)\,\mathbf{A}'_{p_j}(\tau)\,\mathbf{X}(\tau)\,d\tau + \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}''_{p_i p_j}(t)\,\mathbf{X}(t)\,dt$$
In this case, the calculation of the second derivative of the monodromy matrix can be done with the equation
(24)
$$\mathbf{D}''_{p_i p_j} = \mathbf{D}'_{p_j}\mathbf{D}^{-1}\mathbf{D}'_{p_i} + \mathbf{D}\int_0^{T}\mathbf{X}^{-1}(t)\,\mathbf{A}''_{p_i p_j}(t)\,\mathbf{X}(t)\,dt$$
Unfortunately, in the general case, the factors under the integrals are not commutative and Eq. (20) has to be used.
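In that general, non-commutative case, a convenient numerical route to Eq. (20) is to integrate the second-order sensitivity equation directly, as in the following sketch (Python, an illustrative transcription under the assumption that the period T does not depend on the design parameters; otherwise the terms of Eq. (19) must be added).

```python
import numpy as np
from scipy.integrate import solve_ivp

def second_monodromy_derivative(A, Ai, Aj, Aij, T, dim):
    """Eq. (20) evaluated by integrating the second-order sensitivity equation
    together with X, X'_pi and X'_pj (all sensitivities start from zero).
    Ai, Aj, Aij are callables returning A'_pi(t), A'_pj(t) and A''_pipj(t)."""
    def rhs(t, y):
        X, Xi, Xj, Xij = (y[k*dim*dim:(k+1)*dim*dim].reshape(dim, dim) for k in range(4))
        dX   = A(t) @ X
        dXi  = Ai(t) @ X + A(t) @ Xi
        dXj  = Aj(t) @ X + A(t) @ Xj
        dXij = Aij(t) @ X + Ai(t) @ Xj + Aj(t) @ Xi + A(t) @ Xij
        return np.concatenate([m.ravel() for m in (dX, dXi, dXj, dXij)])
    y0 = np.concatenate([np.eye(dim).ravel(), np.zeros(3 * dim * dim)])
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
    return sol.y[3*dim*dim:, -1].reshape(dim, dim)   # X''_{pi pj}(T) = D''_{pi pj}
```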

Ultimately, in Eqs (6) and (24) for the first and second derivatives of multipliers, instead of the matrices $\mathbf{D}'_p$ and $\mathbf{D}''_{p_i p_j}$, the matrices $\tilde{\mathbf{D}}'_p$ and $\tilde{\mathbf{D}}''_{p_i p_j}$ ought to be substituted; these differ from the matrices $\mathbf{D}'_p$ and $\mathbf{D}''_{p_i p_j}$ by the matrix components $\bar{\mathbf{D}}'_p$ and $\bar{\mathbf{D}}''_{p_i p_j}$, described by Eqs (8) and (19), respectively.

3.3 Derivatives of right-side eigenvectors with respect to design parameters

Firstly, referring again to Eq. (3), one needs to determine the unknown vectors $\mathbf{r}'_{p_i}$ and $\mathbf{r}'_{p_j}$, which must additionally satisfy the condition of Eq. (11). Eq. (3) shows that, because of Eqs (4), (6), (7) and (8), the only unknown is the vector $\mathbf{r}'_{p_i}$. To determine it from Eq. (3), one needs to invert the matrix (D − ρI). Unfortunately, ρ is an eigenvalue of the monodromy matrix D, so the matrix (D − ρI) is singular and cannot be inverted. This does not mean, however, that there is no solution of Eq. (3).

Based on the Kronecker–Capelli theorem [101], it is known that if the rank of the principal matrix of a non-homogeneous algebraic system of equations is equal to the rank of the augmented (extended) matrix, then a solution of the system of algebraic equations exists. If, at the same time, this rank is smaller than the number of unknowns, the system is indeterminate, and there are arbitrarily (infinitely) many solutions of such a system of equations.

It is assumed that Eq. (3) is an indeterminate system of equations with respect to the unknown $\mathbf{r}'_{p_i}$. This assumption is strongly motivated by the following facts:

  • Firstly, from Eq. (3), one can determine the vector r from the formula
(25)
$$\mathbf{r} = -\left(\mathbf{D}'_{p_i} - \rho'_{p_i}\mathbf{I}\right)^{-1}\left(\mathbf{D} - \rho\mathbf{I}\right)\mathbf{r}'_{p_i}$$
and, of course, the condition r ≠ 0 must be fulfilled, since the vector r is a non-zero solution of Eq. (3) (it is assumed that det R ≠ 0). This means that the inverse matrix $(\mathbf{D}'_{p_i} - \rho'_{p_i}\mathbf{I})^{-1}$ exists, and it follows that $\rho'_{p_i}$ is not an eigenvalue of the matrix $\mathbf{D}'_{p_i}$. From Eq. (25), one can further conclude that the vector $\mathbf{r}'_{p_i} \ne \mathbf{0}$ is not an eigenvector of the matrix (D − ρI) corresponding to the zero eigenvalue; otherwise, the vector r would have to be a zero vector, which is not true. The vector $\mathbf{r}'_{p_i}$ must therefore exist; if it did not, the vector r would not exist either. Conclusion: the system of Eq. (3) is not inconsistent.

  • Secondly, the vector r, being an eigenvector of the matrix D, is, by definition, determined only up to a constant multiplier. So, there are infinitely many eigenvectors, and therefore there should also be an infinite number of derivatives of these vectors. In general, the system of Eq. (3) should be an indeterminate system of equations with respect to the unknown $\mathbf{r}'_{p_i}$.

To solve the indeterminate system of Eq. (3) with respect to the unknown $\mathbf{r}'_{p_i}$, and to simultaneously normalise this vector so that it satisfies the condition in Eq. (11), one can add to the system of Eq. (3) one more equation, written in the form of the scalar product of Eq. (12). This can be done when the system of Eq. (3) is indeterminate and there is one more unknown than the number of independent equations; one unknown can then certainly be chosen freely. Formally, the linearly dependent equation of the system of Eq. (3) is replaced by Eq. (12). The rank of the matrix of the system increases by one, becoming equal to the number of unknowns. The principal matrix thus ceases to be singular. The system of equations obtained in this way can already be formally solved.

However, this method of proceeding has the disadvantage that it is necessary to replace the ‘proper’ equation of the initial system of Eq. (3) with the new equation, that is, the one that is ‘responsible’ for the singularity of the matrix (D − ρI). According to the theory of linear algebra (e.g. [101]), determining which equation is linearly dependent is done by examining the ranks of the minors of the matrix (D − ρI). This is a troublesome procedure and, to avoid it, it is more convenient to modify it a bit.

Multiplying both sides of Eq. (12) by the vector $\bar{\mathbf{l}}$, the complex conjugate of the left-side eigenvector $\mathbf{l}^{\mathrm{T}}$ of the monodromy matrix D, the following matrix equation is obtained:
(26)
$$\boldsymbol{\Omega}\,\mathbf{r}'_{p_i} = \mathbf{0},$$
where the matrix Ω is defined by the formula
(27)
$$\boldsymbol{\Omega} = \bar{\mathbf{l}}\,\mathbf{l}^{\mathrm{T}}$$
The matrix of Eq. (27) has the property that its determinant det Ω = 0 and the vector $\mathbf{r}'_{p_i}$ is its eigenvector corresponding to a zero eigenvalue. To be more precise, the rank of the matrix Ω is equal to 1. In accordance with Eq. (27), all rows of the matrix Ω are proportional to the vector $\mathbf{l}^{\mathrm{T}}$. The matrix Ω is a Hermitian matrix, that is, one that satisfies the condition $\boldsymbol{\Omega}^{\mathrm{T}} = \bar{\boldsymbol{\Omega}}$, and it has real elements on its main diagonal.

Subtracting Eq. (26) from Eq. (3) (subtracting the zero vector according to Eq. (26)), one finally obtains
(28)
$$\left(\mathbf{D} - \rho\mathbf{I} - \boldsymbol{\Omega}\right)\mathbf{r}'_{p_i} = -\left(\mathbf{D}'_{p_i} - \rho'_{p_i}\mathbf{I}\right)\mathbf{r}$$
All equations are modified, including the one that corresponds to the row of the matrix (D − ρI) ‘responsible’ for its singularity. The matrix (D − ρI − Ω) on the left side of Eq. (28), denoted by
(29)
$$\mathbf{C} = \left(\mathbf{D} - \rho\mathbf{I} - \boldsymbol{\Omega}\right)$$
is, therefore, non-singular (det C ≠ 0) and can be formally inverted. Therefore, from Eq. (28), one can unambiguously determine the vector $\mathbf{r}'_{p_i}$:
(30)
$$\mathbf{r}'_{p_i} = -\mathbf{C}^{-1}\left(\mathbf{D}'_{p_i} - \rho'_{p_i}\mathbf{I}\right)\mathbf{r}$$
which, at the same time, satisfies the condition in Eq. (12). Similarly, the vector $\mathbf{r}'_{p_j}$ can be calculated. Ultimately, it becomes possible to drop two terms in Eq. (10) and obtain an equation in the form of Eq. (13).
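The whole construction of Eqs (26)-(30) reduces to a few matrix operations; a hedged sketch in Python is given below (the eigenvectors are assumed to be normalised according to Eq. (11)).

```python
import numpy as np

def right_eigenvector_derivative(D, Dp, rho, rho_p, l, r):
    """Eqs (26)-(30): derivative r'_p of the right eigenvector of D.

    l, r are the left/right eigenvectors belonging to the multiplier rho,
    normalised so that l @ r = 1; rho_p is the multiplier derivative of Eq. (4).
    """
    n = D.shape[0]
    Omega = np.outer(np.conj(l), l)                          # Eq. (27)
    C = D - rho * np.eye(n) - Omega                          # Eq. (29), non-singular
    rp = -np.linalg.solve(C, (Dp - rho_p * np.eye(n)) @ r)   # Eq. (30)
    return rp                                                # also fulfils l @ rp = 0, Eq. (12)
```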

In the theory presented in this section, it is assumed that there are no multiple multipliers. In the case of multiple multipliers, the analysis of the problem becomes much more complicated mathematically and it is rather a case for a separate work.

4 Parametric periodic systems’ stabilisation method

A full description of the elements of the theory common to both works can be found in paper [95]. Based on the concept of the directional derivative [100], a gradient vector has been determined for the fastest decrease of the absolute value of the complex multiplier whose absolute value is the greatest. This gradient is used to calculate the change in design parameters needed to make the system vibrations stable. The resulting formulas can be interpreted as an expansion of the function describing the multiplier modulus into a Taylor series, including the first three terms of the expansion.

As has been mentioned, a gradient vector has been determined for the fastest decrease of the absolute value of the complex multipliers. Alternatively, the result can be viewed as the first three terms of the formula of study [58], where the problem was solved using the small parameter method. The algorithm was tested on the same examples that were previously analysed in work [95]. In particular, the effectiveness of the method was compared when, in addition to other system parameters, the parametric excitation period was also a design parameter. This is the fundamental difference between the algorithm presented in this work and that described in the work [58]. The possibility of a one-step exit from the area of instability was also tested.

4.1 Gradient of absolute value of multiplier [95]

Due to the specificity of the problem, the stabilisation algorithm needs not so much the gradients of the multipliers as the gradients of their absolute values, because the absolute values determine the stability or instability of the parametric system. In accordance with the theory presented in [95], the gradient of the absolute value of the multiplier, as a vector function of the design parameters p = [p1,..., pn], can be written in the form
(31)
$$\mathbf{g} = \mathrm{grad}\left|\rho(\mathbf{p})\right| = \frac{\operatorname{Re}\rho}{\left|\rho\right|}\,\mathbf{g}_r + \frac{\operatorname{Im}\rho}{\left|\rho\right|}\,\mathbf{g}_u$$
where the vector g_r is the gradient of the real part and the vector g_u is the gradient of the imaginary part of the multiplier.

According to the geometric interpretation of the scalar product of two vectors, a directional derivative is the projection of a gradient vector on the direction of a vector e. The vector e^T = [e1,..., en] is an arbitrary vector. It is also accepted in the literature that this vector is normalised, that is, in the space of the parameters it fulfils the condition
(32)
$$\left\|\mathbf{e}\right\| = \sqrt{e_1^2 + e_2^2 + \cdots + e_n^2} = 1$$
According to this interpretation, the value of the directional derivative is largest when its direction is the same as the direction of the gradient vector. Since the fastest possible reduction of the multiplier modulus is sought, e = −g, which, after normalising in accordance with Eq. (32), finally gives
(33)
$$\mathbf{e} = -\frac{\mathbf{g}}{\left|\mathbf{g}\right|} = -\Delta\mathbf{g}$$
Therefore, the change in the values of the design parameters, leading to the fastest reduction in the absolute value of the multiplier, is determined by
(34)
$$\Delta\mathbf{p} = \varepsilon\,\mathbf{e} = -\varepsilon\,\Delta\mathbf{g}$$
where ε is a small numeric parameter.
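The corresponding parameter update is straightforward to implement; the sketch below (Python) transcribes Eqs (31)-(34), with the gradients of the real and imaginary parts assembled beforehand from the multiplier derivatives of Eq. (4).

```python
import numpy as np

def steepest_descent_step(rho, g_r, g_u, eps):
    """Eqs (31)-(34): parameter change for the fastest decrease of |rho|.

    g_r, g_u -- gradients of Re(rho) and Im(rho) with respect to the design
                parameters, assembled from the multiplier derivatives of Eq. (4)
    eps      -- small step-size parameter epsilon
    """
    g = (rho.real * g_r + rho.imag * g_u) / abs(rho)   # Eq. (31)
    e = -g / np.linalg.norm(g)                         # Eqs (32)-(33)
    return eps * e                                     # Eq. (34): delta p
```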

4.2 Change in multiplier value – stabilisation procedure

One can calculate the first-order change in the value of a multiplier using the formula
(35)
$$\Delta_I\rho = \mathbf{g}_r^{\mathrm{T}}\Delta\mathbf{p} + i\,\mathbf{g}_u^{\mathrm{T}}\Delta\mathbf{p}$$
Using Eq. (34), Eq. (35) can be presented in the form
(36)
$$\Delta_I\rho = \Delta_I\rho_r + i\,\Delta_I\rho_u = -\varepsilon\left(\mathbf{g}_r^{\mathrm{T}}\Delta\mathbf{g} + i\,\mathbf{g}_u^{\mathrm{T}}\Delta\mathbf{g}\right)$$
The change in the absolute value of the multiplier can be calculated based on Eq. (36):
(37)
$$\Delta_I\left|\rho\right| = \sqrt{\left(\Delta_I\rho_r\right)^2 + \left(\Delta_I\rho_u\right)^2} = \varepsilon\,\Delta_I\left|g\right|$$
where the designation
(38)
$$\Delta_I\left|g\right| = \sqrt{\left(\mathbf{g}_r^{\mathrm{T}}\Delta\mathbf{g}\right)^2 + \left(\mathbf{g}_u^{\mathrm{T}}\Delta\mathbf{g}\right)^2}$$
was adopted.

With the second derivative, one can calculate the second-order change of the multiplier value
(39)
$$\Delta_{II}\rho = \frac{1}{2}\left(\Delta\mathbf{p}^{\mathrm{T}}\mathbf{F}_r\,\Delta\mathbf{p} + i\,\Delta\mathbf{p}^{\mathrm{T}}\mathbf{F}_u\,\Delta\mathbf{p}\right)$$
Using Eq. (34) once again, Eq. (39) can be presented in the form
(40)
$$\Delta_{II}\rho = \Delta_{II}\rho_r + i\,\Delta_{II}\rho_u = \varepsilon^2\frac{1}{2}\left(\Delta\mathbf{g}^{\mathrm{T}}\mathbf{F}_r\,\Delta\mathbf{g} + i\,\Delta\mathbf{g}^{\mathrm{T}}\mathbf{F}_u\,\Delta\mathbf{g}\right)$$
and the second-order change of the multiplier modulus is
(41)
$$\Delta_{II}\left|\rho\right| = \sqrt{\left(\Delta_{II}\rho_r\right)^2 + \left(\Delta_{II}\rho_u\right)^2} = \varepsilon^2\,\Delta_{II}\left|g\right|$$
where the designation
(42)
$$\Delta_{II}\left|g\right| = \frac{1}{2}\sqrt{\left(\Delta\mathbf{g}^{\mathrm{T}}\mathbf{F}_r\,\Delta\mathbf{g}\right)^2 + \left(\Delta\mathbf{g}^{\mathrm{T}}\mathbf{F}_u\,\Delta\mathbf{g}\right)^2}$$
was adopted. In Eqs (39) and (42), the symbols F_r and F_u denote, respectively, the real and imaginary parts of the matrix of the second partial derivatives of the multiplier with respect to all design variables. One can obtain an approximate multiplier value after the procedure for reducing its modulus by adding the components described in Eqs (36) and (40), that is,
(43)
$$\rho = \rho_o + \Delta_I\rho + \Delta_{II}\rho = \rho_r + i\,\rho_u$$
and the value of the modulus is then expressed as
(44)
$$\left|\rho\right| = \sqrt{\rho_r^2 + \rho_u^2}$$
where ρ_r = ρ_or + Δ_I ρ_r + Δ_II ρ_r and ρ_u = ρ_ou + Δ_I ρ_u + Δ_II ρ_u are obtained based on Eqs (36) and (40). Eq. (43) may, on the one hand, be interpreted as an expansion of the function describing the multiplier into a Taylor series, taking into account the first three terms of the expansion, and on the other hand, as the first three terms of the formula obtained in the work [58], where the task was solved using the small parameter method.
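For completeness, the following sketch (Python) transcribes the first- plus second-order prediction of Eqs (35), (39), (43) and (44); in an actual stabilisation loop it would be evaluated after each step Δp of Eq. (34) to check whether the predicted modulus has dropped below one.

```python
import numpy as np

def predicted_multiplier(rho0, g_r, g_u, F_r, F_u, dp):
    """Eqs (35), (39), (43) and (44): first- plus second-order prediction of the
    multiplier (and its modulus) after a change dp of the design parameters.

    F_r, F_u -- real and imaginary parts of the matrix of second derivatives of
                the multiplier with respect to all design parameters (Eq. (13))
    """
    d1 = g_r @ dp + 1j * (g_u @ dp)                       # Eq. (35)
    d2 = 0.5 * (dp @ F_r @ dp + 1j * (dp @ F_u @ dp))     # Eq. (39)
    rho = rho0 + d1 + d2                                  # Eq. (43)
    return rho, abs(rho)                                  # Eq. (44)
```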

5 Examples

The method presented in this paper was verified using the same examples that were analysed in [95]. The first example is used to check the correctness of the formulated method, in particular, the derived complex formulas and the algorithms implementing the parametric system stabilisation process. This example is unique for a parametric system: an analytical solution exists for all mathematical operations associated with the computational algorithm presented in this work. This is a great advantage of this example, as it makes it possible to objectively verify the correctness of the theory and to determine the efficiency of the method. In addition, one can directly compare the results with those obtained in works [95] and [58]. Obtaining second-order sensitivity analysis results consistent with the first-order results obtained in paper [95] will confirm the correctness of the new, more complex formulas and algorithms.

In the case of implementing a second-order sensitivity analysis algorithm, reaching a solution can be realised with a smaller number of iterative steps (longer iteration step length). This is possible as the algorithm uses nonlinear extrapolation with three components of the expansion of the function describing the multiplier in Taylor series (see Eq. (43)), and not, as in the case of the first-order analysis, a linear extrapolation with two components. However, more complex algorithms must be implemented at each step of the iteration of the second-order sensitivity analysis algorithm.

Ultimately, the implementation time of both algorithms, based on the first-order sensitivity analysis used in the work [95] and the second-order sensitivity analysis used in this work, is an individual matter and one or the other of them may be more advantageous depending on the task.

Both methods, using the first-order and the second-order sensitivity analyses, were improved and generalised so that they allow the correct determination of the eigenderivatives of multipliers also with respect to those system parameters on which the parametric excitation period depends. Thus, in particular, it becomes possible to use the parametric excitation period as a design parameter.

The goal of the second example, a parametric resonant-vibration eliminator (absorber), was different. It is an attempt to show the possibility of practically applying the proposed method to tune an absorber in an unstable parametric system.

5.1 Example 1 – method validation

In the stabilisation process, the new formulas related to the second-order sensitivity analysis, that is, those described by Eqs (35-44), have been used. The comparison was carried out in parallel in three ways:

  • analytical calculations performed by humans,

  • analytical calculations performed using symbolic procedures of the Mathematica system and

  • calculations performed using numerical procedures of the Mathematica system.

The consistency of the results obtained by these three approaches irrefutably proves the correctness of the method described in this paper.

5.1.1 Analysis of the system sensitivity

A linear parametric system described by Eq. (1) is considered, in which the system matrix has the analytical form
(45)
$$\mathbf{A}(t) = \begin{bmatrix} a + i(a+b)\cos 2at & -ab + ib(a+b)\sin 2at \\ \dfrac{a}{b} + i\dfrac{a+b}{b}\sin 2at & a - i(a+b)\cos 2at \end{bmatrix}$$
The matrix A(t) is periodic with a constant period T = π/a, while a and b are system parameters. Consequently, the period T is a function of the design parameter, T = T(a). The fundamental matrix of solutions of this system, the monodromy matrix, the modal matrices and the first-order sensitivity analysis results have the form
(46)
$$\mathbf{X}(t) = \begin{bmatrix} \cos at\cdot e^{iat} & -b\sin at\cdot e^{-iat} \\ \dfrac{1}{b}\sin at\cdot e^{iat} & \cos at\cdot e^{-iat} \end{bmatrix}\begin{bmatrix} e^{(a+ib)t} & 0 \\ 0 & e^{(a-ib)t} \end{bmatrix}$$
(47)
$$\mathbf{D} = \mathbf{X}(T,a,b) = \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} = \begin{bmatrix} e^{(a+ib)T} & 0 \\ 0 & e^{(a-ib)T} \end{bmatrix} = \begin{bmatrix} e^{(a+ib)\frac{\pi}{a}} & 0 \\ 0 & e^{(a-ib)\frac{\pi}{a}} \end{bmatrix} = \{\boldsymbol{\rho}\}$$
(48)
$$\mathbf{R} = \mathbf{L} = \mathbf{I}$$
(49)
$$\mathbf{D}'_a = \begin{bmatrix} \rho'_{1a} & 0 \\ 0 & \rho'_{2a} \end{bmatrix} = \begin{bmatrix} -i\dfrac{b}{a}T\cdot\rho_1 & 0 \\ 0 & i\dfrac{b}{a}T\cdot\rho_2 \end{bmatrix}$$
(50)
$$\mathbf{D}'_b = \begin{bmatrix} \rho'_{1b} & 0 \\ 0 & \rho'_{2b} \end{bmatrix} = \begin{bmatrix} -iT\cdot\rho_1 & 0 \\ 0 & iT\cdot\rho_2 \end{bmatrix}$$
Further testing of the algorithm presented in this paper is carried out first for the sensitivity analysis with respect to the parameter b. This parameter does not affect the period of the parametric excitation. Therefore, the results obtained in accordance with Eq. (7) should be correct. The algorithm described by this formula was programmed in the Mathematica computer system, which allowed analytical calculations to be conducted using symbolic procedures. The results achieved were consistent with those obtained in work [97].
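These closed-form results are easy to cross-check numerically. The short sketch below (Python with NumPy, not part of the original study; a = 1 and b = 2 are arbitrary illustrative values) builds X(t) from Eq. (46), forms the monodromy matrix D = X(T) and prints the multiplier moduli, which should both equal e^π, in agreement with Eqs (47) and (52).

```python
import numpy as np

a, b = 1.0, 2.0                  # illustrative parameter values (not from the paper)
T = np.pi / a                    # parametric excitation period

def X(t):
    """Fundamental matrix of Example 1, Eq. (46)."""
    M1 = np.array([[np.cos(a*t) * np.exp(1j*a*t), -b * np.sin(a*t) * np.exp(-1j*a*t)],
                   [np.sin(a*t) / b * np.exp(1j*a*t), np.cos(a*t) * np.exp(-1j*a*t)]])
    M2 = np.diag([np.exp((a + 1j*b) * t), np.exp((a - 1j*b) * t)])
    return M1 @ M2

D = X(T)                                   # monodromy matrix, Eq. (47)
print(np.abs(np.linalg.eigvals(D)))        # both moduli equal e**pi, cf. Eq. (52)
```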

The calculations were repeated using a generalised algorithm described by Eq. (7). The obtained results are identical to those achieved by analytical calculation of the derivatives, that is, Eqs (49) and (50).

The subsequent calculations were carried out using the procedure of second-order sensitivity analysis (in accordance with Eq. (18)):
(51)
$$\mathbf{D}''_{ab} = \begin{bmatrix} \rho''_{1ab} & 0 \\ 0 & \rho''_{2ab} \end{bmatrix} = \begin{bmatrix} \dfrac{T}{a^2}\left(\pi b - ia\right)\rho_1 & 0 \\ 0 & \dfrac{T}{a^2}\left(\pi b + ia\right)\rho_2 \end{bmatrix}$$
The same results are obtained analytically and by solving the eigenproblem for the matrix $\mathbf{D}''_{ab}$, which is evident from the form of the matrix $\mathbf{D}''_{ab}$ in Eq. (51).

5.1.2 Analysis of the system stability

In work [95], an analysis of the stability of the homogeneous system corresponding to Eq. (1) with the system matrix A(t) described by Eq. (45) was carried out. Since the parametric excitation period in this example is T = π/a, the greatest absolute value of the multipliers is described by the equation [95]
(52)
$$\left|\rho_{1,2}\right| = e^{aT} = e^{\pi} = \mathrm{constant} > 1$$
This means the system is unstable, and it will remain unstable regardless of the change in the value of the parameters a or b.

5.1.3 Stabilisation procedure

Since the system is unstable and will remain unstable regardless of the change in the value of the parameters a or b, the stabilisation procedure does not make sense in this case. In this situation, another possibility of stabilising the system was considered. It was assumed that the design parameter vector contains not two but three parameters and that the period of parametric excitation additionally depends on a third parameter d, according to the formula T(a, d) = d/a, with the initial value d = π.

Because this example was intended to test all three variants and the associated algorithms, that is, manual analytical calculations, symbolic calculations using the symbolic operations of the Mathematica system [102] and numerical calculations in the Mathematica system [102], all three types of calculations were performed.

At the starting point of the stabilisation procedure, the design parameter vector has the form
(53)
$$\mathbf{p}_o = \begin{bmatrix} a & b & d \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} a & b & \pi \end{bmatrix}^{\mathrm{T}}$$
The second-order sensitivity analysis procedures presented by Eqs (32-39) were used. The final vector of design parameters takes the form
(54)
$$\mathbf{p}_1 = \begin{bmatrix} a & b & 0 \end{bmatrix}^{\mathrm{T}}$$
Such a result was obtained with all three methods described above, in particular by performing strict analytical calculations. The result of Eq. (54) is also identical to the one obtained in the first-order sensitivity analysis and means that, in this case, the limit of the stable area will be reached only if the period T = T(a, d) = 0/a = 0. Mathematically, this is the correct result, but from the point of view of physical interpretation, it means that the period of parametric excitation disappears. Stabilisation of the system is, therefore, possible, but requires complete elimination of the parametric vibrations, which de facto means a transition to a system with constant coefficients.

5.2 Example 2 – stabilisation of an unstable parametric system by a dynamic absorber

As in the previous paper [95], where first-order sensitivity analysis was used, in this paper, calculations related to the implementation of the second-order sensitivity analysis algorithms for stabilisation of the unstable parametric system were also performed. An example from the study [95] was used: the process of stabilising the unstable parametric system with the use of two types of vibration eliminators (absorbers), a classic one and a parametric one, was carried out, and the effectiveness of these two kinds of dynamic vibration absorbers was analysed. However, the stabilisation procedure itself excluded the use of vibration absorbers other than active ones. ‘Active’ is understood here as an eliminator whose parameters can be changed during the stabilisation and/or operation.

The aim of this example is not to seek an answer to the question: Is a dynamic vibration eliminator able to effectively stabilise an unstable parametric system? It was shown in paper [95] that the answer to this question is yes. Now, the answer is sought to the question: Is it better to use first-order or second-order sensitivity analysis?

5.2.1 Model of the system-absorber [95]

Based on the theory presented in the paper, a primary parametric system with mass M(t), with a dynamic vibration eliminator (absorber mass m) attached to it, is analysed. Thus, a damped parametric system with two dynamic degrees of freedom, q1 and q2, is created. The system is shown in Fig. 1.

Figure 1: Two-mass dynamic model: parametrically excited primary system M with attached eliminator m.

The additional upper mass m in Fig. 1 is the mass of the dynamic vibration eliminator, connected in parallel by an elastic spring k(t) and a damping bond c(t) with the primary mass M (lower mass) of the parametric system, which is attached to the foundation by an elastic spring K(t) and a damping bond C(t). The system is parametrically excited, that is, the elastic spring stiffness K(t) and the damping characteristic C(t) change periodically in time with the parametric excitation frequency νo. It is also assumed that, in the general case, the stiffness k(t) and the damping characteristic c(t) of the bonds connecting the mass of the vibration eliminator m with the primary mass M(t) can vary harmonically in time with the frequency of the parametric excitation.

The matrix equation of motion of the parametric homogeneous system shown in Fig. 1 can generally be written in the form
(55)
$$\mathbf{B}(t)\,\ddot{\mathbf{q}} + \mathbf{C}(t)\,\dot{\mathbf{q}} + \mathbf{K}(t)\,\mathbf{q} = \mathbf{0}$$
where B(t), C(t) and K(t) are the inertia, damping and stiffness matrices and q = [q1  q2]^T is the vector of generalised coordinates (Fig. 1).

In this example, without losing the generality of Eq. (55), it was assumed that the system characteristics of elements in Fig. 1 are described by the following values (papers [95] and [54]):

Primary parametric system (lower mass M):

the basic mass M = 3·10^4 kg,
the constant part of the stiffness of the elastic bond Ko = 4·10^8 N/m,
the characteristic of the damping bond C = 4.4·10^4 Ns/m and
the parametric excitation frequency νo = 233 rad/s.

Eliminator (upper mass m):

the eliminator mass (1/20 of the mass M) m = 1.5·10^3 kg,
the spring stiffness k = 1.77·10^7 N/m and
the characteristic of the damping bond c = 4.4·10^4 Ns/m.

Symbolic and numerical calculations are performed using the computer system ‘Mathematica’ [102].

5.2.2 The classic vibration absorber

The matrix coefficients of the equation of motion, Eq. (55), can then be written as
(56)
$$\mathbf{B}(t) = \begin{bmatrix} m & 0 \\ 0 & M \end{bmatrix}$$
(57)
$$\mathbf{C}(t) = \begin{bmatrix} c_o & -c_o \\ -c_o & c_o + C \end{bmatrix}$$
(58)
$$\mathbf{K}(t) = \begin{bmatrix} k & -k \\ -k & k + K_o \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & K_1 \end{bmatrix}\cos\left(\nu_o t\right)$$
where
(59)
$$K(t) = K_o + K_1\cos\left(\nu_o t\right), \qquad (K_o > K_1 = 0.5\,K_o)$$
To use the theory presented in this paper, the equation of motion, Eq. (55), is reduced to a system of first-order homogeneous differential equations corresponding to the first equation in Eq. (1).

According to the theory presented in the paper, the system matrix A(t) in the second equation in Eq. (1) then has the form

(60) \[ \mathbf{A}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\dfrac{k}{m} & \dfrac{k}{m} & -\dfrac{c_o}{m} & \dfrac{c_o}{m} \\ \dfrac{k}{M} & -\dfrac{(K_o + K_1\cos\nu_o t) + k}{M} & \dfrac{c_o}{M} & -\dfrac{C + c_o}{M} \end{bmatrix} \]

Based on the stability analysis, it was found that the initial system is unstable. This is evident from the values of the multipliers on the main diagonal of the Jordan form of the monodromy matrix: the modulus of the first multiplier is larger than one.

(61) \[ \mathbf{J} = \begin{bmatrix} -1.1938 & 0 & 0 & 0 \\ 0 & -0.66404 + 0.22579\,i & 0 & 0 \\ 0 & 0 & -0.66404 - 0.25579\,i & 0 \\ 0 & 0 & 0 & -0.69295 \end{bmatrix} \]

To stabilise the system, the method of second-order sensitivity analysis presented in this paper was used. The design parameter vector is assumed in the form

(62) \( \mathbf{p} = \begin{bmatrix} k & c_o \end{bmatrix}^T \)

with the starting values

(63) \( \mathbf{p}_0 = \begin{bmatrix} 1.77 \cdot 10^7 & 44000 \end{bmatrix}^T \)

The system was stabilised at

(64) \( \mathbf{p} = \begin{bmatrix} 1.77 \cdot 10^7 & 22213 \end{bmatrix}^T \)

The stiffness value hardly changed, while the eliminator damping was reduced to roughly half of its initial value. Such a tuned eliminator guarantees the stability of the entire system, as evidenced by the values of the multipliers

(65) \[ \mathbf{J} = \begin{bmatrix} -0.9917 - 0.02384\,i & 0 & 0 & 0 \\ 0 & -0.9917 - 0.02384\,i & 0 & 0 \\ 0 & 0 & -0.7598 + 0.2601\,i & 0 \\ 0 & 0 & 0 & -0.7598 - 0.2601\,i \end{bmatrix} \]

The largest multiplier modulus is 0.992, which is less than one, so the system is stable. The obtained results are consistent with those obtained in paper [95].
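The stability verdicts above rest on the Floquet multipliers, i.e. the eigenvalues of the monodromy matrix obtained by integrating the state equation over one parametric period. A minimal numerical check of that criterion, continuing the previous sketch (it assumes the function A(t) and the frequency nu_o defined there, and uses a standard ODE integrator instead of the symbolic machinery of the paper), could look as follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(A, T, n=4):
    """Integrate X' = A(t) X over one period T with X(0) = I (column-wise)."""
    def rhs(t, x):
        return (A(t) @ x.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

T = 2.0 * np.pi / nu_o                      # parametric excitation period
Phi = monodromy(A, T)                       # monodromy matrix
multipliers = np.linalg.eigvals(Phi)        # Floquet multipliers
print("multipliers:", multipliers)
print("stable:", np.max(np.abs(multipliers)) < 1.0)   # Floquet stability criterion
```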

5.2.3
The parametric vibration absorber

As in work [95], the same unstable parametric system, which was stabilised by a classical vibration eliminator, was subjected to the stabilisation procedure described in this work, but this time, the characteristic of the damping bond of the vibration absorber was modified.

It was assumed that

(66) \[ c(t) = c_o + c_1 \cos(\nu_o t), \qquad (c_o > c_1 = 0.5\,c_o) \]

Because the characteristics of this bond change periodically in time with the parametric excitation frequency, the absorber has become a parametric vibration eliminator. The damping matrix in the equation of motion, Eq. (55), can now be written as

(67) \[ \mathbf{C}(t) = \begin{bmatrix} c_o & -c_o \\ -c_o & c_o + C \end{bmatrix} + \begin{bmatrix} c_1 & -c_1 \\ -c_1 & c_1 \end{bmatrix} \cos(\nu_o t) \]

The system matrix A(t) of Eq. (60) must be modified accordingly and now has the form

(68) \[ \mathbf{A}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\dfrac{k}{m} & \dfrac{k}{m} & -\dfrac{c_o + c_1\cos\nu_o t}{m} & \dfrac{c_o + c_1\cos\nu_o t}{m} \\ \dfrac{k}{M} & -\dfrac{(K_o + K_1\cos\nu_o t) + k}{M} & \dfrac{c_o + c_1\cos\nu_o t}{M} & -\dfrac{(c_o + c_1\cos\nu_o t) + C}{M} \end{bmatrix} \]

The initial values of the multipliers have hardly changed compared with those of Eq. (61):

(69) \[ \mathbf{J} = \begin{bmatrix} -1.1912 & 0 & 0 & 0 \\ 0 & -0.66352 + 0.24280\,i & 0 & 0 \\ 0 & 0 & -0.66351 - 0.24280\,i & 0 \\ 0 & 0 & 0 & -0.70447 \end{bmatrix} \]

After the stabilisation procedure was applied, the largest multiplier modulus could be reduced only to 1.00246, which is still greater than 1: the limit of the stability region was practically reached, but it was not exceeded, and the system remained unstable.
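For the parametric-absorber variant, only the damping contribution of the state matrix changes. A small modification of the earlier sketch (again illustrative, reusing B, Kmat, co, C_ and nu_o from the first listing) replaces the constant damping c_o with c(t) of Eq. (66) and rebuilds A(t) according to Eqs. (67) and (68).

```python
import numpy as np

c1 = 0.5 * co                     # parametric part of the absorber damping, Eq. (66)

def Cmat_param(t):
    """Time-varying damping matrix, Eq. (67)."""
    base = np.array([[co, -co], [-co, co + C_]])
    para = np.array([[c1, -c1], [-c1, c1]])
    return base + para * np.cos(nu_o * t)

def A_param(t):
    """State matrix of Eq. (68) with parametric damping."""
    Binv = np.linalg.inv(B)
    return np.block([[np.zeros((2, 2)), np.eye(2)],
                     [-Binv @ Kmat(t), -Binv @ Cmat_param(t)]])

# Reusing monodromy() from the previous sketch:
# Phi = monodromy(A_param, 2.0 * np.pi / nu_o)
# print(np.max(np.abs(np.linalg.eigvals(Phi))))   # > 1 for c1 = 0.5*co (unstable)
```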

As in work [95], the automatic stabilisation test was repeated assuming a smaller parametric part of the eliminator damping characteristic, that is, c1 = 0.3co instead of c1 = 0.5co. This time, the stabilisation process was successful and, as in the case of the classic vibration eliminator, a stable system was obtained, with a maximum multiplier modulus of 0.99737. The vector of design parameters also did not differ much from that obtained for the classic vibration eliminator:

(70) \( \mathbf{p} = \begin{bmatrix} 1.77002 \cdot 10^7 & 23358 \end{bmatrix}^T \)

Once again, as in work [95], one more variant of the calculations was carried out. It was assumed that all parameters of the vibration eliminator and of the parametric system are design parameters, that is, the vector of design parameters consists of seven coordinates

(71) \( \mathbf{p} = \begin{bmatrix} k & K_o & K_1 & c_o & C & c_1 & T \end{bmatrix}^T \)

with the initial values

(72) \( \mathbf{p}_o = \begin{bmatrix} 1.77 \cdot 10^7 & 4 \cdot 10^8 & 2 \cdot 10^8 & 44000 & 44000 & 22000 & 0.026995 \end{bmatrix}^T \)

The last parameter is the parametric excitation period of the system, T = 2π/νo = 0.0269665 s. To perform this variant of the calculations, again as in work [95], it was necessary to use the generalised formulas for the derivatives of the monodromy matrix, that is, Eq. (18).
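The analytical derivatives of the monodromy matrix of Eq. (18) are not reproduced here, but their values can be cross-checked numerically. The sketch below is a finite-difference stand-in, not the author's method: it assumes the monodromy() helper defined earlier and a hypothetical function A_of(p, t) that builds the state matrix from a design-parameter vector p whose last entry is the period T, and it estimates the gradient of the largest multiplier modulus with respect to p.

```python
import numpy as np

def largest_modulus(p, A_of, monodromy):
    """|multiplier|_max for a design-parameter vector p; p[-1] is the period T."""
    T = p[-1]
    Phi = monodromy(lambda t: A_of(p, t), T)
    return np.max(np.abs(np.linalg.eigvals(Phi)))

def fd_gradient(p, A_of, monodromy, rel_step=1e-6):
    """Central finite-difference gradient of |multiplier|_max w.r.t. p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        h = rel_step * max(abs(p[i]), 1.0)
        pp, pm = p.copy(), p.copy()
        pp[i] += h
        pm[i] -= h
        g[i] = (largest_modulus(pp, A_of, monodromy)
                - largest_modulus(pm, A_of, monodromy)) / (2.0 * h)
    return g
```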

The design parameter vector after stabilisation of the system is

(73) \( \mathbf{p} = \begin{bmatrix} 1.77 \cdot 10^7 & 4 \cdot 10^8 & 2 \cdot 10^8 & 44000 & 44000 & 22000 & 0.024144 \end{bmatrix}^T \)

The Jordan form of the monodromy matrix after stabilisation confirms the stability of the system, since

(74) \[ \mathbf{J} = \begin{bmatrix} -0.0017160 & 0 & 0 & 0 \\ 0 & -0.65648 + 0.49881\,i & 0 & 0 \\ 0 & 0 & -0.65648 - 0.49881\,i & 0 \\ 0 & 0 & 0 & -0.70099 \end{bmatrix} \]

The largest multiplier modulus is 0.99172, which is less than 1 and indicates the stability of the system; it could be reduced even further.

The values of the vector p differ from those of the vector po practically only in the last position, corresponding to the parametric excitation period. Compared with its initial value, the parametric excitation period decreased by only about 10%. Thus, in this case, a small change in this parameter (i.e. in the parametric excitation frequency) guarantees the stability of the system. The stabilisation process was carried out instantly, which is important if the stabilisation procedure is to be applied in practice, where it must take place in real time.

It must, however, be noted that this is not a parameter of the eliminator only, but, at the same time, a parameter of the system. Changing the parameter of the system may prove difficult or even impossible in practice. It can also involve high costs of modifying the system.

As expected, the calculations performed in accordance with the formulas of the second-order sensitivity analysis gave results fully consistent with those obtained in the paper [95], where the first-order sensitivity analysis was used.

6
Conclusions

The paper is a continuation of paper [95], where an application of the first-order sensitivity analysis method to the stabilisation of unstable multi-degree-of-freedom parametric periodic systems was presented. This paper presents an application of second-order sensitivity analysis to the stabilisation of unstable, continuous-in-time parametric periodic systems. The method allows the derivatives of the multipliers of the monodromy matrix to be calculated. It is, therefore, an alternative to the approach proposed in [56] and [57] and is similar to the one presented in paper [58]. The sensitivity equation with respect to the design parameter is found first; its solutions then allow the eigenderivatives of the multipliers to be found. The method was tested on an example and gave correct results, fully consistent with those obtained in work [95] for the first-order sensitivity analysis. The method also gives correct results when the parametric period depends on the design parameter or is itself the design parameter for which the sensitivity is analysed. For this reason, the method is much more general than those presented in papers [56] and [58].

The aim of this work was not to seek an answer to the question: Is it possible to stabilise an unstable parametric system using the sensitivity analysis method? It was shown in paper [95] that the answer to this question is yes. Now, the answer is sought to the question: Is it better to use first-order or second-order sensitivity analysis?

In assessing the two variants of the methodology, it was expected that the final results of the calculations using first-order sensitivity analysis, presented in paper [95], and second-order sensitivity analysis, presented in this paper, should be identical. It was also expected that the only difference might lie in the speed of obtaining the results, which is especially important when the system stabilisation process has to take place in real time. The time needed to obtain the results is, however, influenced by two opposing tendencies. In the first-order sensitivity analysis, the algorithms are simpler than in the second-order analysis, but the poorer prediction (linear extrapolation) requires a larger number of iterative steps. Conversely, in the second-order sensitivity analysis, the algorithms are more complicated, but thanks to the nonlinear extrapolation of Eq. (40), the iterative steps can be longer and fewer of them are needed. Which approach is better depends on the specific case, the available computing hardware, and so on. Thus, it is impossible to determine a priori which method is better. The results of this paper make it possible to test both methods in any given case and to choose the one that fits a given application best.
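The trade-off described above can be illustrated with a deliberately simplified, one-dimensional caricature: if f(p) denotes the largest multiplier modulus as a function of a single design parameter, a first-order step extrapolates linearly towards a target value such as f = 0.99, while a second-order step also uses the curvature and can therefore take a longer step. The functions below are schematic only; they are not the extrapolation formulas of Eq. (40) or the multi-parameter algorithm of the paper.

```python
import numpy as np

def first_order_step(p, f, df, target):
    """Linear extrapolation: solve f + df*dp = target for the correction dp."""
    return p + (target - f) / df

def second_order_step(p, f, df, d2f, target):
    """Quadratic extrapolation: solve f + df*dp + 0.5*d2f*dp**2 = target."""
    a, b, c = 0.5 * d2f, df, f - target
    disc = b * b - 4.0 * a * c
    if abs(a) < 1e-30 or disc < 0.0:          # degenerate case: fall back to linear step
        return first_order_step(p, f, df, target)
    roots = np.array([(-b + np.sqrt(disc)) / (2.0 * a),
                      (-b - np.sqrt(disc)) / (2.0 * a)])
    dp = roots[np.argmin(np.abs(roots))]      # take the smallest admissible correction
    return p + dp
```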

As in work [95], the method was modified in such a way that in particular, it becomes possible to use the parametric excitation period also as a design parameter.

Two examples were analysed in the paper. The first example – method validation (the same example as that analysed in [95, 57]) – is unique among parametric systems, since analytical solutions exist for all mathematical operations associated with the presented algorithm. This is a great advantage of this example: it makes it possible to objectively verify the correctness of the theory and to determine the efficiency of the method. In addition, the results can be compared directly with those obtained in work [58].

The comparison was realised simultaneously in three ways:

  • analytical calculations performed by humans,

  • analytical calculations performed using symbolic procedures of the Mathematica system and

  • calculations performed using numerical procedures of the Mathematica system.

The consistency of the results obtained in these three ways confirms the correctness of the method described in the paper.

The goal of the second example, the parametric resonant vibration eliminator (absorber), was different. It is an attempt to show the possibility of practically applying two approaches to tuning a vibration eliminator (absorber) so as to stabilise an unstable parametric system: the first-order sensitivity analysis presented in paper [95] and the second-order sensitivity analysis presented in this paper.

The method presented in the paper consists in reversing the typical order of calculation. Instead of solving the differential equation describing the parametric vibrations and then calculating its derivatives, the calculations are performed in reverse order: first, the derivative of the equation of motion is calculated to obtain the sensitivity equation, and then this sensitivity equation is solved. The theoretically demonstrated correctness of this approach is also confirmed by the fact that the results obtained in these two ways are consistent in the examples presented in the paper. The most important benefit of this approach is that it removes the need for the approximate small-parameter method, which has de facto been used in other works, for example [56], [57] and [58]. In the proposed approach, the iteration step can therefore be significantly longer and vibration stabilisation can be achieved faster. To summarise, the most important benefit of the method presented in this paper and in the previous one is the ability to stabilise an unstable system in a single iteration step.

These papers present the author's achievements on this subject, which has been the author's area of expertise. This is not the end of the publication cycle on this topic: the author plans further papers in this field, concerning the sensitivity analysis of discontinuous parametric periodic systems, which will be submitted for publication soon.

© 2025 Zbigniew Wójcicki, published by Wroclaw University of Science and Technology
This work is licensed under the Creative Commons Attribution 4.0 License.