
Central Limit Theorem for Random Dynamical System with Jumps and State-Dependent Jump Intensity


1. Introduction

Piecewise-deterministic Markov processes (PDMPs) form a class of stochastic models that have recently received extensive research attention. They were introduced by Davis [7] as a general framework covering a variety of practical systems in which randomness enters only through jumps. Between jumps these processes evolve along deterministic semiflows; at jump times both the state and the semiflow governing the subsequent evolution may change. PDMPs have proven to be effective mathematical models for many phenomena, such as gene expression [14] and population dynamics [1]. Ergodicity and asymptotic stability of such systems have already been studied quite extensively [1, 2, 3, 8, 9, 5, 16].

Once the asymptotic stability of such processes is established, natural questions arise concerning limit theorems. In [11], we focused on proving a law of the iterated logarithm for certain PDMPs in which the intensity of jumps depends on the state of the system. In this paper we prove the central limit theorem (CLT) for this class of processes.

In [4], a criterion for the CLT was introduced, together with an example of a model with a constant jump intensity. The difference between our model and the one considered in [4] lies in the assumption made there that the waiting times between jumps are exponentially distributed with a constant intensity parameter. This assumption seems too restrictive for biological models, such as gene expression, in which the dependence of the jump intensity on the state of the system is clearly visible. It turns out that the CLT criterion from [4] is also applicable in the case considered here.

The remainder of the paper consists of three sections. In Section 2, basic definitions and notation are introduced. In Section 3, we start with an intuitive description of the model under consideration and then give precise definitions. We also recall the conditions that were used in [6] to establish exponential ergodicity. The last section contains the proof of the CLT for the model described earlier; it is strongly influenced by the methods used in [4].

2. Preliminaries

Let ℝ be the set of all real numbers and let ℝ+ = [0, ∞). Let (E, ρ) be a Polish space with the σ-field ℬ(E) of all Borel subsets of E. By B(E) we denote the space of all bounded, Borel measurable functions f : E → ℝ, equipped with the supremum norm, and we distinguish two of its subsets: C(E) and Lip(E), consisting of all continuous and all Lipschitz-continuous functions, respectively. A continuous function V : E → ℝ+ is called a Lyapunov function if it is bounded on bounded sets and, for some x0 ∈ E,

$$\lim_{\rho(x, x_0) \to \infty} V(x) = \infty.$$

Let ℳs(E) be the set of all finite, countably additive set functions on ℬ(E). By ℳ+(E) and ℳ1(E) we denote the subsets of ℳs(E) consisting of all non-negative measures and all probability measures, respectively. We write $\mathcal{M}_1^V(E)$ for the set of all μ ∈ ℳ1(E) satisfying ∫E V(x) μ(dx) < ∞.

The set ℳs(E) will be considered with the Fortet–Mourier norm ([12, 13]), given by

$$\|\mu\|_{FM} = \sup\{ |\langle f, \mu \rangle| : f \in \mathcal{F}_{FM}(E) \} \quad \text{for } \mu \in \mathcal{M}_s(E),$$

where

$$\mathcal{F}_{FM}(E) = \{ f \in C(E) : |f(x)| \le 1,\ |f(x) - f(y)| \le \rho(x, y),\ x, y \in E \}, \qquad \langle f, \mu \rangle = \int_E f(x)\, \mu(dx).$$
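Since ℱFM(E) consists of functions bounded by 1 and 1-Lipschitz, for finitely supported measures on the real line the supremum above becomes a finite-dimensional linear program. The following sketch (an illustration added here, not part of the original paper; it assumes NumPy and SciPy are available) computes ‖μ − ν‖FM for two discrete measures:

```python
import numpy as np
from scipy.optimize import linprog

def fm_distance(x, mu, nu):
    """Fortet-Mourier distance between discrete measures mu, nu on points x in R:
    maximize <f, mu - nu> over f with |f_i| <= 1 and |f_i - f_j| <= |x_i - x_j|."""
    m, d = len(x), mu - nu
    rows, rhs = [], []
    for i in range(m):              # pairwise Lipschitz constraints (both orders)
        for j in range(m):
            if i != j:
                row = np.zeros(m)
                row[i], row[j] = 1.0, -1.0
                rows.append(row)
                rhs.append(abs(x[i] - x[j]))
    res = linprog(c=-d, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0)] * m, method="highs")
    return -res.fun                 # sup of <f, mu - nu> over F_FM

x = np.array([0.0, 1.0, 3.0])
print(fm_distance(x, np.array([0.5, 0.5, 0.0]), np.array([0.25, 0.25, 0.5])))
```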

As usual, by B(x, r) we denote the open ball in E centered at x with radius r > 0. For a fixed set A ⊂ E we define the indicator function 𝟙A : E → {0, 1} by 𝟙A(x) = 1 for x ∈ A and 𝟙A(x) = 0 otherwise.

A function 𝒦: E × ℬ(E) → [0, 1] is called a (sub)stochastic kernel if for each A ∈ ℬ(E), x ↦ 𝒦(x, A) is a measurable map on E, and for each xE, A ↦ 𝒦(x, A) is a (sub)probability Borel measure on ℬ(E).

An operator P : ℳ+(E) → ℳ+(E) is called a Markov operator if:

  • P(α1μ1 + α2μ2) = α1Pμ1 + α2Pμ2 for α1, α2 ∈ ℝ+, μ1, μ2 ∈ ℳ+(E),

  • Pμ(E) = μ(E) for μ ∈ ℳ+(E).

A Markov operator P is called regular if there is a linear operator U : B(E) → B(E), called the dual operator of P, such that

(1) $$\langle f, P\mu \rangle = \langle Uf, \mu \rangle \quad \text{for all } f \in B(E),\ \mu \in \mathcal{M}_+(E).$$

If P is a regular Markov operator, then the function 𝒦 : E × ℬ(E) → [0, 1] given by 𝒦(x, A) = Pδx(A) for x ∈ E, A ∈ ℬ(E), is a stochastic kernel. On the other hand, an arbitrarily given stochastic kernel 𝒦 defines a regular Markov operator P and its dual operator U by the formulas

$$P\mu(A) = \int_E \mathcal{K}(x, A)\, \mu(dx) \quad \text{for } \mu \in \mathcal{M}_+(E),\ A \in \mathcal{B}(E)$$

and

$$Uf(x) = \int_E f(y)\, \mathcal{K}(x, dy) \quad \text{for } x \in E,\ f \in B(E).$$
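The correspondence between a kernel and the pair (P, U) is easiest to see on a finite state space, where measures are row vectors and functions are column vectors. A minimal sketch (the kernel is illustrative, not from the paper; it assumes NumPy) verifying the duality (1):

```python
import numpy as np

# Illustrative stochastic kernel on E = {0, 1, 2}; row x is the measure K(x, .).
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def P(mu):
    # Markov operator: P mu (A) = sum_x K(x, A) mu({x})
    return mu @ K

def U(f):
    # dual operator: U f (x) = sum_y f(y) K(x, {y})
    return K @ f

mu = np.array([0.2, 0.5, 0.3])    # a probability measure on E
f = np.array([1.0, -2.0, 0.5])    # a bounded function on E

# duality (1): <f, P mu> = <U f, mu>
assert np.isclose(P(mu) @ f, mu @ U(f))
```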

Let us note that the operator U can be extended to a linear operator defined on the space B̄(E) of all Borel functions bounded from below, so that (1) holds for all f ∈ B̄(E).

A regular Markov operator P is called Feller if Uf ∈ C(E) for every f ∈ C(E).

We say that μ ∈ ℳ+(E) is invariant with respect to P if Pμ = μ.

If there exist an invariant measure μ* ∈ ℳ1(E) and a constant β ∈ [0, 1) such that, for every μ ∈ $\mathcal{M}_1^V(E)$ and some constant C(μ) ∈ ℝ, we have

$$\|P^n\mu - \mu_*\|_{FM} \le C(\mu)\,\beta^n \quad \text{for all } n \in \mathbb{N},$$

then μ* is called exponentially attracting. If an operator P has such an exponentially attracting invariant probability measure, then P is said to be exponentially ergodic.

It is well known that for every stochastic kernel 𝒦 and any fixed measure μ ∈ ℳ1(E), we can always define, on a suitable probability space, say (Ω, ℱ, Probμ), a discrete-time homogeneous Markov chain {Xn}n∈ℕ0 for which

(2) $$\mathrm{Prob}_\mu(X_0 \in A) = \mu(A) \quad \text{for } A \in \mathcal{B}(E), \qquad \mathcal{K}(x, A) = \mathrm{Prob}_\mu(X_{n+1} \in A \mid X_n = x) \quad \text{for } x \in E,\ A \in \mathcal{B}(E),\ n \in \mathbb{N}_0.$$

Then the Markov operator P corresponding to the kernel in (2) describes the evolution of the distributions μn(·) := Probμ(Xn ∈ ·), that is,

$$\mu_{n+1} = P\mu_n \quad \text{for } n \in \mathbb{N}_0.$$

In our further considerations, we will use the symbol 𝔼μ for the expectation with respect to Probμ. If μ = δx for some x ∈ E, we will write 𝔼x.

We say that a time-homogeneous Markov chain evolving on the space E² (endowed with the product topology) is a Markovian coupling of a stochastic kernel 𝒦 : E × ℬ(E) → [0, 1] whenever its stochastic kernel B : E² × ℬ(E²) → [0, 1] satisfies

$$B(x, y, A \times E) = \mathcal{K}(x, A) \quad \text{and} \quad B(x, y, E \times A) = \mathcal{K}(y, A)$$

for all x, y ∈ E and A ∈ ℬ(E). Let us stress that if Q : E² × ℬ(E²) → [0, 1] is a substochastic kernel satisfying

$$Q(x, y, A \times E) \le \mathcal{K}(x, A) \quad \text{and} \quad Q(x, y, E \times A) \le \mathcal{K}(y, A)$$

for all x, y ∈ E and A ∈ ℬ(E), then we can always construct a Markovian coupling of 𝒦 whose transition function B satisfies Q ≤ B. For this purpose, it suffices to define the family {R(x, y, ·) : x, y ∈ E} of measures on ℬ(E²), which on A × B ∈ ℬ(E²) are given by

$$R(x, y, A \times B) = \frac{\big(\mathcal{K}(x, A) - Q(x, y, A \times E)\big)\big(\mathcal{K}(y, B) - Q(x, y, E \times B)\big)}{1 - Q(x, y, E^2)}$$

if Q(x, y, E²) < 1, and R(x, y, A × B) = 0 otherwise. Then B := Q + R is a stochastic kernel satisfying Q ≤ B, and the Markov chain with transition function B is a Markovian coupling of 𝒦.
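On a finite state space this construction can be carried out, and its marginal conditions checked, in a few lines. The sketch below (the kernels K and Q are illustrative, not taken from the paper; it assumes NumPy) builds B = Q + R exactly as above:

```python
import numpy as np

# Illustrative finite setting: E = {0, 1}, stochastic kernel K, and a substochastic
# Q on E^2 dominated by both marginals of K, as in the displayed inequalities.
K = np.array([[0.7, 0.3],
              [0.2, 0.8]])

Q = np.zeros((2, 2, 2, 2))                 # Q[x, y, u, v]
for x in range(2):
    for y in range(2):
        for u in range(2):
            # move both coordinates together with half of the common mass
            Q[x, y, u, u] = 0.5 * min(K[x, u], K[y, u])

def coupling_B(K, Q):
    """Build the coupling kernel B = Q + R following the construction in the text."""
    n = K.shape[0]
    B = np.zeros_like(Q)
    for x in range(n):
        for y in range(n):
            q = Q[x, y]
            R = np.zeros((n, n))
            if q.sum() < 1.0:
                rx = K[x] - q.sum(axis=1)  # K(x, .) - Q(x, y, . x E)
                ry = K[y] - q.sum(axis=0)  # K(y, .) - Q(x, y, E x .)
                R = np.outer(rx, ry) / (1.0 - q.sum())
            B[x, y] = q + R
    return B

B = coupling_B(K, Q)
for x in range(2):        # verify the marginal conditions of a Markovian coupling
    for y in range(2):
        assert np.allclose(B[x, y].sum(axis=1), K[x])
        assert np.allclose(B[x, y].sum(axis=0), K[y])
```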

Let {Xn}n∈ℕ0 be a Markov chain evolving in E with transition kernel 𝒦 and initial distribution μ. Suppose that μ* ∈ ℳ1(E) is the unique invariant measure of the Markov operator P corresponding to the kernel via (2). For any n ∈ ℕ and any Borel function g : E → ℝ define

$$s_n(g) = \frac{g(X_1) + \ldots + g(X_n)}{\sqrt{n}}, \qquad \sigma^2(g) = \lim_{n \to \infty} \mathbb{E}_{\mu_*}\!\left(s_n^2(g)\right).$$

Denote by Φsn(g) the distribution of sn(g) and put g̃ = g − ⟨g, μ*⟩.

Let g : E → ℝ be a Borel function such that ⟨g², μ*⟩ < ∞. By definition, {g(Xn)}n∈ℕ0 satisfies the CLT if σ²(g̃) < ∞ and Φsn(g̃) converges weakly to 𝒩(0, σ²(g̃)) as n → ∞.
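For a toy chain, σ²(g̃) can be approximated directly from its definition as the limit of 𝔼μ*(sn²(g̃)). A Monte Carlo sketch (a two-state illustration unrelated to the model studied below; it assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state chain with kernel K and observable g.
K = np.array([[0.7, 0.3],
              [0.2, 0.8]])
g = np.array([1.0, -1.0])

mu_star = np.array([0.4, 0.6])      # invariant measure: mu_star @ K == mu_star
g_tilde = g - g @ mu_star           # centered observable g~ = g - <g, mu_*>

def s_n(n):
    """One realization of s_n(g~) for the chain started from mu_*."""
    x = rng.choice(2, p=mu_star)
    total = 0.0
    for _ in range(n):
        x = rng.choice(2, p=K[x])
        total += g_tilde[x]
    return total / np.sqrt(n)

# sigma^2(g~) = lim_n E_{mu_*}[s_n(g~)^2], approximated over 300 realizations.
samples = np.array([s_n(1000) for _ in range(300)])
print("estimated sigma^2(g~):", np.mean(samples**2))
```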

In proving the CLT we will use the following theorem, established in [4].

Theorem 1

Let {Xn}n∈ℕ0 be a time-homogeneous Markov chain with values in E and let 𝒦 : E × ℬ(E) → [0, 1] be the transition law of {Xn}n∈ℕ0, satisfying the following conditions:

  • (B0)

    The Markov operator P corresponding to 𝒦 has the Feller property.

  • (B1)

    There exist a Lyapunov function V : E → ℝ+ and constants a ∈ (0, 1) and b ∈ (0, ∞) such that

    $$UV^2(x) \le (aV(x) + b)^2 \quad \text{for every } x \in E.$$

    In addition, assume that there is a substochastic kernel Q : E² × ℬ(E²) → [0, 1] satisfying

    $$Q(x, y, A \times E) \le \mathcal{K}(x, A) \quad \text{and} \quad Q(x, y, E \times A) \le \mathcal{K}(y, A)$$

    for x, y ∈ E, A ∈ ℬ(E).

  • (B2)

    There exist F ⊂ E² and q ∈ (0, 1) such that

    $$\mathrm{supp}\, Q(x, y, \cdot) \subset F \quad \text{and} \quad \int_{E^2} \rho(u, v)\, Q(x, y, du \times dv) \le q\,\rho(x, y)$$

    for (x, y) ∈ F.

  • (B3)

    For G(r) = {(x, y) ∈ F : ρ(x, y) ≤ r}, r > 0, we have

    $$\inf_{(x, y) \in F} Q\big(x, y, G(q\rho(x, y))\big) > 0.$$

  • (B4)

    There exist constants ν ∈ (0, 1] and l > 0 such that

    $$Q(x, y, E^2) \ge 1 - l\,\rho(x, y)^\nu \quad \text{for every } (x, y) \in F.$$

  • (B5)

    There is a coupling $\{(X_n^{(1)}, X_n^{(2)})\}_{n \in \mathbb{N}_0}$ of 𝒦 with transition law B ≥ Q such that for some R > 0 and

    $$K := \{(x, y) \in F : V(x) + V(y) < R\}$$

    we can find ζ ∈ (0, 1) and C̄ > 0 satisfying

    $$\mathbb{E}_{(x, y)}\!\left(\zeta^{-\sigma_K}\right) \le \bar{C} \quad \text{whenever } V(x) + V(y) < 4b(1 - a)^{-1}, \quad \text{where } \sigma_K = \inf\{n \in \mathbb{N} : (X_n^{(1)}, X_n^{(2)}) \in K\}.$$

Let μ ∈ $\mathcal{M}_1^V(E)$ be an initial distribution of {Xn}n∈ℕ0. If g ∈ Lip(E), then {g(Xn)}n∈ℕ0 satisfies the CLT.

3. Model description and assumptions

Let (Y, ρ) be a Polish space and let I = {1, ..., N} for a fixed positive integer N. We define X := Y × I and consider the space (X, ρc), where

(3) $$\rho_c((y_1, i), (y_2, j)) = \rho(y_1, y_2) + c\,\Psi(i, j), \qquad (y_1, i), (y_2, j) \in X,$$

and

(4) $$\Psi(i, j) = \begin{cases} 1 & \text{for } i \ne j, \\ 0 & \text{for } i = j. \end{cases}$$

The constant c > 0 will be specified later. Let Θ be a compact interval.

Our considerations are conducted for a discrete-time dynamical system related to the stochastic process {(Y (t), ξ(t))}t∈ℝ+, which evolves through random jumps in the space X. We proceed to provide a description of this process.

Assume that we are given a finite collection of maps, called semiflows, Πi : ℝ+ × Y → Y, i ∈ I, which are continuous with respect to each variable and satisfy, for every i ∈ I and each y ∈ Y, the following conditions:

$$\Pi_i(0, y) = y \quad \text{and} \quad \Pi_i(s + t, y) = \Pi_i(s, \Pi_i(t, y)) \quad \text{for } s, t \in \mathbb{R}_+.$$

The process {Y(t)}t∈ℝ+ evolves between jumps according to one of the transformations Πi, whose index i is determined by {ξ(t)}t∈ℝ+.

Let πij : Y → [0, 1], i, j ∈ I, be a matrix of continuous functions such that

$$\sum_{j \in I} \pi_{ij}(y) = 1 \quad \text{for } i \in I,\ y \in Y.$$

Just after every jump, the semiflow is switched from Πi to Πj according to the probabilities πij, and at the moment of a jump the process {Y(t)}t∈ℝ+ is shifted to a new state by a map ω(θ, ·) : Y → Y, randomly chosen from a given family {ω(θ, ·) : θ ∈ Θ}. We assume that Y × Θ ∋ (y, θ) ↦ ω(θ, y) ∈ Y is continuous and that the probability of choosing ω(θ, ·) is determined by density functions Θ ∋ θ ↦ p(θ, y), y ∈ Y, such that (θ, y) ↦ p(θ, y) is continuous. Moreover, we assume that the intensity of jumps is given by a Lipschitz-continuous function λ : Y → (0, ∞) which satisfies

(5) $$\underline{\lambda} = \inf_{y \in Y} \lambda(y) > 0 \quad \text{and} \quad \overline{\lambda} = \sup_{y \in Y} \lambda(y) < \infty.$$

The evolution of {(Y(t), ξ(t))}t∈ℝ+ can be described as follows. Assume that the process starts at some point (y0, i0) ∈ Y × I. Then

$$Y(t) = \Pi_{i_0}(t, y_0) \quad \text{and} \quad \xi(t) = i_0, \qquad 0 \le t < t_1,$$

where t1 depends on y0 and i0. At time t1 the process {Y(t)}t∈ℝ+ jumps to the new position

$$y_1 := \omega(\theta_1, Y(t_1-)) = \omega(\theta_1, \Pi_{i_0}(t_1, y_0)),$$

where θ1 ∈ Θ is randomly chosen according to the distribution with density θ ↦ p(θ, Πi0(t1, y0)). In the next step, we randomly choose i1 ∈ I so that the probability of choosing i1 is πi0i1(y1). Then

$$Y(t) = \Pi_{i_1}(t - t_1, y_1) \quad \text{and} \quad \xi(t) = i_1, \qquad t_1 \le t < t_2.$$

We repeat the above steps with the point (y1, i1) instead of (y0, i0). The next steps are defined by induction.

Assuming that t0 = 0, we get

$$Y(t) = \Pi_{i_n}(t - t_n, y_n) \quad \text{and} \quad \xi(t) = i_n, \qquad t_n \le t < t_{n+1},\ n \in \mathbb{N}_0.$$

Let us emphasize that in this work we study only the sequence of random variables given by the post-jump locations of the process {(Y(t), ξ(t))}t∈ℝ+, namely the chain {(Yn, ξn)}n∈ℕ0, where Yn = Y(τn) and ξn = ξ(τn) for n ∈ ℕ0, and τn is a random variable describing the jump time tn.

We can now give the formal description of the model. Let us fix a probability space (Ω, ℱ, Probμ) and define {(Yn, ξn)}n∈ℕ0 as follows. Let (Y0, ξ0) : Ω → X be a random variable with an arbitrary, fixed distribution μ ∈ ℳ1(X). Further, let us define by induction the sequences of random variables {τn}n∈ℕ0, {ξn}n∈ℕ, {ηn}n∈ℕ and {Yn}n∈ℕ, describing the sequences {tn}n∈ℕ, {in}n∈ℕ, {θn}n∈ℕ and {yn}n∈ℕ, respectively, so that the following conditions are fulfilled (a simulation sketch of this construction is given after this list):

  • The sequence τn : Ω → [0, ∞), n ∈ ℕ0, where τ0 = 0, is strictly increasing with τn → ∞ a.e., the increments Δτn = τn − τn−1 are mutually independent, and their conditional distributions are given by

    $$\mathrm{Prob}_\mu(\Delta\tau_{n+1} \le t \mid Y_n = y \ \text{and} \ \xi_n = i) = 1 - e^{-L(t, y, i)} \quad \text{for } t \in \mathbb{R}_+,$$

    where y ∈ Y, i ∈ I and L is given by

    (6) $$L(t, y, i) = \int_0^t \lambda(\Pi_i(s, y))\, ds.$$

  • The sequence ξn : Ω → I, n ∈ ℕ, satisfies

    $$\mathrm{Prob}_\mu(\xi_n = j \mid Y_n = y,\ \xi_{n-1} = i) = \pi_{ij}(y) \quad \text{for } i, j \in I,\ y \in Y.$$

  • ηn : Ω → Θ, n ∈ ℕ, is defined so that

    $$\mathrm{Prob}_\mu(\eta_{n+1} \in D \mid \Pi_{\xi_n}(\Delta\tau_{n+1}, Y_n) = y) = \int_D p(\theta, y)\, d\theta$$

    for all D ∈ ℬ(Θ) and y ∈ Y.

  • Yn : Ω → Y, n ∈ ℕ, are given by

    $$Y_{n+1} = \omega(\eta_{n+1}, \Pi_{\xi_n}(\Delta\tau_{n+1}, Y_n)) \quad \text{for } n \in \mathbb{N}_0.$$

Furthermore, for

$$U_0 = (Y_0, \xi_0), \qquad U_k = (Y_0, \tau_1, \ldots, \tau_k, \eta_1, \ldots, \eta_k, \xi_0, \ldots, \xi_k) \quad \text{for } k \in \mathbb{N},$$

we assume that for every k ∈ ℕ0 the random variables ξk+1 and ηk+1 are conditionally independent of Uk given {Yk+1 = y, ξk = i} and {Πξk(Δτk+1, Yk) = y}, respectively. We also require that ξk+1, ηk+1 and Δτk+1 are mutually conditionally independent given Uk, and that Δτk+1 is independent of Uk.
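The construction above translates directly into a simulation of the post-jump chain. In the following sketch every concrete ingredient (Y = ℝ, two exponentially decaying semiflows, a uniform density p, constant switching probabilities πij and the particular λ) is an illustrative placeholder, not part of the paper; the waiting time with distribution 1 − e^{−L(t,y,i)} from (6) is sampled by thinning, using the bound λ̄ from (5):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative placeholders: Y = R, I = {1, 2}, Theta = [-1, 1].
a = {1: 1.0, 2: 0.2}                           # decay rates of the semiflows
Pi = lambda i, t, y: y * np.exp(-a[i] * t)     # semiflows Pi_i(t, y)
lam = lambda y: 1.0 + np.exp(-y * y)           # jump intensity, 1 < lambda <= 2
LAM_BAR = 2.0                                  # upper bound from (5)
pi_switch = lambda i, j, y: 0.5                # switching probabilities pi_ij(y)
sample_theta = lambda y: rng.uniform(-1, 1)    # density p(theta, y): uniform here
omega = lambda theta, y: y + theta             # post-jump transformation

def next_jump_time(i, y):
    """Sample Delta tau with P(Delta tau <= t) = 1 - exp(-L(t, y, i)) by thinning:
    propose Exp(LAM_BAR) candidates, accept with probability lambda / LAM_BAR."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / LAM_BAR)
        if rng.random() <= lam(Pi(i, t, y)) / LAM_BAR:
            return t

def post_jump_chain(y0, i0, n):
    """Simulate n steps of the post-jump chain {(Y_n, xi_n)}."""
    y, i, out = y0, i0, []
    for _ in range(n):
        dt = next_jump_time(i, y)
        y_pre = Pi(i, dt, y)                       # flow until the jump
        y = omega(sample_theta(y_pre), y_pre)      # jump via omega(eta, .)
        i = 1 if rng.random() < pi_switch(i, 1, y) else 2   # switch via pi_ij(y)
        out.append((y, i))
    return out

print(post_jump_chain(0.5, 1, 5))
```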

We can easily check that the process {(Yn, ξn)}n∈ℕ0 is a time-homogeneous Markov chain with phase space X such that the evolution of the distributions μn(·) := Probμ((Yn, ξn) ∈ ·) can be described by the Markov operator P : ℳ+(X) → ℳ+(X) given by

(7) $$P\mu(A) = \sum_{j \in I} \int_\Theta \int_X \int_0^\infty \mathbb{1}_A(\omega(\theta, \Pi_i(t, y)), j)\, \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)}\, \pi_{ij}(\omega(\theta, \Pi_i(t, y)))\, p(\theta, \Pi_i(t, y))\, dt\, \mu(dy, di)\, d\theta$$

for μ ∈ ℳ+(X), A ∈ ℬ(X). Its dual operator U : B(X) → B(X) is given by

$$Uf(y, i) = \sum_{j \in I} \int_\Theta \int_0^\infty f(\omega(\theta, \Pi_i(t, y)), j)\, \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)}\, \pi_{ij}(\omega(\theta, \Pi_i(t, y)))\, p(\theta, \Pi_i(t, y))\, dt\, d\theta$$

for (y, i) ∈ X, f ∈ B(X), where L is defined in (6).

We adopt the following assumptions:

  • (A1)

    There is y* ∈ Y such that

    $$\sup_{y \in Y} \int_0^\infty e^{-\underline{\lambda} t} \int_\Theta \rho(\omega(\theta, \Pi_i(t, y_*)), y_*)\, p(\theta, \Pi_i(t, y))\, d\theta\, dt < \infty \quad \text{for } i \in I.$$

  • (A2)

    There exist constants γ ∈ ℝ, L > 0, and a function ℒ : Y → ℝ+, bounded on bounded subsets of Y, such that

    $$\rho(\Pi_i(t, y_1), \Pi_j(t, y_2)) \le L e^{\gamma t} \rho(y_1, y_2) + t\,\mathcal{L}(y_2)\,\Psi(i, j)$$

    for t ∈ ℝ+, y1, y2 ∈ Y, i, j ∈ I, where Ψ(i, j) is given by (4).

  • (A3)

    There exists a constant M > 0 such that

    $$\int_\Theta \rho(\omega(\theta, y_1), \omega(\theta, y_2))\, p(\theta, y_1)\, d\theta \le M \rho(y_1, y_2) \quad \text{for } y_1, y_2 \in Y.$$

  • (A4)

    There exists S > 0 such that

    $$|\lambda(y_1) - \lambda(y_2)| \le S \rho(y_1, y_2) \quad \text{for } y_1, y_2 \in Y.$$

  • (A5)

    There exist T > 0 and W > 0 such that

    $$\sum_{j \in I} |\pi_{ij}(y_1) - \pi_{ij}(y_2)| \le T \rho(y_1, y_2) \quad \text{for } y_1, y_2 \in Y,\ i \in I,$$

    $$\int_\Theta |p(\theta, y_1) - p(\theta, y_2)|\, d\theta \le W \rho(y_1, y_2) \quad \text{for } y_1, y_2 \in Y.$$

  • (A6)

    There exist ɛπ > 0 and ɛp > 0 such that

    $$\sum_{j \in I} \min\{\pi_{i_1 j}(y_1), \pi_{i_2 j}(y_2)\} \ge \varepsilon_\pi \quad \text{for } y_1, y_2 \in Y,\ i_1, i_2 \in I,$$

    $$\int_{\Theta(y_1, y_2)} \min\{p(\theta, y_1), p(\theta, y_2)\}\, d\theta \ge \varepsilon_p \quad \text{for } y_1, y_2 \in Y,$$

    where Θ(y1, y2) = {θ ∈ Θ : ρ(ω(θ, y1), ω(θ, y2)) ≤ ρ(y1, y2)}.

As mentioned at the beginning of this section, the constant c appearing in (3) needs to be sufficiently large and depends on the constants appearing in conditions (A1)–(A4). For more details, we refer to [6].

In [6] it is shown that if the conditions (A1)–(A6) hold and the constants occurring in them satisfy the inequality

(8) $$LM\overline{\lambda} + \gamma < \underline{\lambda},$$

then the operator P given by (7) is exponentially ergodic.

Let us briefly justify the conditions (A1)–(A6) stated above. Condition (A2) is met by a fairly large class of semiflows defined on reflexive Banach spaces, generated by certain differential equations involving dissipative operators [10]. It often happens [5] that condition (A1) is then a consequence of conditions (A2) and (A3). Conditions (A3)–(A6), which are essentially contractivity-type assumptions, are quite natural in our setting and commonly used in the study of ergodic properties. One may consult [13, 15] for details.

4. The main result

In this section we prove the CLT for the model {(Yn, ξn)}n∈ℕ0 described in the previous section. Our proof is strongly based on the proof of Theorem 4.1 in [4]. It requires strengthened versions of the conditions (A1) and (A3) of the model, namely:

  • (A1)′

    There exists ỹ ∈ Y such that

    $$\sup_{y \in Y} \int_0^\infty e^{-\underline{\lambda} t} \int_\Theta [\rho(\omega(\theta, \Pi_i(t, \tilde{y})), \tilde{y})]^2\, p(\theta, \Pi_i(t, y))\, d\theta\, dt < \infty \quad \text{for } i \in I.$$

  • (A3)′

    There exists M′ > 0 such that

    $$\int_\Theta [\rho(\omega(\theta, y_1), \omega(\theta, y_2))]^2\, p(\theta, y_1)\, d\theta \le M' [\rho(y_1, y_2)]^2 \quad \text{for } y_1, y_2 \in Y.$$

It is important to notice that (A1)′ and (A3)′ are indeed more restrictive than the base conditions (A1) and (A3), respectively: since p(·, y) is a probability density for each y, the Cauchy–Schwarz inequality shows that (A3)′ implies (A3) with M = √M′, and likewise (A1)′ implies (A1) with y* = ỹ.

Let us also define a function V : X → ℝ+ by

(9) $$V(y, i) = \rho(y, \tilde{y}) \quad \text{for } (y, i) \in X,$$

where ỹ is the point provided by condition (A1)′.

Theorem 2

Let {(Yn, ξn)}n∈ℕ0 be the Markov chain with transition law corresponding to the Markov operator given by (7) and let (A1)′, (A2), (A3)′, (A4)–(A6) hold with constants satisfying

(10) $$\left( \frac{\overline{\lambda}}{\underline{\lambda}} L \right)^2 M' + \frac{2\gamma}{\underline{\lambda}} < 1.$$

Let g ∈ Lip(X). If the initial distribution μ of the chain {(Yn, ξn)}n∈ℕ0 belongs to $\mathcal{M}_1^V(X)$, where V is defined by (9), then {g(Yn, ξn)}n∈ℕ0 satisfies the CLT.

Proof

Before we proceed to verifying that the conditions (B0)–(B5) of Theorem 1 are met, we check that the inequality (10) implies (8) with M := √M′.

From (10) we see that $\gamma/\underline{\lambda} < 1/2$; in particular $1 - \gamma/\underline{\lambda} > 0$. Suppose, contrary to our claim, that

$$LM\overline{\lambda} + \gamma \ge \underline{\lambda}.$$

Then

$$\frac{LM\overline{\lambda}}{\underline{\lambda}} \ge 1 - \frac{\gamma}{\underline{\lambda}} > 0.$$

Squaring and using the Bernoulli inequality, we obtain

$$\left( \frac{LM\overline{\lambda}}{\underline{\lambda}} \right)^2 \ge \left( 1 - \frac{\gamma}{\underline{\lambda}} \right)^2 \ge 1 - \frac{2\gamma}{\underline{\lambda}},$$

which contradicts condition (10).
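As a quick sanity check of this implication, one can sample admissible constants at random and assert that (10) indeed forces (8) with M = √M′; a throwaway sketch (assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)

# Randomized sanity check: whenever the sampled constants satisfy (10),
# they must also satisfy (8) with M = sqrt(M').
for _ in range(100_000):
    lam_low = rng.uniform(0.1, 5.0)               # lambda underline
    lam_bar = lam_low + rng.uniform(0.0, 5.0)     # lambda bar >= lambda underline
    L, Mp = rng.uniform(0.01, 3.0), rng.uniform(0.01, 3.0)
    gamma = rng.uniform(-3.0, 3.0)
    if (lam_bar / lam_low * L) ** 2 * Mp + 2.0 * gamma / lam_low < 1.0:   # (10)
        assert L * np.sqrt(Mp) * lam_bar + gamma < lam_low                # (8)
print("no counterexample found")
```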

Since the conditions (A1)′ and (A3)′ imply (A1) and (A3), respectively, an inspection of the proof of [6, Theorem 3.1] shows that the assumptions (A1)′, (A2), (A3)′, (A4)–(A6) guarantee that all the conditions (B0)–(B5) except (B1) are fulfilled. It remains to show that (B1) is also met.

We start by describing the left-hand side of this condition:

$$UV^2(y, i) = \int_\Theta \int_0^\infty [\rho(\omega(\theta, \Pi_i(t, y)), \tilde{y})]^2\, \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)}\, p(\theta, \Pi_i(t, y))\, dt\, d\theta.$$

Let us fix (y, i) ∈ X and define ν ∈ ℳ1(ℝ+ × Θ) by

$$\nu(A) = \int_0^\infty \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)} \int_\Theta \mathbb{1}_A(t, \theta)\, p(\theta, \Pi_i(t, y))\, d\theta\, dt$$

and a function φ0 : ℝ+ × Θ → ℝ by

$$\varphi_0(t, \theta) = \rho(\omega(\theta, \Pi_i(t, y)), \tilde{y}) \quad \text{for } (t, \theta) \in \mathbb{R}_+ \times \Theta.$$

Now, using the triangle inequality together with the Minkowski inequality, we get

$$\begin{aligned} (UV^2(y, i))^{1/2} &= \left[ \int_{\mathbb{R}_+ \times \Theta} \varphi_0^2(t, \theta)\, \nu(dt \times d\theta) \right]^{1/2} \\ &\le \left[ \int_{\mathbb{R}_+ \times \Theta} [\rho(\omega(\theta, \Pi_i(t, y)), \omega(\theta, \Pi_i(t, \tilde{y})))]^2\, \nu(dt \times d\theta) \right]^{1/2} + \left[ \int_{\mathbb{R}_+ \times \Theta} [\rho(\omega(\theta, \Pi_i(t, \tilde{y})), \tilde{y})]^2\, \nu(dt \times d\theta) \right]^{1/2}. \end{aligned}$$

To shorten the notation, we denote

$$I_1 = \int_{\mathbb{R}_+ \times \Theta} [\rho(\omega(\theta, \Pi_i(t, y)), \omega(\theta, \Pi_i(t, \tilde{y})))]^2\, \nu(dt \times d\theta), \qquad I_2 = \int_{\mathbb{R}_+ \times \Theta} [\rho(\omega(\theta, \Pi_i(t, \tilde{y})), \tilde{y})]^2\, \nu(dt \times d\theta).$$

Using conditions (A2) and (A3)′, we have

$$\begin{aligned} I_1 &\le \int_0^\infty \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)}\, M' [\rho(\Pi_i(t, y), \Pi_i(t, \tilde{y}))]^2\, dt \\ &\le \int_0^\infty \lambda(\Pi_i(t, y))\, e^{-L(t, y, i)}\, M' [L e^{\gamma t} \rho(y, \tilde{y})]^2\, dt \\ &\le \overline{\lambda} L^2 M' [\rho(y, \tilde{y})]^2 \int_0^\infty e^{-(\underline{\lambda} - 2\gamma)t}\, dt, \end{aligned}$$

where in the last inequality we used (5).

We obtain

$$I_1 \le \frac{\overline{\lambda} L^2 M'}{\underline{\lambda} - 2\gamma}\, V^2(y, i).$$

Using (10), we check that

$$0 < \frac{\overline{\lambda} L^2 M'}{\underline{\lambda} - 2\gamma} \le \frac{\overline{\lambda}^2 L^2 M'}{\underline{\lambda}(\underline{\lambda} - 2\gamma)} < 1.$$

The integral I2 is finite by condition (A1)′. Finally, setting

$$a = \sqrt{\frac{\overline{\lambda} L^2 M'}{\underline{\lambda} - 2\gamma}} \quad \text{and} \quad b = \sup_{y \in Y} \left( \int_0^\infty e^{-\underline{\lambda} t} \int_\Theta [\rho(\omega(\theta, \Pi_i(t, \tilde{y})), \tilde{y})]^2\, p(\theta, \Pi_i(t, y))\, d\theta\, dt \right)^{1/2},$$

we arrive at

$$UV^2(y, i) \le (aV(y, i) + b)^2.$$

This establishes (B1) and completes the proof of the theorem. □

DOI: https://doi.org/10.2478/amsil-2025-0004 | Journal eISSN: 2391-4238 | Journal ISSN: 0860-2107
Language: English
Submitted on: Sep 8, 2024
Accepted on: Jan 15, 2025
Published on: Mar 9, 2025
Published by: University of Silesia in Katowice, Institute of Mathematics

© 2025 Joanna Kubieniec, published by University of Silesia in Katowice, Institute of Mathematics
This work is licensed under the Creative Commons Attribution 4.0 License.
