Central Limit Theorem for Random “Contractive” Functions on Interval



1. Introduction

We are given a family of continuous functions {fi : [0, 1] → [0, 1] : i ∈ {1, . . . , N}} that are increasing and injective, possessing the following “contractive” properties:

  • (1)

    ∀ x∈(0,1) ∃ i,j∈{1,...,N}: f_i(x) < x < f_j(x),

  • (2)

    ∃ i∈{1,...,N}: f_i(0) > 0,

  • (3)

    ∃ i∈{1,...,N}: f_i(1) < 1.

We are also given a probability vector (p1, . . . , pN). The family (f1, . . . , fN; p1, . . . , pN) is called an iterated function system with probabilities.
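As a concrete illustration, the chain generated by such a system can be simulated directly. The sketch below uses a hypothetical example not taken from the paper: the maps f_1(x) = x/2 and f_2(x) = (x + 1)/2 with p = (1/2, 1/2). Both maps are increasing and injective, f_1(x) < x < f_2(x) on (0, 1), f_2(0) = 1/2 > 0, and f_1(1) = 1/2 < 1, so conditions (1)-(3) hold.

```python
import random

# Hypothetical example system (not from the paper): f1(x) = x/2 and
# f2(x) = (x + 1)/2 with probabilities (1/2, 1/2). Conditions (1)-(3)
# are easy to check for this pair of maps.
FS = [lambda x: x / 2, lambda x: (x + 1) / 2]
PROBS = [0.5, 0.5]

def simulate_chain(x0, n, rng):
    """Run the Markov chain X_{k+1} = f_{i_k}(X_k), with i_k drawn
    independently according to the probability vector."""
    xs = [x0]
    for _ in range(n):
        f = rng.choices(FS, weights=PROBS)[0]
        xs.append(f(xs[-1]))
    return xs

rng = random.Random(0)
path = simulate_chain(0.7, 10_000, rng)
# For this particular system the invariant measure is Lebesgue measure
# on [0, 1], so the time average should be close to 1/2.
print(sum(path) / len(path))
```

For this pair of maps the chain is the classical binary-digit construction of the uniform distribution, which is why the empirical average settles near 1/2 regardless of the starting point.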

This system generates a Markov chain whose distribution we will investigate. First, we ask whether this chain possesses a unique, non-atomic invariant measure. We will then show that, under the iterates of the corresponding Markov operator, every probability measure converges weakly to this unique invariant measure. Finally, we will prove the central limit theorem for this chain.

We follow the approach of [1], since the arguments for these properties turn out to be similar and easier to grasp. However, we take a different route to proving the existence of a unique invariant measure: we use the e-property introduced in [2], which makes the proof quicker.

2.
Theoretical base

Let (S, d) be a metric space. By (C(S), ‖ · ‖), we denote the family of all continuous and bounded functions f : S → ℝ, equipped with the supremum norm ‖ · ‖.

Let M(S) denote the set of all finite measures on the σ-algebra B(S) of Borel subsets of the set S. Let M_1(S) ⊂ M(S) denote the subset of all probability measures on S.

An operator P : M(S) → M(S) is called a Markov operator if it satisfies the following conditions:

  • (1)

    P(λ_1μ_1 + λ_2μ_2) = λ_1Pμ_1 + λ_2Pμ_2 for λ_1, λ_2 ≥ 0, μ_1, μ_2 ∈ M(S),

  • (2)

    Pμ(S) = μ(S) for μ ∈ M(S).

A Markov operator P is called a Feller operator if there exists a linear operator U : C(S) → C(S) such that ∫_S Uf dμ = ∫_S f dPμ for every f ∈ C(S) and μ ∈ M(S).

A measure μ is called P-invariant for the Markov operator P if Pμ = μ.

A Markov operator P is called asymptotically stable if it has a unique invariant measure μ_* ∈ M_1(S) and, moreover, for every measure μ ∈ M_1(S), the sequence (P^nμ)_{n∈ℕ} converges weakly to μ_*, i.e., for every f ∈ C(S),

lim_{n→∞} ∫_S f(x) P^nμ(dx) = ∫_S f(x) μ_*(dx).

Theorem 2.1 (Krylov-Bogolyubov [3]).

Let (S, d) be a compact metric space, and let P : (S) → (S) be a Feller operator defined on finite Borel measures on this space.

Then P has at least one invariant probability measure, i.e., there exists a measure μ ∈ M_1(S) such that Pμ(A) = μ(A) for every A ∈ B(S).

Let (Ω, ℱ, μ) be a measure space. A set A ∈ ℱ with μ(A) > 0 is called an atom if, for any B ∈ ℱ with B ⊂ A and μ(B) < μ(A), we have μ(B) = 0. A measure is called non-atomic (atomless) if it has no atoms.

We will consider the following Feller operator. Assume that f_i : [0, 1] → [0, 1] for i ∈ {1, . . . , N} are continuous functions and let (p_1, . . . , p_N) be a probability vector. The family (f_1, . . . , f_N; p_1, . . . , p_N) generates a Markov operator P : M([0, 1]) → M([0, 1]) of the form

Pμ(A) = Σ_{i=1}^{N} p_i μ(f_i^{−1}(A)) for A ∈ B([0, 1]).

This is a Feller operator, whose dual operator U : C([0, 1]) → C([0, 1]) is given by the formula

Uϕ(x) = Σ_{i=1}^{N} p_i ϕ(f_i(x)) for ϕ ∈ C([0, 1]), x ∈ [0, 1].
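The dual operator U is straightforward to iterate numerically. The sketch below uses the hypothetical maps f_1(x) = x/2 and f_2(x) = (x + 1)/2 with p = (1/2, 1/2) (an illustration, not a system from the paper). For ϕ(x) = x one computes Uϕ(x) = x/2 + 1/4, so the iterates U^nϕ converge uniformly to the constant function 1/2.

```python
# Hypothetical example maps (not from the paper).
FS = [lambda x: x / 2, lambda x: (x + 1) / 2]
PROBS = [0.5, 0.5]

def U(phi):
    """Dual operator: (U phi)(x) = sum_i p_i * phi(f_i(x))."""
    return lambda x: sum(p * phi(f(x)) for p, f in zip(PROBS, FS))

def U_pow(phi, n):
    """Return U^n phi by applying U n times. Note that evaluating the
    result at one point costs N**n map evaluations, so keep n small."""
    for _ in range(n):
        phi = U(phi)
    return phi

identity = lambda x: x
# U identity(x) = x/2 + 1/4, an affine contraction with fixed point 1/2,
# so U^n identity approaches the constant 1/2 at every point.
print(U_pow(identity, 15)(0.0), U_pow(identity, 15)(1.0))
```

That U^nϕ(x) becomes essentially independent of x for large n is exactly the asymptotic stability phenomenon studied below: U^nϕ(x) = ⟨P^nδ_x, ϕ⟩ converges to ⟨μ_*, ϕ⟩.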

Let H be the space of continuous functions f : [0, 1] → [0, 1] that are increasing and injective. Let {f1, . . . , fN} ⊂ H be a finite set of functions satisfying the following properties:

  • (1)

    ∀ x∈(0,1) ∃ i,j∈{1,...,N}: f_i(x) < x < f_j(x),

  • (2)

    ∃ i∈{1,...,N}: f_i(0) > 0,

  • (3)

    ∃ i∈{1,...,N}: f_i(1) < 1

and (p1, . . . , pN) is a probability vector. fd denotes a function satisfying condition (2), and fg denotes a function satisfying condition (3). The corresponding probabilities are denoted by pd and pg.

The system (f_1, . . . , f_N; p_1, . . . , p_N) has a corresponding Markov operator P defined as before. For each measure ν ∈ M_1([0, 1]), we describe the Markov chain (X_n) with the transition probability π(x, A) = Pδ_x(A) for x ∈ [0, 1], A ∈ B([0, 1]), and initial distribution ν using the probability measure ℙ_ν on the space ([0, 1]^ℕ, B([0, 1])^{⊗ℕ}) such that

ℙ_ν[X_{n+1} ∈ A | X_n = x] = π(x, A) and ℙ_ν[X_0 ∈ A] = ν(A),

where x ∈ [0, 1], A ∈ B([0, 1]).

Since the interval [0, 1] is compact, the Markov operator P has at least one invariant measure μ_*, as a consequence of the Krylov-Bogolyubov theorem. It remains to show that this invariant measure is unique, and that it is non-atomic.

3. The invariant measure

We introduce a few theorems and lemmas needed to prove that the invariant measure is atomless and unique.

Theorem 3.1 ([4], Corollary 2.13).

Let ω ↦ (g_ω^n)_{n≥0} be a non-degenerate random walk generated by a subgroup G ⊂ Homeo([0, 1]), such that:

  • (1)

    There is no non-trivial interval I ⊂ [0, 1] invariant under G.

  • (2)

    There exists at least one probability measure μ on (0, 1) that is stationary for the random walk.

Then there exists q < 1 such that for every x ∈ (0, 1), there exists a neighborhood I_x of x such that for almost all i ∈ Σ the following holds:

|g_i^n(I_x)| ≤ q^n for every n ∈ ℕ.

Moreover, if the first condition is violated, the theorem holds for x ∈ (inf(supp μ), sup(supp μ)).

Remark 3.1.

Note that the theorem is also true on the interval [a, b].

Theorem 3.2.

There exists q < 1 such that for every x ∈ (0, 1), there exists a neighborhood I_x of x such that for almost all i ∈ Σ the following holds:

|f_i^n(I_x)| ≤ q^n for every n ∈ ℕ.

Proof

Let g_i ∈ Homeo([−ε, 1 + ε]) be such that g_i(x) = f_i(x) for x ∈ [0, 1]. This family does not satisfy the first assumption of the previous theorem, because [0, 1] is invariant under G. We will prove that there exists a stationary measure for the random walk generated by G.

By the Krylov-Bogolyubov theorem, the Markov operator generated by the family (f_1, . . . , f_N; p_1, . . . , p_N) has an invariant measure μ. Let μ* ∈ M([−ε, 1 + ε]) be defined by μ*(A) = μ(A ∩ [0, 1]). It is easy to verify that μ* is an invariant measure for the operator P_G generated by G. We will show that μ* is a stationary measure for the random walk generated by G, i.e., that for every measurable set A,

∫_{[−ε,1+ε]} π(x, A) dμ*(x) = μ*(A).

Indeed,

∫_{[−ε,1+ε]} π(x, A) dμ*(x) = ∫_{[−ε,1+ε]} P_G δ_x(A) dμ*(x) = Σ_{i=1}^{N} p_i ∫_{[−ε,1+ε]} δ_x(g_i^{−1}(A)) dμ*(x).

The last integral equals μ*(g_i^{−1}(A)), so the right-hand side is P_G μ*(A) = μ*(A), and μ* is a stationary measure. From the definition of μ*, it follows that supp(μ*) = supp(μ), and later in the paper we will prove that {0, 1} ⊂ supp(μ), so by the previous theorem the statement holds.
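The geometric shrinking asserted by Theorem 3.2 can be observed numerically. The sketch below uses the hypothetical maps f_1(x) = x/2 and f_2(x) = (x + 1)/2 (not the paper's; both happen to contract uniformly with slope 1/2, so the decay of image lengths is immediate here, whereas the theorem also covers systems without any uniform contraction).

```python
import random

# Hypothetical example maps (not from the paper); both have slope 1/2.
FS = [lambda x: x / 2, lambda x: (x + 1) / 2]
PROBS = [0.5, 0.5]

def image_lengths(a, b, n, rng):
    """Lengths |f_i^k([a, b])| along one random composition
    f_i^k = f_{i_k} o ... o f_{i_1}, for k = 1, ..., n."""
    lengths = []
    for _ in range(n):
        f = rng.choices(FS, weights=PROBS)[0]
        a, b = f(a), f(b)   # increasing maps preserve the order of endpoints
        lengths.append(b - a)
    return lengths

rng = random.Random(0)
ls = image_lengths(0.4, 0.6, 30, rng)
# Both maps contract by exactly 1/2, so |f_i^k([0.4, 0.6])| = 0.2 * 2**(-k).
print(ls[0], ls[-1])
```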

Before we proceed to prove the uniqueness of the invariant measure, we will prove that invariant measures must be atomless, and their supports contain the endpoints of the interval.

Lemma 3.1.

Every invariant measure of the operator P is atomless.

Proof

Suppose there exists an invariant measure μ* for the operator P that has an atom. Since μ* is finite, it has at most countably many atoms and the supremum of their masses is attained. Let a ∈ (0, 1) be a point such that

μ*({a}) = sup_{x∈(0,1)} μ*({x}) > 0.

(Atoms at the endpoints are excluded: since f_d(0) > 0, the term corresponding to f_d vanishes in the invariance identity for {0}, which gives μ*({0}) ≤ (1 − p_d)μ*({0}), so μ*({0}) = 0; the same argument with f_g works for 1.) Since μ* is invariant under the operator P, we know that

μ*({a}) = Σ_{i=1}^{N} p_i μ*(f_i^{−1}({a})).

Each set f_i^{−1}({a}) is empty or a single point of mass at most μ*({a}), and Σ p_i = 1, so equality forces every f_i^{−1}({a}) to be a singleton with μ*({f_i^{−1}(a)}) = μ*({a}). By condition (1) there is a map f_i with f_i(a) < a, so f_i^{−1}(a) > a; since each new preimage again realizes the supremum, the argument applies repeatedly and yields a strictly increasing sequence of distinct points, each carrying the mass μ*({a}) > 0. This would imply μ*((0, 1)) = ∞, contradicting the finiteness of μ*. Hence every invariant measure of the operator P is atomless.

Lemma 3.2.

Every invariant measure of the operator P contains the points 0 and 1 in its support.

Proof

Suppose that 0 ∉ supp(μ*). Then, by the atomlessness of μ*, there exists a largest a > 0 such that μ*([0, a]) = 0. However, from the invariance of μ*, we obtain

0 = μ*([0, a]) = Σ_{i=1}^{N} p_i μ*(f_i^{−1}([0, a])),

which implies that μ*(f_i^{−1}([0, a])) = 0 for every i. However, by condition (1) there exists some i for which f_i(a) < a, so f_i^{−1}([0, a]) is an interval [0, a′] with a′ > a, contradicting the maximality of a. A similar argument works for 1.

Definition 3.1 ([2]).

Let S be a compact metric space. We say that a Feller operator P has the e-property at the point x ∈ S if for every Lipschitz function ϕ : S → ℝ the following holds:

lim_{y→x} sup_{n∈ℕ} |U^nϕ(x) − U^nϕ(y)| = 0.

If P has the e-property at every point, we say that P has the e-property.

Theorem 3.3 ([3, Lemma 2.8]).

Let P be a Feller operator that possesses the e-property. Then for any two distinct ergodic measures μ, ν ∈ M_1(S), the following holds:

supp μ ∩ supp ν = ∅.

In particular, from the proof of the lemma it follows that if P has the e-property at x, then x ∉ supp μ ∩ supp ν.

We now proceed with the proof of the uniqueness of the invariant measure.

Theorem 3.4.

There exists exactly one invariant measure for the Feller operator P.

Proof

We know that if μ is an invariant measure, then 0 ∈ supp μ. Therefore, it suffices to check that the operator P has the e-property at 0. Let ε > 0 be given. We need to show that there exists δ > 0 such that if h < δ, then for every n ∈ ℕ,

|Σ_{i∈Σ_n} p_i f_i(h) − Σ_{i∈Σ_n} p_i f_i(0)| = Σ_{i∈Σ_n} p_i |f_i([0, h])| < ε

(the equality holds because the maps f_i are increasing). Let N_1 be large enough that (1 − p_d)^{N_1} < ε/3, and let

Σ_n^1 = {i ∈ Σ_n : f_{i_k} ≠ f_d for 1 ≤ k ≤ N_1}.

Note that ℙ_n(Σ_n^1) = (1 − p_d)^{N_1}. Let Σ_n^2 = {i ∈ Σ_n : i_{|N_1} ∉ Σ_{N_1}^1}. For i ∈ Σ_n^2, let x_i = f_i(0). Since 0 < x_i < 1, there exists a neighborhood J_i of x_i such that for almost all j ∈ Σ we have |f_j^k(J_i)| < q^k. Let

Σ^3 = {j ∈ Σ : for every i ∈ Σ_n^2, |f_j^k(J_i)| < q^k for every k}.

Clearly, ℙ(Σ^3) = 1. Choose δ small enough that f_i([0, δ]) ⊂ J_i for every i ∈ Σ_n^2. Let N_2 > N_1 be large enough that q^{N_2−N_1} < ε/3. Let

Σ_n^4 = {j* = (ij)_{|n} : i ∈ Σ_n^2 and j ∈ Σ^3} and Σ_n^5 = {j* = (ij)_{|n} : i ∈ Σ_n^2 and j ∉ Σ^3}.

Let N_3 > N_2 be large enough that ℙ_{N_3}(Σ_n^5) < ε/3.

For n > N_3, the following holds:

Σ_{i∈Σ_n} p_i |f_i([0, h])| = Σ_{i∈Σ_n^1} p_i |f_i([0, h])| + Σ_{i∈Σ_n^4} p_i |f_i([0, h])| + Σ_{i∈Σ_n^5} p_i |f_i([0, h])|
 ≤ Σ_{i∈Σ_n^1} p_i + Σ_{i∈Σ_n^4} p_i q^{n−N_1} + Σ_{i∈Σ_n^5} p_i
 ≤ (1 − p_d)^{N_1} + q^{N_2−N_1} + Σ_{i∈Σ_n^5} p_i ≤ ε.

Consequently, P has the e-property at 0. So if μ and ν are two distinct ergodic measures, then 0 ∉ supp μ ∩ supp ν, while on the other hand 0 ∈ supp μ ∩ supp ν, a contradiction, which completes the proof.

We will now proceed with the proof of the asymptotic stability of the operator P.

Theorem 3.5 ([1]).

Let μ_* be the unique invariant measure of the operator P. Then every probability measure μ converges weakly to μ_*, i.e., for every continuous function ϕ,

lim_{n→∞} ⟨P^nμ, ϕ⟩ = ⟨μ_*, ϕ⟩.

Proof

For ψ ∈ C([0, 1]), define the sequence of random variables on (Σ, ℙ) by

ξ_n^ψ(i) = ⟨μ_*, ψ ∘ f_{(i_n, i_{n−1}, . . . , i_1)}⟩.

We will show that (ξ_n^ψ) is a bounded martingale. The boundedness is obvious, so it is sufficient to show that

𝔼(ξ_{n+1}^ψ | ξ_1^ψ, . . . , ξ_n^ψ) = ξ_n^ψ.

Notice that every U ∈ σ(ξ_1^ψ, . . . , ξ_n^ψ) has the form U_n × Σ_1 × Σ_1 × · · · for some U_n ⊂ Σ_n. We have:

∫_U 𝔼(ξ_{n+1}^ψ | ξ_1^ψ, . . . , ξ_n^ψ) dℙ = ∫_U ξ_{n+1}^ψ dℙ_{n+1} = Σ_{i∈U_{n+1}} p_i ⟨μ_*, ψ ∘ f_i⟩
 = Σ_{i∈Σ_1} p_i ⟨μ_*, Σ_{j∈U_n} p_j ψ ∘ f_j ∘ f_i⟩
 = ⟨μ_*, U(Σ_{j∈U_n} p_j ψ ∘ f_j)⟩ = ⟨μ_*, Σ_{j∈U_n} p_j ψ ∘ f_j⟩
 = ∫_{U_n} ξ_n^ψ dℙ_n = ∫_U ξ_n^ψ dℙ,

where the second equality in the third line uses the invariance of μ_*. This completes this part of the proof. By the Martingale Convergence Theorem, ξ_n^ψ converges for almost every i ∈ Σ. The space C([0, 1]) is separable, so there exists a single set Σ_0 ⊂ Σ of full measure such that for every i ∈ Σ_0 the sequence ξ_n^ψ converges for every ψ from a countable dense subset of C([0, 1]), and hence, by density, for every ψ ∈ C([0, 1]). By the Riesz Representation Theorem, for i ∈ Σ_0 there exists a probability measure μ_i such that

(3.1) lim_{n→∞} ξ_n^ψ(i) = ⟨μ_i, ψ⟩.

We will show that for almost every i, the measure μ_i = δ_{v(i)} for some v(i) ∈ [0, 1]. It is sufficient to show that for every ε > 0 there exists a set Σ_ε ⊂ Σ of full measure such that for each i ∈ Σ_ε there exists an interval I of length less than ε with

μ_i(I) ≥ 1 − ε.

Then for i ∈ Σ^1 = ⋂_{n=1}^∞ Σ_{1/n}, we have μ_i = δ_{v(i)}, and Σ^1 is a set of full measure. Fix ε > 0 and choose a natural number l such that 1/l < ε. Let the interval [a, b] satisfy μ_*([a, b]) > 1 − ε and contain f_d(f_g(0)) and f_d(f_g(1)). We can choose j ∈ Σ such that f_{j_{|n}}(b) → 0; in particular, there exists n_1 such that f_{j_{|n_1}}(b) < a. In general, for k ≤ l there exists n_k such that

f_{j_{|n_k}}(b) < f_{j_{|n_{k−1}}}(a).

The intervals J_k = f_{j_{|n_k}}([a, b]), k = 1, . . . , l, are then pairwise disjoint subintervals of [0, 1], so for some k* we have |J_{k*}| ≤ 1/l < ε/2. Notice that

Σ_ε′ = {i ∈ Σ : |f_{i_{|n}}([a, b])| < ε/2 for infinitely many n}

is a set of full measure. Indeed, if i lies outside this set, there exists N such that |f_{i_{|n}}([a, b])| ≥ ε/2 for all n > N. However, with probability 1 the shifted sequence σ^N i contains the fragment (i_d, i_g) followed by the word j_{|n_{k*}}; since f_d(f_g([0, 1])) ⊂ [a, b], the corresponding composition maps [a, b] into an interval of length less than ε/2, so with probability 1 there exists n′ > N with |f_{i_{|n′}}([a, b])| < ε/2. By the compactness of the interval, it follows that for infinitely many n the image f_{i_{|n}}([a, b]) is contained in a single interval I of length ε. Hence Σ_ε′ ⊂ Σ_ε, which completes this part of the proof.

To show the asymptotic stability, it is sufficient to show that for a Lipschitz function ψ and any points x, y ∈ (0, 1),

lim_{n→∞} |⟨P^nδ_x, ψ⟩ − ⟨P^nδ_y, ψ⟩| = 0.

This suffices because for any measure μ ∈ M_1((0, 1)),

|⟨P^nμ, ψ⟩ − ⟨μ_*, ψ⟩| ≤ ∫_{(0,1)} ∫_{(0,1)} |⟨P^nδ_x, ψ⟩ − ⟨P^nδ_y, ψ⟩| μ(dx) μ_*(dy).

Let us fix points x, y ∈ (0, 1) with x < y, and choose any ε > 0. Since the measure μ_* is invariant, by Lemma 3.2 its support contains the points 0 and 1, so μ_*((0, x)) > 0 and μ_*((y, 1)) > 0. We know that for almost every sequence i = (i_1, i_2, . . .) ∈ Σ with respect to the measure ℙ,

μ_* ∘ f_{(i_1, i_2, . . . , i_n)}^{−1} ((v(i) − ε/2, v(i) + ε/2) ∩ (0, 1)) → 1 as n → ∞.

Since μ_*((0, x)) > 0 and μ_*((y, 1)) > 0, we can find points u_n ∈ (0, x) and v_n ∈ (y, 1) such that for sufficiently large n,

u_n, v_n ∈ f_{(i_1, i_2, . . . , i_n)}^{−1} ((v(i) − ε/2, v(i) + ε/2) ∩ (0, 1)).

Then f_{(i_1, . . . , i_n)}(u_n) and f_{(i_1, . . . , i_n)}(v_n) lie in the interval (v(i) − ε/2, v(i) + ε/2), and since the maps are increasing and u_n < x < y < v_n, the points f_{(i_1, . . . , i_n)}(x) and f_{(i_1, . . . , i_n)}(y) lie in this interval as well for sufficiently large n. Since ε > 0 was arbitrary, we conclude that for almost every i = (i_1, i_2, . . .) ∈ Σ with respect to ℙ, the following convergence holds:

lim_{n→∞} |f_{(i_1, . . . , i_n)}(x) − f_{(i_1, . . . , i_n)}(y)| = 0.

By equation (3.1) and the fact that ⟨P^nδ_z, ϕ⟩ = U^nϕ(z) for z ∈ [0, 1], we have:

(3.2) |⟨P^nδ_x, ϕ⟩ − ⟨P^nδ_y, ϕ⟩| = |U^nϕ(x) − U^nϕ(y)| ≤ L Σ_{(i_1,...,i_n)∈Σ_n} |f_{(i_1,...,i_n)}(x) − f_{(i_1,...,i_n)}(y)| p_{i_1} · · · p_{i_n},

where L is the Lipschitz constant of the function ϕ. We now check that the right-hand side tends to zero as n → ∞. For i = (i_1, . . . , i_n, . . .), define g_n(i) := |f_{i_{|n}}(x) − f_{i_{|n}}(y)|, where i_{|n} = (i_1, . . . , i_n). Then g_n(i) → 0 as n → ∞ for almost every i ∈ Σ with respect to ℙ. From the construction of the probability measures ℙ and ℙ_n for n ∈ ℕ, it follows that

ℙ(B_n × Σ_1 × Σ_1 × · · ·) = ℙ_n(B_n) for B_n ⊂ Σ_n.

Since g_n(i) depends solely on the first n coordinates, it follows that

∫_Σ g_n(i) dℙ(i) = ∫_{Σ_n} g_n(i) dℙ_n(i) = Σ_{(i_1,...,i_n)∈Σ_n} |f_{(i_1,...,i_n)}(x) − f_{(i_1,...,i_n)}(y)| p_{i_1} · · · p_{i_n}.

The functions g_n are bounded by 1, so by Lebesgue's dominated convergence theorem,

lim_{n→∞} Σ_{(i_1,...,i_n)∈Σ_n} |f_{(i_1,...,i_n)}(x) − f_{(i_1,...,i_n)}(y)| p_{i_1} · · · p_{i_n} = 0.

Hence the right-hand side of (3.2) tends to zero, which completes the proof.

4. Central Limit Theorem
Lemma 4.1 ([1]).

Let the family (f_1, . . . , f_N; p_1, . . . , p_N) be a contracting iterated function system, let a ∈ (0, 1/2), and put J = [a, 1 − a]. Then there exist r ∈ ℕ and Ω ⊆ Σ with ℙ(Ω) > 0 such that

Σ_{n=1}^∞ |f_i^n(J)| ≤ r + q/(1 − q)

for every i ∈ Ω, where q is the constant given by Theorem 3.2.

Lemma 4.2.

Let J = [a, 1 − a] be such that f_d(f_g(0)), f_d(f_g(1)) ∈ J. Let E_n = {i ∈ Σ : f_i^{⌊n^{1/4}⌋}([0, 1]) ⊂ J}. Then, for n ≥ 16, there exists β = p_d p_g > 0 such that ℙ(E_n) ≥ β.
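Lemma 4.2 can be sanity-checked by simulation on the hypothetical system f_1(x) = x/2, f_2(x) = (x + 1)/2 with p = (1/2, 1/2) (not the paper's example). Here one may take f_d = f_2 and f_g = f_1, and J = [0.2, 0.8] contains f_d(f_g(0)) = 1/2 and f_d(f_g(1)) = 3/4. The sketch estimates the probability that a random word of length k maps all of [0, 1] into J; the estimate should comfortably exceed p_d p_g = 1/4.

```python
import random

# Hypothetical example maps (not from the paper).
FS = [lambda x: x / 2, lambda x: (x + 1) / 2]
PROBS = [0.5, 0.5]
J = (0.2, 0.8)

def word_image_in_J(k, rng):
    """Apply a random word of length k to the endpoints of [0, 1] and
    report whether the image interval lies inside J."""
    a, b = 0.0, 1.0
    for _ in range(k):
        f = rng.choices(FS, weights=PROBS)[0]
        a, b = f(a), f(b)
    return J[0] <= a and b <= J[1]

rng = random.Random(0)
trials = 2000
k = 5            # plays the role of floor(n**(1/4)) for n of order 5**4
hits = sum(word_image_in_J(k, rng) for _ in range(trials))
print(hits / trials)   # estimate of P(E_n); should be well above 1/4 here
```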

Lemma 4.3.

Let 𝒜 ⊂ Σ be a set such that ℙ(𝒜) ≥ β for some β > 0 and let k, n ∈ ℕ with k < n. Then there exists a set A ⊂ Σ_n such that ℙ_n(Σ_n \ A) ≤ (1 − β)^k and for any i ∈ A there exist i^1, i^2, . . . , i^k such that i = i^1 i^2 · · · i^k and for j = 1, . . . , k at least one of the sequences i^j, σi^j, . . . , σ^{k−1}i^j is dominated by 𝒜.

Theorem 4.1 (Central Limit Theorem).

Let (X_n) be a stationary Markov chain generated by the random walk, with initial distribution μ_*. If ϕ : [0, 1] → ℝ is a Lipschitz function such that ∫_{[0,1]} ϕ dμ_* = 0, then the process ϕ(X_n) satisfies the central limit theorem. That is, the limit

σ² = lim_{n→∞} 𝔼((ϕ(X_0) + · · · + ϕ(X_n)) / √n)²

exists, and

(ϕ(X_0) + · · · + ϕ(X_n)) / √n ⇒ 𝒩(0, σ) as n → ∞.

Proof

From the uniqueness of the ergodic measure μ_*, we know that the chain is ergodic. Therefore, by Theorem 1 from [5], it is sufficient to show that

Σ_{n=1}^∞ n^{−3/2} ‖Σ_{j=1}^n U^jϕ‖_{L²} < ∞.

Let E_n be as in Lemma 4.2; clearly ℙ(E_n) ≥ β > 0. Let Ω be as in Lemma 4.1, and let

𝒜 = {(ij)_{|n} : i ∈ E_n^{⌊n^{1/4}⌋}, j ∈ Ω}.

Clearly, ℙ(𝒜) ≥ α > 0 for some α independent of n. Then, by Lemma 4.3, for k = ⌊n^{1/8}⌋ we get a set A_n ⊂ Σ_n satisfying ℙ_n(B_n) ≤ (1 − α)^{⌊n^{1/8}⌋}, where B_n = Σ_n \ A_n. Moreover, if i ∈ A_n, then i = i^1 · · · i^k, where at least one of i^m, σi^m, . . . , σ^{k−1}i^m is dominated by 𝒜. Therefore, for any x, y we have

Σ_{j=1}^{|i^m|} |f_i^j(x) − f_i^j(y)| ≤ ⌊n^{1/8}⌋ + ⌊n^{1/4}⌋ + r + q/(1 − q) ≤ 2(⌊n^{1/4}⌋ + c),

and consequently

Σ_{j=1}^n |f_i^j(x) − f_i^j(y)| ≤ 2k(⌊n^{1/4}⌋ + c) ≤ C n^{3/8}.

Let L be the Lipschitz constant of the function ϕ. Since

∫_{[0,1]} ϕ dμ_* = ∫_{[0,1]} U^jϕ dμ_* = 0,

we have

∫_{[0,1]} |Σ_{j=1}^n U^jϕ(x)|² dμ_*(x) ≤ ∫_{[0,1]} ∫_{[0,1]} (Σ_{j=1}^n |U^jϕ(x) − U^jϕ(y)|)² dμ_*(x) dμ_*(y).

We know that

Σ_{j=1}^n |U^jϕ(x) − U^jϕ(y)| ≤ ∫_{A_n} Σ_{j=1}^n |ϕ(f_i^j(x)) − ϕ(f_i^j(y))| dℙ(i) + ∫_{B_n} Σ_{j=1}^n |ϕ(f_i^j(x)) − ϕ(f_i^j(y))| dℙ(i) ≤ LC n^{3/8} + 2nL ℙ(B_n) ≤ C_* n^{3/8}.

This implies that

∫_{[0,1]} ∫_{[0,1]} (Σ_{j=1}^n |U^jϕ(x) − U^jϕ(y)|)² dμ_*(x) dμ_*(y) ≤ C_*² n^{3/4},

so ‖Σ_{j=1}^n U^jϕ‖_{L²} ≤ C_* n^{3/8}. Since Σ_n n^{−3/2} · n^{3/8} = Σ_n n^{−9/8} < ∞, by Theorem 1 from [5] we obtain the result.
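The theorem can be sanity-checked by Monte Carlo on a hypothetical example system (not from the paper): f_1(x) = x/2, f_2(x) = (x + 1)/2, p = (1/2, 1/2), whose unique invariant measure is Lebesgue measure on [0, 1]. Taking ϕ(x) = x − 1/2 gives ∫ϕ dμ_* = 0, and for this chain one can compute by hand that Var ϕ(X_0) = 1/12 and Cov(ϕ(X_0), ϕ(X_k)) = 2^{−k}/12, so σ² = 1/12 + 2·(1/12)·Σ_{k≥1} 2^{−k} = 1/4.

```python
import math
import random

# Hypothetical example system (not from the paper); its invariant measure
# is Lebesgue measure on [0, 1], so phi(x) = x - 1/2 integrates to zero.
FS = [lambda x: x / 2, lambda x: (x + 1) / 2]
PROBS = [0.5, 0.5]
phi = lambda x: x - 0.5

def normalized_sum(n, rng):
    """One sample of (phi(X_0) + ... + phi(X_n)) / sqrt(n), X_0 ~ mu_*."""
    x = rng.random()               # stationary start: uniform on [0, 1]
    s = phi(x)
    for _ in range(n):
        x = rng.choices(FS, weights=PROBS)[0](x)
        s += phi(x)
    return s / math.sqrt(n)

rng = random.Random(0)
samples = [normalized_sum(400, rng) for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For this chain sigma^2 = 1/4, so the sample variance should be close
# to 0.25 and the sample mean close to 0.
print(round(mean, 3), round(var, 3))
```

The stationary start (X_0 uniform) matches the theorem's hypothesis; with 1000 runs the sample variance typically lands within a few percent of 1/4.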

DOI: https://doi.org/10.2478/amsil-2025-0005 | Journal eISSN: 2391-4238 | Journal ISSN: 0860-2107
Language: English
Page range: 1 - 12
Submitted on: Dec 26, 2024 | Accepted on: Mar 12, 2025 | Published on: Apr 26, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2025 Maciej Block, Wiktoria Kacprzak, Dorian Falęcki, published by University of Silesia in Katowice, Institute of Mathematics
This work is licensed under the Creative Commons Attribution 4.0 License.