1. Introduction
Iterative functional differential equations, as a special kind of functional differential equations, have been discussed in many monographs and research articles ([1], [7], [8], [9], [10], [11], [15]). Recently, several papers have appeared concerning equations of the form
x'(t) = H(t, x(t), x^{[2]}(t), \ldots, x^{[n]}(t)),
where x[k](t) = x(x[k−1](t)), k = 1, 2, . . .. More specifically, in [4], Cooke pointed out that it is highly desirable to study the properties of periodic solutions of the equation
x'(t) + ax(t - h(t,x(t))) = F(t).
Eder ([5]) studied the iterative functional differential equation
x'(t) = x(x(t))
and proved that every solution either vanishes identically or is strictly monotonic. In [6], Fečkan considered the equation
x'(t) = f(x(x(t))),
and obtained solutions satisfying x(0) = 0 via an existence theorem. Later, through a reversible transformation, Si, Wang and Cheng ([13]) studied the equation
x'(t) = {x^{[m]}}(t)
and gave the sufficient conditions for the existence of the analytic solutions. In [12], using fixed point theorem, Si and Cheng continued to discuss the existence of smooth solutions for the equation
(1.1)
x'(t) = \lambda_1 x(t) + \lambda_2 x^{[2]}(t) + \cdots + \lambda_n x^{[n]}(t) + F(t), \quad t \in \mathbb{R},
where x[k](t) = x(x[k−1](t)), k = 1, 2, . . ., denotes the k-th iterate of the unknown function x, λ1, λ2, . . . , λn are real constants, and F is a given function. In 2017, Zhao and Liu ([21]) proved the existence and stability of periodic solutions of (1.1) by a fixed point theorem. For various properties of solutions of iterative functional differential equations, we refer the interested reader to [2, 3], [14], [16, 17, 18].
Convexity ([9, 11]) is one of the most important concepts in optimization. Let I be an interval of the real line ℝ. A function x : I → ℝ is said to be convex if
x(\alpha t + (1 - \alpha )s) \le \alpha x(t) + (1 - \alpha )x(s)
holds for all t, s ∈ I and α ∈ [0, 1]. Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems, where they are distinguished by a number of convenient properties. In [19, 20], the authors considered convex solutions of an iterative functional equation and found sets of convex solutions depending on the convexity of a given function. In this paper, we study the existence, uniqueness and stability of convex solutions of (1.1), and we show that the convexity of the solutions is not affected by the convexity of the given function F.
The rest of the paper is organized as follows. In Section 2, we introduce some classes of functions related to convexity. In Section 3, we establish the existence of monotonic solutions of Eq. (1.1) under certain assumptions. Section 4 deals with the existence of convex solutions of (1.1), together with approximate convex solutions. In Section 5, we consider the uniqueness and stability of these solutions. The final section presents some examples.
2. Preliminaries
In this section we give some notations and several preparatory lemmas.
Fix ξ ∈ ℝ and δ > 0. Let I = [ξ − δ, ξ + δ], let C1(I, ℝ) be the set of all continuously differentiable functions defined on I, and let C1(I, I) be the set of all x ∈ C1(I, ℝ) that map the closed interval I into I. Consider the norm
\|x\|_1 = \|x\| + \|x'\|, \quad \text{where } \|x\| = \max\{|x(t)| : t \in I\};
then C1(I, ℝ) equipped with ‖·‖1 is a Banach space, and C1(I, I) is a subset of C1(I, ℝ).
Let M ≥ 1 ≥ m ≥ 0 and define
\Phi(I; m, M) = \left\{ x \in C^1(I, I) : x(\xi) = \xi,\ |x'(t)| \le M\ \forall t \in I,\ m(t_1 - s_1) \le x(t_1) - x(s_1)\ \forall t_1 \ge s_1 \text{ in } I \right\},
and denote by Φcv(I; m, M) and Φcc(I; m, M) the families of all convex functions and all concave functions in Φ(I; m, M), respectively.
Lemma 2.1
Φ(I; m, M), Φcv(I; m, M), and Φcc(I; m, M) are compact convex subsets of C1(I, ℝ).
Proof
We first show that Φ(I; m, M) is a convex set. For any x1, x2 ∈ Φ(I; m, M) and α ∈ [0, 1], put
X(t) = \alpha x_1(t) + (1 - \alpha) x_2(t), \quad \forall t \in I.
It is easy to check that X ∈ C1(I, I), X(ξ) = ξ and
(2.1)
|X'(t)| = |\alpha x_1'(t) + (1 - \alpha) x_2'(t)| \le \alpha M + (1 - \alpha) M = M, \quad \forall t \in I.
Similarly, we obtain
(2.2)
\begin{aligned} X(t) - X(s) &= \alpha (x_1(t) - x_1(s)) + (1 - \alpha)(x_2(t) - x_2(s)) \\ &\ge \alpha m(t - s) + (1 - \alpha) m(t - s) = m(t - s) \end{aligned}
for t ≥ s. Thus X ∈ Φ(I; m, M), that is Φ(I; m, M) is a convex set.
Furthermore, for a convergent sequence xn → x∗ (n → ∞) in Φ(I; m, M), it is easy to check that x∗ ∈ C1(I, I), x∗(ξ) = ξ, |x∗′(t)| ≤ M for all t ∈ I, and m(t2 − t1) ≤ x∗(t2) − x∗(t1) for all t2 ≥ t1 in I; thus x∗ ∈ Φ(I; m, M), and Φ(I; m, M) is a closed set. Since Φ(I; m, M) is bounded, it is a closed bounded subset of C1(I, ℝ). From the definition of Φ(I; m, M), for any t1, t2 ∈ I we have |x(t2) − x(t1)| ≤ M|t2 − t1|, which means Φ(I; m, M) is equicontinuous. By the Arzelà–Ascoli lemma, Φ(I; m, M) is a compact convex subset of C1(I, ℝ).
Now we prove that Φcv(I; m, M) is a compact convex subset of C1(I, ℝ). Suppose that x1, x2 ∈ Φcv(I; m, M) are convex functions and α ∈ [0, 1], and let
X_1(t) = \alpha x_1(t) + (1 - \alpha) x_2(t), \quad \forall t \in I.
Then (2.1) and (2.2) still hold with X = X1. Moreover,
\begin{aligned} X_1(\beta t + (1 - \beta)s) &\le \alpha (\beta x_1(t) + (1 - \beta) x_1(s)) + (1 - \alpha)(\beta x_2(t) + (1 - \beta) x_2(s)) \\ &= \beta (\alpha x_1(t) + (1 - \alpha) x_2(t)) + (1 - \beta)(\alpha x_1(s) + (1 - \alpha) x_2(s)) \\ &= \beta X_1(t) + (1 - \beta) X_1(s), \quad \forall t, s \in I,\ \beta \in [0, 1]. \end{aligned}
Hence X1 is a convex function, and Φcv(I; m, M) is a convex set. As in the case of Φ(I; m, M), we can prove that Φcv(I; m, M) is a compact convex subset of C1(I, ℝ).
Similarly, we can prove that the set Φcc(I; m, M) is a compact convex subset of C1(I, ℝ).
The following lemma can be found in [12].
Lemma 2.2
Suppose that x, y ∈ Φ(I; m, M), then the following inequalities hold:
- (i) |x[i](t2) − x[i](t1)| ≤ M^i |t2 − t1|, ∀ t1, t2 ∈ I, i = 0, 1, . . . , n;
- (ii) ‖x[i] − y[i]‖ ≤ Σ_{k=1}^{i} M^{k−1} ‖x − y‖, i = 1, 2, . . . , n;
- (iii) ‖x − y‖ ≤ δ‖x′ − y′‖.
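Since Lemma 2.2 is quoted from [12], we only sketch, for the reader's convenience, why (ii) and (iii) hold; the argument uses nothing beyond the bound |x′| ≤ M, the triangle inequality and the fact that x(ξ) = y(ξ) = ξ. For i ≥ 2,
\|x^{[i]} - y^{[i]}\| = \max_{t \in I}\big|x(x^{[i-1]}(t)) - y(y^{[i-1]}(t))\big| \le M\,\|x^{[i-1]} - y^{[i-1]}\| + \|x - y\|,
since |x(x[i−1](t)) − y(y[i−1](t))| ≤ |x(x[i−1](t)) − x(y[i−1](t))| + |x(y[i−1](t)) − y(y[i−1](t))|; iterating this recursion with ‖x[1] − y[1]‖ = ‖x − y‖ gives (ii). For (iii), x(t) − y(t) = ∫_ξ^t (x′(s) − y′(s)) ds, so |x(t) − y(t)| ≤ |t − ξ| ‖x′ − y′‖ ≤ δ‖x′ − y′‖.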
Now, we further consider the set
\Omega(I; m, M, k, K) = \left\{ x \in \Phi(I; m, M) : k \le \frac{x'(t_2) - x'(t_1)}{t_2 - t_1} \le K, \ \forall t_1 < t_2 \text{ in } I \right\},
where m, M, k, K are real constants, 0 ≤ m ≤ 1 ≤ M and K ≥ k.
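To make these function classes concrete, the following minimal Python sketch (an illustration only, not part of the paper) checks the defining inequalities of Φ(I; m, M) and Ω(I; m, M, k, K) numerically on a grid; the grid size, tolerance, finite-difference derivative and the helper names in_Phi and in_Omega are ad hoc assumptions.

```python
import numpy as np

def in_Phi(x, xi, delta, m, M, n=2001, tol=1e-9):
    """Grid check of the conditions defining Phi(I; m, M) for a callable x."""
    t = np.linspace(xi - delta, xi + delta, n)
    v = x(t)
    dv = np.gradient(v, t)                      # numerical derivative x'(t)
    maps_I_into_I = np.all(v >= xi - delta - tol) and np.all(v <= xi + delta + tol)
    fixes_xi = abs(x(xi) - xi) <= tol
    deriv_bound = np.all(np.abs(dv) <= M + tol)
    # lower Lipschitz condition: x(t1) - x(s1) >= m (t1 - s1) for t1 >= s1
    lower_lipschitz = np.all(np.diff(v) - m * np.diff(t) >= -tol)
    return maps_I_into_I and fixes_xi and deriv_bound and lower_lipschitz

def in_Omega(x, xi, delta, m, M, k, K, n=2001, tol=1e-9):
    """Grid check of the additional slope condition on x' defining Omega."""
    if not in_Phi(x, xi, delta, m, M, n, tol):
        return False
    t = np.linspace(xi - delta, xi + delta, n)
    dv = np.gradient(x(t), t)
    quot = np.diff(dv) / np.diff(t)             # difference quotients of x'
    return np.all(quot >= k - tol) and np.all(quot <= K + tol)

# Example: the identity map belongs to Phi(I; m, 1) for any 0 <= m <= 1,
# and to Omega(I; m, 1, k, K) only if k <= 0 <= K.
print(in_Phi(lambda t: t, xi=0.25, delta=0.125, m=0.2, M=1.0))
print(in_Omega(lambda t: t, xi=0.25, delta=0.125, m=0.2, M=1.0, k=0.0, K=1.0))
```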
Lemma 2.3
Ω(I; m, M, k, K) is a compact and convex subset of C1(I, ℝ). Moreover, Ω(I; m, M, k, K) ⊂ Φcv(I; m, M) if k > 0; Ω(I; m, M, k, K) ⊂ Φcc(I; m, M) if K < 0.
Proof
First, we shall prove that Ω(I; m, M, k, K) is a convex set. In fact, for any x1, x2 ∈ Ω(I; m, M, k, K), it suffices to consider α ∈ (0, 1). For all t1 < t2 in I, letting
X\left( t \right) = \alpha {x_1}\left( t \right) + \left( {1 - \alpha } \right){x_2}\left( t \right),
it is easy to see that X ∈ Φ(I; m, M),
\begin{aligned} \frac{X'(t_2) - X'(t_1)}{t_2 - t_1} &= \frac{\alpha x_1'(t_2) + (1 - \alpha) x_2'(t_2)}{t_2 - t_1} - \frac{\alpha x_1'(t_1) + (1 - \alpha) x_2'(t_1)}{t_2 - t_1} \\ &= \frac{\alpha (x_1'(t_2) - x_1'(t_1))}{t_2 - t_1} + \frac{(1 - \alpha)(x_2'(t_2) - x_2'(t_1))}{t_2 - t_1} \\ &\ge \alpha k + (1 - \alpha) k = k, \end{aligned}
and
\frac{X'(t_2) - X'(t_1)}{t_2 - t_1} \le \alpha K + (1 - \alpha) K = K.
Therefore, Ω(I; m, M, k, K) is a convex subset of Φ(I; m, M) ⊂ C1(I, ℝ). Clearly, the functions in Ω(I; m, M, k, K) are uniformly bounded and equicontinuous on I. By the Arzelà–Ascoli lemma, Ω(I; m, M, k, K) is a compact and convex subset of C1(I, ℝ).
Taking x ∈ Ω(I; m, M, k, K) with k > 0, for any t1 < t2 in I we have
{{x'\left( {{t_2}} \right) - x'\left( {{t_1}} \right)} \over {{t_2} - {t_1}}} \ge k > 0,
that is, x′(t2) > x′(t1). Therefore the derivative x′ is increasing on I, which means x is a convex function on I. So we get Ω(I; m, M, k, K) ⊂ Φcv(I; m, M) for k > 0.
Similarly, for K < 0, taking x ∈ Ω(I; m, M, k, K), for any t1 < t2 in I we have
\frac{x'(t_2) - x'(t_1)}{t_2 - t_1} \le K < 0, \quad \text{that is,} \quad x'(t_2) < x'(t_1).
So x is a concave function and Ω(I; m, M, k, K) ⊂ Φcc(I; m, M) for K < 0. This completes the proof.
3. Existence of monotonic solutions
In this section, we will prove the existence of continuous monotonic solutions of Eq. (1.1). We consider the following conditions on the coefficients of Eq. (1.1):
- (H1) λi ≥ 0, i = 1, 2, . . . , n;
- (H2) \sum_{i=1}^{n} \lambda_i > -1 with λi < 0, i = 1, 2, . . . , n.
Under condition (H1), we have the following theorem.
Theorem 3.1
Suppose that (H1) holds and 0 ≤ m ≤ 1 ≤ M. Let I = [ξ − δ, ξ + δ], where ξ and δ satisfy
(3.1)
\frac{m}{1 + \sum_{i=1}^{n} \lambda_i} < |\xi| < \frac{1}{1 + \sum_{i=1}^{n} \lambda_i},
(3.2)
0 < \delta < \min\left\{ \frac{1}{1 + \sum_{i=1}^{n} \lambda_i} - |\xi|,\ |\xi| - \frac{m}{1 + \sum_{i=1}^{n} \lambda_i} \right\},
and F ∈ Φ(I; m̃, M̃) with 0 ≤ m̃ ≤ 1 ≤ M̃. Then Eq. (1.1) has a continuous solution x ∈ Φ(I; m, M).
Proof
We will use Schauder's fixed point theorem to finish the proof. Define a mapping T : Φ(I; m, M) → C(I, ℝ) by
(3.3)
(Tx)(t) = \xi + \sum_{i=1}^{n} \lambda_i \int_\xi^t x^{[i]}(s)\,ds + \int_\xi^t F(s)\,ds, \quad \forall t \in I.
First, we will prove that for any x ∈ Φ(I; m, M), Tx ∈ Φ(I; m, M). It is easy to see that (Tx)(ξ) = ξ. Moreover,
\begin{aligned} |(Tx)(t) - \xi| &\le \sum_{i=1}^{n} \lambda_i \left| \int_\xi^t x^{[i]}(s)\,ds \right| + \left| \int_\xi^t F(s)\,ds \right| \\ &\le \sum_{i=1}^{n} \lambda_i (|\xi| + \delta)|t - \xi| + (|\xi| + \delta)|t - \xi| \\ &= \Big(1 + \sum_{i=1}^{n} \lambda_i\Big)(|\xi| + \delta)|t - \xi| \le \delta, \end{aligned}
where the last inequality holds because x ∈ Φ(I; m, M), F ∈ Φ(I; m̃, M̃) and, by (3.2), |ξ| + δ < 1/(1 + Σ_{i=1}^n λi). Thus Tx ∈ C1(I, I). Using (3.2), we get
|(Tx)'(t)| \le \sum_{i=1}^{n} \lambda_i (|\xi| + \delta) + (|\xi| + \delta) \le 1 \le M
and
\begin{aligned} (Tx)(t_2) - (Tx)(t_1) &= \sum_{i=1}^{n} \lambda_i \int_{t_1}^{t_2} x^{[i]}(s)\,ds + \int_{t_1}^{t_2} F(s)\,ds \\ &\ge \sum_{i=1}^{n} \lambda_i (|\xi| - \delta)(t_2 - t_1) + (|\xi| - \delta)(t_2 - t_1) \\ &\ge m(t_2 - t_1), \quad \forall t_2 > t_1 \text{ in } I. \end{aligned}
Therefore, T is a self-mapping on Φ(I; m, M).
Next, we will prove that T is continuous. For any x, y ∈ Φ(I; m, M), using (ii) and (iii) in Lemma 2.2, we have
(3.4)
\begin{aligned} \|Ty - Tx\|_1 &= \|Ty - Tx\| + \|(Ty)' - (Tx)'\| \\ &\le \sum_{i=1}^{n} \lambda_i \max_{t \in I}\left| \int_\xi^t (y^{[i]}(s) - x^{[i]}(s))\,ds \right| + \sum_{i=1}^{n} \lambda_i \max_{t \in I} |y^{[i]}(t) - x^{[i]}(t)| \\ &\le \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y - x\| + \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y - x\| \\ &\le \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y - x\| + \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y' - x'\| \\ &\le \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y - x\|_1, \end{aligned}
which shows that T is continuous.
From Lemma 2.1 and Schauder's fixed point theorem, we conclude that T has a fixed point x ∈ Φ(I; m, M), that is,
x(t) = \xi + \sum_{i=1}^{n} \lambda_i \int_\xi^t x^{[i]}(s)\,ds + \int_\xi^t F(s)\,ds, \quad t \in I.
Differentiating both sides of this equality, we see that x is the desired solution of (1.1). This completes the proof.
Arguing as in the proof of Theorem 3.1, we can prove the following theorem.
Theorem 3.2
Suppose that (H2) holds and 0 ≤ m ≤ 1 ≤ M. Let I = [ξ − δ, ξ + δ], where ξ and δ satisfy
(3.5)
0 < \frac{m}{1 + \sum_{i=1}^{n} \lambda_i} < |\xi| < \frac{1}{1 - \sum_{i=1}^{n} \lambda_i},
(3.6)
0 < \delta < \min\left\{ \frac{1}{1 - \sum_{i=1}^{n} \lambda_i} - |\xi|,\ \frac{2|\xi| - m}{1 - \sum_{i=1}^{n} \lambda_i} - |\xi| \right\},
and F ∈ Φ(I; m̃, M̃) with 0 ≤ m̃ ≤ 1 ≤ M̃. Then Eq. (1.1) has a continuous solution x ∈ Φ(I; m, M).
Next, we consider approximate solutions for the monotonic solutions of (1.1).
Theorem 3.3
In addition to the assumption of Theorem 3.1, suppose that
(3.7)
\delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} < 1.
Then for any x0 ∈ Φ(I; m, M), the sequence (x_k)_{k=0}^∞ ⊂ Φ(I; m, M) defined by xk = Txk−1, k = 1, 2, . . ., converges to a solution x∗ of Eq. (1.1).
Proof
Consider the mapping T on Φ(I; m, M) as in (3.3). Put
x_k = T x_{k-1}, \quad x_0 \in \Phi(I; m, M), \quad k \in \mathbb{N}.
Since T is a self-mapping on Φ(I; m, M), we know that (x_k)_{k=0}^∞ is a subset of Φ(I; m, M). By (3.4), we have
\|x_{k+1} - x_k\|_1 = \|Tx_k - Tx_{k-1}\|_1 \le \delta \sum_{i=1}^{n} \sum_{j=1}^{i} \lambda_i M^{j-1} \|x_k - x_{k-1}\|_1 = \Gamma \|x_k - x_{k-1}\|_1,
where \Gamma = \delta \sum_{i=1}^{n} \sum_{j=1}^{i} \lambda_i M^{j-1}. Then, by induction, we obtain
\|x_{k+1} - x_k\|_1 \le \Gamma^k \|x_1 - x_0\|_1.
Let
{x_r}\left( t \right) = {x_0}\left( t \right) + \sum\limits_{k = 0}^{r - 1} {({x_{k + 1}}\left( t \right) - {x_k}\left( t \right))}.
We now show that \sum_{k=0}^{r-1} (x_{k+1}(t) - x_k(t)) converges uniformly on the interval [ξ − δ, ξ + δ] as r → ∞, which implies that xr(t) has a limit on this interval as r → ∞. Indeed, since (3.7) gives Γ < 1, the series
\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|_1 \le \sum_{k=0}^{\infty} \Gamma^k \|x_1 - x_0\|_1 = \frac{1}{1 - \Gamma} \|x_1 - x_0\|_1
converges.
Therefore, (x_k)_{k=0}^∞ is a Cauchy sequence with respect to ‖·‖1 and converges uniformly to a continuous function x∗ on I. Noting that Φ(I; m, M) is compact (Lemma 2.1), (x_k)_{k=0}^∞ converges to x∗ in Φ(I; m, M). From (3.4), we see that T is continuous; thus x∗ ← xk+1 = Txk → Tx∗, and consequently Tx∗ = x∗. Therefore, the sequence of functions S = (x0(t), x1(t), . . . , xk(t), . . .) can be regarded as a sequence of approximate solutions of Eq. (1.1). This completes the proof.
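The scheme of Theorem 3.3 is constructive, so it can be carried out numerically. The following Python sketch (an illustration under our own discretization assumptions, not part of the proof) approximates the operator T of (3.3) on a grid, evaluating the iterates x[i] by linear interpolation and the integrals by the trapezoidal rule; the helper names make_T and picard, the grid size and the tolerance are hypothetical choices.

```python
import numpy as np

def make_T(lams, F, xi, delta, n=2001):
    """Discretized version of the operator T in (3.3) on I = [xi - delta, xi + delta]."""
    t = np.linspace(xi - delta, xi + delta, n)

    def T(x_vals):
        # evaluate the iterate x^{[i]}(t) by repeated linear interpolation of the grid values
        def iterate(i):
            v = t.copy()
            for _ in range(i):
                v = np.interp(v, t, x_vals)
            return v
        # cumulative trapezoidal integral from xi (the midpoint of I) to each grid point t
        def cumint(f_vals):
            c = np.concatenate(([0.0], np.cumsum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(t))))
            return c - np.interp(xi, t, c)      # shift so the integral vanishes at xi
        out = xi + sum(lam * cumint(iterate(i + 1)) for i, lam in enumerate(lams))
        return out + cumint(F(t))
    return t, T

def picard(lams, F, xi, delta, n=2001, iters=50, tol=1e-10):
    """Iterate x_{k+1} = T x_k starting from x_0(t) = t, as in Theorem 3.3."""
    t, T = make_T(lams, F, xi, delta, n)
    x = t.copy()                                # the identity map is an admissible x_0
    for _ in range(iters):
        x_new = T(x)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return t, x
```

Starting from x0(t) = t is legitimate since the identity map belongs to Φ(I; m, M) whenever m ≤ 1 ≤ M; for instance, for equation (6.1) in Section 6 one would call picard([0.5, 0.5], lambda s: s**2, xi=0.25, delta=0.125).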
Similarly, corresponding to Theorem 3.2, we have the following theorem.
Theorem 3.4
In addition to the assumption of Theorem 3.2, suppose that
(3.8)
- \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} < 1.
Then for any x0 ∈ Φ(I; m, M), the sequence (x_k)_{k=0}^∞ ⊂ Φ(I; m, M) defined by xk = Txk−1, k = 1, 2, . . ., converges to a solution x∗ of Eq. (1.1).
4. Existence of convex solutions
In this section, we will show that under certain conditions, Eq. (1.1) has convex solutions.
Theorem 4.1
Suppose that Eq. (1.1) satisfies the conditions of Theorem 3.1, and F ∈ Φ(I; m̃, M̃), where M̃ ≥ 1 ≥ m̃ ≥ 0. If M ≥ 1 ≥ m ≥ 0 and
(4.1)
0 < k \le \tilde m + \sum_{i=1}^{n} \lambda_i m^i \le \widetilde M + \sum_{i=1}^{n} \lambda_i M^i \le K,
then Eq. (1.1) has a convex solution x ∈ Ω(I; m, M, k, K).
Proof
Define the mapping T : Ω(I; m, M, k, K) → C(I, ℝ) as in (3.3). From Theorem 3.1, we see that Tx ∈ Φ(I; m, M) for every x ∈ Ω(I; m, M, k, K). Note that
(Tx)'(t) = \sum_{i=1}^{n} \lambda_i x^{[i]}(t) + F(t), \quad t \in I.
For all t1 < t2 in I, using (4.1), we have
\begin{aligned} \frac{(Tx)'(t_2) - (Tx)'(t_1)}{t_2 - t_1} &= \frac{\sum_{i=1}^{n} \lambda_i (x^{[i]}(t_2) - x^{[i]}(t_1)) + (F(t_2) - F(t_1))}{t_2 - t_1} \\ &\ge \tilde m + \sum_{i=1}^{n} \lambda_i m^i \ge k, \end{aligned}
and
\begin{aligned} \frac{(Tx)'(t_2) - (Tx)'(t_1)}{t_2 - t_1} &= \frac{\sum_{i=1}^{n} \lambda_i (x^{[i]}(t_2) - x^{[i]}(t_1)) + (F(t_2) - F(t_1))}{t_2 - t_1} \\ &\le \widetilde M + \sum_{i=1}^{n} \lambda_i M^i \le K. \end{aligned}
By the definition of Ω(I; m, M, k, K), we know that T is a self-mapping on Ω(I; m, M, k, K). Furthermore, similarly as in (3.4), we have
\|Tx - Ty\|_1 \le \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|x - y\|_1, \quad \forall x, y \in \Omega(I; m, M, k, K),
which means T is continuous. By Lemma 2.3 and Schauder's fixed point theorem, T has a fixed point x ∈ Ω(I; m, M, k, K), i.e.,
x(t) = \xi + \sum_{i=1}^{n} \lambda_i \int_\xi^t x^{[i]}(s)\,ds + \int_\xi^t F(s)\,ds, \quad t \in I.
Differentiating both sides of the above equality, we see that x is a convex solution of (1.1). This completes the proof.
Remark 4.1
In Theorem 4.1, the function F is only required to belong to Φ(I; m̃, M̃); no convexity of F is assumed. This means that the convexity of the solution x of (1.1) is not affected by the convexity of F.
In the same way, we have the following theorem.
Theorem 4.2
Suppose that Eq. (1.1) satisfies the conditions of Theorem 3.2, and F ∈ Φ(I; m̃, M̃), where M̃ ≥ 1 ≥ m̃ ≥ 0. If M ≥ 1 ≥ m ≥ 0 and
(4.2)
0 < k \le \tilde m + \sum_{i=1}^{n} \lambda_i M^i \le \widetilde M + \sum_{i=1}^{n} \lambda_i m^i \le K,
then Eq. (1.1) has a convex solution x ∈ Ω(I; m, M, k, K).
Next, similarly to Theorem 3.3, we obtain approximate convex solutions of (1.1). The proof is the same as that of Theorem 3.3, so we omit it.
Theorem 4.3
In addition to the assumptions of Theorem 4.1 (Theorem 4.2), suppose that (3.7) (respectively (3.8)) holds. Then for any x0 ∈ Ω(I; m, M, k, K), the sequence (x_k)_{k=0}^∞ ⊂ Ω(I; m, M, k, K) defined by xk = Txk−1, k = 1, 2, . . ., converges to a convex solution x∗ of Eq. (1.1).
5. Uniqueness and stability
In this section, we prove the uniqueness and stability (continuous dependence on the given function F) of the solutions of (1.1). We use the Banach contraction principle to establish uniqueness; the uniqueness then yields that the solution depends continuously on the given function F.
Theorem 5.1
In addition to the assumptions of Theorem 3.1, suppose that (3.7) holds. Then Eq. (1.1) has a unique solution in Φ(I; m, M), and this solution depends continuously on the given function F. Furthermore, the unique solution can be obtained as the limit of the sequence (x_k)_{k=0}^∞ ⊂ Φ(I; m, M), where x0 ∈ Φ(I; m, M), xk+1 = Txk, k = 0, 1, . . ., and T is defined as in (3.3).
Proof
From the proof of Theorem 3.1, we know that T is a self-mapping on Φ(I; m, M), and by (3.4) we have
\|Ty - Tx\|_1 \le \delta \sum_{i=1}^{n} \sum_{k=1}^{i} \lambda_i M^{k-1} \|y - x\|_1, \quad \forall x, y \in \Phi(I; m, M).
By (3.7) and the Banach fixed point theorem, the solution of Eq. (1.1) in Φ(I; m, M) is unique.
Given F and G ∈ Φ(I; m̃, M̃), consider the corresponding operators T and T̃ defined by (3.3). Under the corresponding conditions (3.1), (3.2) and (3.7), there are unique functions x and y in Φ(I; m, M) such that
Tx = x, \quad \widetilde{T}y = y.
Then
\begin{aligned} \|y - x\|_1 = \|\widetilde{T}y - Tx\|_1 &\le \|\widetilde{T}y - \widetilde{T}x\|_1 + \|\widetilde{T}x - Tx\|_1 \\ &\le \Gamma \|y - x\|_1 + \delta \|F - G\|_1, \end{aligned}
with Γ defined as in the proof of Theorem 3.3. Since Γ < 1 by (3.7), we have
\|y - x\|_1 \le \frac{\delta}{1 - \Gamma} \|F - G\|_1.
This proves the continuous dependence of the solution x on F, that is, its stability. The approximation of the unique solution by the sequence (x_k)_{k=0}^∞ follows from Theorem 3.3. This completes the proof.
Similarly, we have the following theorems corresponding to Theorems 3.2, 4.1 and 4.2. We omit their proofs.
Theorem 5.2
In addition to the assumptions of Theorem 3.2, suppose that (3.8) holds. Then Eq. (1.1) has a unique solution in Φ(I; m, M), and this solution depends continuously on the given function F. Furthermore, the unique solution can be obtained as the limit of the sequence (x_k)_{k=0}^∞ ⊂ Φ(I; m, M), where x0 ∈ Φ(I; m, M), xk+1 = Txk, k = 0, 1, . . ., and T is defined as in (3.3).
Theorem 5.3
In addition to the assumptions of Theorem 4.1 (Theorem 4.2), suppose that (3.7) (respectively (3.8)) holds. Then Eq. (1.1) has a unique solution in Ω(I; m, M, k, K), and this solution depends continuously on the given function F. Furthermore, the unique solution can be obtained as the limit of the sequence (x_k)_{k=0}^∞ ⊂ Ω(I; m, M, k, K), where x0 ∈ Ω(I; m, M, k, K), xk+1 = Txk, k = 0, 1, . . ., and T is defined as in (3.3).
6. Examples
In this section, some examples are provided to illustrate that the assumptions of our theorems can be satisfied simultaneously.
Example 6.1
Consider the following equation:
(6.1)
x'(t) = \frac{1}{2} x(t) + \frac{1}{2} x(x(t)) + t^2, \quad t \in \left[\frac{1}{8}, \frac{3}{8}\right],
where λ1 = λ2 = 1/2, F(t) = t^2, ξ = 1/4 and δ = 1/8; then F ∈ Φ([1/8, 3/8]; 1/4, 1). Taking m = 1/5 and M = 1, a simple calculation yields
\frac{m}{1 + \lambda_1 + \lambda_2} = \frac{1}{10} < |\xi| = \frac{1}{4} < \frac{1}{2} = \frac{1}{1 + \lambda_1 + \lambda_2}
and
0 < \delta = \frac{1}{8} < \frac{3}{20} = \min\left\{\frac{1}{4}, \frac{3}{20}\right\} = \min\left\{ \frac{1}{1 + \sum_{i=1}^{2} \lambda_i} - |\xi|,\ |\xi| - \frac{m}{1 + \sum_{i=1}^{2} \lambda_i} \right\}.
Thus (H1), (3.1) and (3.2) are satisfied. From Theorem 3.1, we know that there exists a monotonically increasing solution x in Φ([1/8, 3/8]; 1/5, 1).
Furthermore, taking k = 3/10 and K = 3, we have
0 < k = \frac{3}{10} < \tilde m + \sum_{i=1}^{2} \lambda_i m^i = \frac{37}{100} < \widetilde M + \sum_{i=1}^{2} \lambda_i M^i = 2 < 3 = K,
which means (4.1) is satisfied. Noting
\delta \left( {{\lambda_1} + {\lambda_2}\left( {1 + M} \right)} \right) = {3 \over {16}} < 1,
so (3.7) is satisfied. By Theorem 5.3, we know that the monotonically increasing convex solution is the unique one in Ω([1/8, 3/8]; 1/5, 1, 3/10, 3). For any x0 ∈ Ω([1/8, 3/8]; 1/5, 1, 3/10, 3), the unique solution of (6.1) can be approximated by the sequence xk+1 = Txk, k = 0, 1, 2, . . ., with T defined as in (3.3).
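The arithmetic above can be double-checked with exact rational arithmetic; the following self-contained Python sketch (an illustration only, not part of the example) verifies (3.1), (3.2), (3.7) and (4.1) for the data of Example 6.1.

```python
from fractions import Fraction as Fr

lam = [Fr(1, 2), Fr(1, 2)]                 # lambda_1, lambda_2
xi, delta, m, M = Fr(1, 4), Fr(1, 8), Fr(1, 5), Fr(1)
mt, Mt = Fr(1, 4), Fr(1)                   # tilde m, tilde M for F(t) = t^2 on [1/8, 3/8]
k, K = Fr(3, 10), Fr(3)

S = sum(lam)
# (3.1) and (3.2)
assert m / (1 + S) < abs(xi) < 1 / (1 + S)
assert 0 < delta < min(1 / (1 + S) - abs(xi), abs(xi) - m / (1 + S))
# (3.7): delta * sum_i sum_{k=1}^i lambda_i M^{k-1} < 1
Gamma = delta * sum(l * sum(M ** j for j in range(i + 1)) for i, l in enumerate(lam))
assert Gamma == Fr(3, 16) < 1
# (4.1): 0 < k <= tilde m + sum lambda_i m^i <= tilde M + sum lambda_i M^i <= K
lower = mt + sum(l * m ** (i + 1) for i, l in enumerate(lam))
upper = Mt + sum(l * M ** (i + 1) for i, l in enumerate(lam))
assert 0 < k <= lower == Fr(37, 100) <= upper == Fr(2) <= K
print("All hypotheses of Theorems 3.1, 4.1 and 5.3 hold for (6.1).")
```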
Next consider an example under condition (H2).
Example 6.2
Consider the following equation:
(6.2)
x'(t) = -\frac{1}{15} x(t) - \frac{2}{15} x(x(t)) + t^2, \quad t \in \left[\frac{1}{8}, \frac{3}{8}\right],
where λ1 = −1/15, λ2 = −2/15, F(t) = t^2, ξ = 1/4 and δ = 1/8; then F ∈ Φ([1/8, 3/8]; 1/4, 1). Taking m = 1/25 and M = 1, as in Example 6.1 we have
-1 < \lambda_1 + \lambda_2 = -\frac{1}{5} < 0, \qquad 0 < \frac{m}{1 + \lambda_1 + \lambda_2} = \frac{1}{20} < |\xi| = \frac{1}{4} < \frac{5}{6} = \frac{1}{1 - \lambda_1 - \lambda_2}
and
0 < \delta = \frac{1}{8} < \frac{2}{15} = \min\left\{\frac{7}{12}, \frac{2}{15}\right\} = \min\left\{ \frac{1}{1 - \lambda_1 - \lambda_2} - |\xi|,\ \frac{2|\xi| - m}{1 - \lambda_1 - \lambda_2} - |\xi| \right\}.
Thus (H2), (3.5) and (3.6) are satisfied. From Theorem 3.2, we know that there exists a monotonically increasing solution x in Φ([1/8, 3/8]; 1/25, 1).
Furthermore, taking k = 1/25 and K = 1, we have
0 < k = \frac{1}{25} < \tilde m + \sum_{i=1}^{2} \lambda_i M^i = \frac{1}{20} < \widetilde M + \sum_{i=1}^{2} \lambda_i m^i = \frac{9348}{9375} < 1 = K,
which means (4.2) is satisfied. Noting
- \delta \left( {{\lambda_1} + \left( {1 + M} \right){\lambda_2}} \right) = {1 \over {24}} < 1,
so (3.8) holds. By Theorem 5.3, we know that the monotonically increasing convex solution is the unique one in Ω([1/8, 3/8]; 1/25, 1, 1/25, 1). For any x0 ∈ Ω([1/8, 3/8]; 1/25, 1, 1/25, 1), the unique solution of (6.2) can be approximated by the sequence xk+1 = Txk, k = 0, 1, 2, . . ., with T defined as in (3.3).
Example 6.3
Consider the following equation:
(6.3)
x'(t) = \lambda x(t) + \frac{1}{2} x(x(t)) + t^2, \quad t \in \left[\frac{1}{8}, \frac{3}{8}\right],
where λ1 = λ is a parameter, λ2 = 1/2, F(t) = t^2, ξ = 1/4 and δ = 1/8. As in Example 6.1, F ∈ Φ([1/8, 3/8]; 1/4, 1). Since λ2 = 1/2 > 0, we must use condition (H1), and thus λ1 = λ > 0. In order to apply (3.1) and (3.2), we need
(6.4)
\frac{m}{1 + \lambda_1 + \lambda_2} = \frac{m}{\frac{3}{2} + \lambda} \le |\xi| = \frac{1}{4} \le \frac{1}{\frac{3}{2} + \lambda}
and
(6.5)
0 < \delta = \frac{1}{8} \le \min\left\{ \frac{1}{\frac{3}{2} + \lambda} - \frac{1}{4},\ \frac{1}{4} - \frac{m}{\frac{3}{2} + \lambda} \right\} = \min\left\{ \frac{1}{1 + \sum_{i=1}^{2} \lambda_i} - |\xi|,\ |\xi| - \frac{m}{1 + \sum_{i=1}^{2} \lambda_i} \right\}.
From (6.4) and (6.5), we have
4m \le \lambda + \frac{3}{2}, \qquad \lambda \le \frac{5}{2},
and
8m \le \lambda + \frac{3}{2}, \qquad \lambda \le \frac{7}{6}.
Thus
0 < 8m \le \lambda + \frac{3}{2}, \qquad 0 < \lambda \le \frac{7}{6},
i.e.,
(6.6)
0 < m \le \frac{1}{3}, \qquad 0 < \lambda \le \frac{7}{6}.
For the uniqueness, we need
\frac{1}{8}\left(\lambda + \frac{1}{2}(1 + M)\right) < 1,
which corresponds to (3.7). Thus we obtain
(6.7)
1 \le M < \frac{38}{3} \le 15 - 2\lambda.
So far, we see that there exists a unique monotonically increasing solution x in Φ([1/8, 3/8]; m, M) with 0 < m ≤ 1/3 and 1 ≤ M < 38/3.
Furthermore, in order to find the convex solution, we need
(6.8)
0 < k \le {1 \over 4} + \lambda m + {1 \over 2}{m^2} \le 1 + \lambda M + {1 \over 2}{M^2} \le K,
with 0 < m ≤ 1/3, 1 ≤ M < 38/3 and 0 < λ ≤ 7/6. By Theorem 5.3, we know that the monotonically increasing convex solution is the unique one in Ω([1/8, 3/8]; m, M, k, K), where m, M, k, K satisfy (6.6)–(6.8). For any x0 ∈ Ω([1/8, 3/8]; m, M, k, K), the unique solution of (6.3) can be approximated by the sequence xk+1 = Txk, k = 0, 1, 2, . . ., with T defined as in (3.3). This improves the estimates of Example 6.1.
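For a concrete choice of λ, the admissible constants in (6.6)–(6.8) are easy to tabulate; the following small Python sketch (an illustration with a hypothetical helper name, not part of the example) returns the largest admissible k and the smallest admissible K from (6.8).

```python
from fractions import Fraction as Fr

def example_6_3_bounds(lam, m, M):
    """For equation (6.3): given lambda in (0, 7/6], m in (0, 1/3] and 1 <= M < 38/3,
    return (k_max, K_min) so that (6.8) holds for any 0 < k <= k_max and K >= K_min."""
    assert 0 < lam <= Fr(7, 6) and 0 < m <= Fr(1, 3) and 1 <= M < Fr(38, 3)
    k_max = Fr(1, 4) + lam * m + Fr(1, 2) * m ** 2     # lower end of (6.8)
    K_min = 1 + lam * M + Fr(1, 2) * M ** 2            # upper end of (6.8)
    return k_max, K_min

# lambda = 1/2, m = 1/5, M = 1 recovers the constants of Example 6.1:
print(example_6_3_bounds(Fr(1, 2), Fr(1, 5), Fr(1)))   # k_max = 37/100, K_min = 2
```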