1 Introduction

Let \(I_{a}^{b}~\text{ denote }~ [a,b]\subset {\mathbb {R}}\). Assume \({\mathcal {B}}(I_{a}^{b}):=\{h:I_{a}^{b}\rightarrow {\mathbb {R}}|~h~ \text{ is } \text{ bounded } \text{ on }~I_{a}^{b}\}\) with the uniform norm, \(\Vert {.}\Vert _{\infty }\), and \({\mathcal {C}}(I_{a}^{b}):=\{h\in {\mathcal {B}}(I_{a}^{b})|~h~ \text{ is } \text{ continuous } \text{ in }~ I_{a}^{b} \}.\) For \(g\in {\mathcal {C}}(I_{0}^{1})\), Bernstein (1912) provided an easy proof of the fundamental Weierstrass approximation theorem by constructing a sequence of polynomials as:

$$\begin{aligned} {{\mathfrak {B}}_n}\left( {g;y} \right) = \sum \limits _{\nu = 0}^n {{p_{n,\nu }}\left( y \right) g\left( {\frac{\nu }{n}} \right) }, \end{aligned}$$
(1.1)

where \({p_{n,\nu }}\left( y \right) : = \left( {\begin{array}{*{20}{c}} n\\ \nu \end{array}} \right) {y^{\nu }}{\left( {1 - y} \right) ^{n - {\nu }}}\), and \(y\in I_{0}^{1}\). In 1937, Chlodowsky (1937) introduced a generalization of Bernstein polynomials to the infinite interval \([0,\infty )\). Subsequently, Szász (1959) proposed another modification of the operators (1.1) to approximate functions defined on the interval \([0,\infty )\). In 1969, Stancu (1969) defined a generalization of Bernstein polynomials as

$$\begin{aligned} {{\mathfrak {B}}_n^{\alpha ,\beta }}\left( {g;y}\right) = \sum _{\nu =0}^n{p_{n,\nu }}\left( y \right) g \bigg (\frac{\nu +\alpha }{n+\beta }\bigg ), \forall \; g\in {\mathcal {C}}(I_{0}^{1}) \end{aligned}$$
(1.2)

where \(0\le \alpha \le \beta\). If \(\alpha =\beta =0\), then operators (1.2) reduce to (1.1).
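For readers who wish to experiment numerically, the following minimal Python sketch (with hypothetical helper names, not part of the original constructions) evaluates the Bernstein polynomials (1.1) and the Stancu operators (1.2) at a point \(y\in I_{0}^{1}\); setting \(\alpha =\beta =0\) recovers (1.1).

```python
from math import comb

def bernstein(g, n, y):
    # Bernstein polynomial (1.1) of g evaluated at y in [0, 1].
    return sum(comb(n, v) * y**v * (1 - y)**(n - v) * g(v / n) for v in range(n + 1))

def bernstein_stancu(g, n, y, alpha=0.0, beta=0.0):
    # Stancu operators (1.2); alpha = beta = 0 recovers (1.1).
    return sum(comb(n, v) * y**v * (1 - y)**(n - v) * g((v + alpha) / (n + beta))
               for v in range(n + 1))

# Both operators approximate g(t) = t**2 at y = 0.3 as n grows.
g = lambda t: t**2
print(bernstein(g, 50, 0.3), bernstein_stancu(g, 50, 0.3, alpha=1, beta=2))
```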

For \(g\in {\mathcal {C}}(I_{0}^{1})\), Cao (1997) generalized the Bernstein polynomials with the aid of a sequence \((t_n)\) of natural numbers as:

$$\begin{aligned} {\mathfrak {C}}_{n,{t_n}}\left( {g(s),y}\right) : = \frac{1}{{{t_n}}}\sum \limits _{\nu = 0}^n {\left( {\sum \limits _{j = 0}^{{t_{n} - 1}} {g\left( {\frac{{\nu + j}}{{n + {t_n} - 1}}} \right) } } \right) } {p_{n,\nu }}\left( y \right) ,~~\forall y\in I_{0}^{1}. \end{aligned}$$
(1.3)

It is evident that if \(t_n=1,\forall \;n\in {\mathbb {N}}\), then the operators (1.3) reduce to (1.1). The author (Cao 1997) gave a characterization of the uniform convergence of the operators (1.3) to g, for all \(g\in {\mathcal {C}}(I_{0}^{1})\), and also determined the degree of approximation via the modulus of continuity.
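A similar numerical sketch (again with hypothetical names) evaluates the generalized Bernstein polynomials (1.3); taking \(t_n=1\) recovers the classical Bernstein polynomial, as noted above.

```python
from math import comb

def cao_operator(g, n, t_n, y):
    # Generalized Bernstein polynomial (1.3) of g evaluated at y in [0, 1].
    total = 0.0
    for v in range(n + 1):
        p_nv = comb(n, v) * y**v * (1 - y)**(n - v)
        inner = sum(g((v + j) / (n + t_n - 1)) for j in range(t_n))
        total += p_nv * inner
    return total / t_n

# With t_n = 1 the operator coincides with the Bernstein polynomial (1.1).
g = lambda t: abs(t - 0.5)
print(cao_operator(g, 40, 1, 0.25), cao_operator(g, 40, 5, 0.25))
```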

In order to approximate the Lebesgue integrable functions on the interval \(I_{0}^{1}\), in 1930 Kantorovich (1930) introduced an integral modification of the Bernstein operators. Subsequently in 1967, Durrmeyer (1967) introduced another kind of integral modification of the Bernstein polynomials on the interval \(I_{0}^{1}\), which was extensively studied by Derriennic (1981). Later, researchers defined and studied analogous integral modifications of several other sequences of positive linear operators, for instance see (Aral et al. 2013; Gupta and Agarwal 2014) and the references therein. Gadjiev and Ghorbanalizadeh (2010) presented a new generalization of Bernstein-Stancu type polynomials for one and two variables and studied the order of convergence. Very recently, Aslan (2022) investigated the rate of convergence by Szász–Mirakjan Stancu–Kantorovich operators based on Bézier basis functions with shape parameter \(\lambda \in [-1,1]\), by means of the modulus of continuity, Peetre’s K-functional and Voronovskaya type asymptotic result. Subsequently, Aslan (2022) discussed the degree of approximation for Szász–Mirakjan–Durrmeyer operators with shape parameter \(\lambda \in [-1,1]\) with the aid of the usual modulus of continuity and Peetre’s K-functional. Furthermore in the same paper, the author established some results in weighted approximation, e.g. Korovkin type approximation theorem and an asymptotic result of Voronovskaya type.

In recent years, the study of GBS (Generalized Boolean Sum) operators associated with positive linear operators has been an active area of research in the field of approximation theory. The notions of B-continuous (Bögel continuous) and B-differentiable (Bögel differentiable) functions were introduced by Bögel (1934). Dobrescu and Matei (1966) showed that any B-continuous function on a bounded interval can be uniformly approximated by the Boolean sum of bivariate Bernstein polynomials. Badea et al. (1986) presented the Korovkin type test function theorem for B-continuous functions. Badea et al. (1988) gave a quantitative variant of the Korovkin type theorem for B-continuous functions, while Badea and Cottin (1991) proved the Shisha-Mond type theorem for the approximation of B-continuous functions. Barbosu et al. (2017) determined the approximation degree for GBS operators of q-Durrmeyer–Stancu type by means of the mixed modulus of smoothness. Very recently, Aslan and Mursaleen (2022) examined the weighted approximation properties of certain bivariate Chlodowsky type Szász–Durrmeyer operators by using the partial and complete moduli of continuity, the weighted modulus of continuity and Peetre’s K-functional. Furthermore, in the same paper, the authors constructed the associated GBS operators and determined the degree of approximation for B-continuous functions by means of the mixed modulus of continuity. Agrawal et al. (2022) investigated some direct approximation theorems for a q-analogue of the modified \(\alpha\)-Bernstein operators introduced by Kajla and Acar (2019) in the univariate and bivariate cases, and also examined the convergence behavior of the associated GBS operators. For a detailed account of the research in this direction, the reader may refer to Gupta et al. (2018) and the references therein.

Statistical convergence is a generalization of the usual notion of convergence. The concept of statistical convergence was given by Zygmund (1979). Later, it was formalized by Steinhaus (1951) and Fast (1951). In the field of approximation theory, the concept of statistical convergence was introduced by Gadjiev and Orhan (2002) in 2002. They proved the Bohman-Korovkin type theorem for the statistical convergence of positive linear operators. In 1993, Kolk (1993) extended the notion of statistical convergence to A-statistical convergence with the aid of a non-negative regular matrix. Karakaya and Chishti (2009) introduced the concept of weighted statistical convergence, which was modified by Mursaleen et al. (2012) in 2012. Srivastava et al. (2018) introduced a certain notion of deferred weighted A-statistical convergence and investigated the Korovkin type approximation theorem and the rate of convergence for a sequence of positive linear operators based upon the (p,q)-Lagrange polynomials. Agrawal et al. (2022) studied the Korovkin type approximation theorems and the rate of convergence with the aid of the modulus of continuity by using deferred statistical convergence of order \(\alpha =1\), as defined in Et et al. (2020), and the power series summability technique for an operator based on q-Laguerre polynomials introduced by Özarslan (2007). Very recently, Agrawal et al. (2023) established two general non-trivial Korovkin-type approximation results for positive linear operators by using the deferred type statistical convergence defined by Srivastava et al. (2018) and the power summability method, and showed the applicability of these results to an operator based on multivariate q-Lagrange-Hermite polynomials.

Motivated by the above studies, the purpose of the present article is to characterize the deferred statistical convergence of order \(\alpha\), \((0<\alpha \le 1)\), of any sequence of positive linear operators \({\mathfrak {L}}_n\left( {f} \right)\) to \(f\), for all \(f\in {\mathcal {C}}(I_{a}^{b})\), and also to obtain the rate of deferred statistical convergence. We show the application of the Korovkin type result to the operators (1.3). Finally, we extend our study to operators defined on \({\mathcal {C}}[0,\infty )\).

2 Preliminaries

Assume \(e_k(s)=s^k\), \(\forall ~k\in {\mathbb {N}}\cup {\{0\}}\) and \(s\in I_{0}^{1}\).

Lemma 1

Cao (1997) For the operators (1.3), for all \(y\in I_{0}^{1}\), we have

  1. (i)

    \({\mathfrak {C}}_{n,{t_n}}\left( {e_0,y} \right) = 1;\)

  2. (ii)

    \({\mathfrak {C}}_{n,{t_n}}\left( {e_1,y} \right) = \frac{{ny}}{{n + {t_n} - 1}} + \frac{{{t_{n} - 1}}}{{2\left( {n + {t_n} - 1} \right) }};\)

  3. (iii)

    \({\mathfrak {C}}_{n,{t_n}}\left( {{e_2},y} \right) = \frac{{n\left( {n - 1} \right) {y^2}}}{{{{\left( {n + {t_n} - 1} \right) }^2}}} + \frac{{n{t_n}y}}{{{{\left( {n + {t_n} - 1} \right) }^2}}} + \frac{{\left( {{t_n} - 1} \right) \left( {2{t_n} - 1} \right) }}{{6{{\left( {n + {t_n} - 1} \right) }^2}}}.\)

Throughout our further discussion, we assume that \(\mathop {\lim }\limits _{n \rightarrow \infty } \dfrac{t_n}{n}=0\).
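The moment formulas of Lemma 1 can be checked numerically. The sketch below (hypothetical helper names, reusing the direct evaluation of (1.3) from the previous sketch) compares the operator applied to \(e_0,e_1,e_2\) with the closed forms (i)–(iii).

```python
from math import comb

def cao_operator(g, n, t_n, y):
    # Direct evaluation of (1.3), as in the previous sketch.
    return sum(comb(n, v) * y**v * (1 - y)**(n - v) *
               sum(g((v + j) / (n + t_n - 1)) for j in range(t_n))
               for v in range(n + 1)) / t_n

n, t_n, y = 30, 4, 0.7
m = n + t_n - 1
# Closed forms from Lemma 1 (ii) and (iii).
e1_closed = n * y / m + (t_n - 1) / (2 * m)
e2_closed = (n * (n - 1) * y**2 / m**2 + n * t_n * y / m**2
             + (t_n - 1) * (2 * t_n - 1) / (6 * m**2))
print(abs(cao_operator(lambda s: 1.0, n, t_n, y) - 1.0))         # (i):   ~0
print(abs(cao_operator(lambda s: s, n, t_n, y) - e1_closed))     # (ii):  ~0
print(abs(cao_operator(lambda s: s * s, n, t_n, y) - e2_closed)) # (iii): ~0
```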

Let \(J\subset {\mathbb {N}}\), and \(\chi _J\) be the characteristic function of J, then the natural density of J is defined by

$$\begin{aligned} \delta (J)=\lim _{n\rightarrow \infty }\dfrac{1}{n}\sum _{j=1}^{n}\chi _{J}(j), \end{aligned}$$

provided the limit exists.

Any real or complex sequence \(\langle y_{n}\rangle \) is called statistically convergent to \(y_0\) if, for each \(\epsilon >0\), the density \(\delta \left( \{\nu \in {\mathbb {N}}:|y_{\nu }-y_0|\ge \epsilon \}\right) =0.\) Evidently, every convergent sequence is statistically convergent but the converse need not be true, for example, let

$$\begin{aligned} \langle y_n\rangle = {\left\{ \begin{array}{ll} 1, &{}\quad \text {if}\,\, n=p^3, p\in {\mathbb {N}},\\ 0, &{}\quad \text {otherwise} \end{array}\right. }. \end{aligned}$$

Evidently, \(\langle y_n\rangle\) does not converge in the ordinary sense but it converges statistically to 0.
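A quick numerical check of this example (a hypothetical Python sketch, not part of the original argument) estimates the natural density of the set \(\{n:y_n=1\}\), i.e. the set of perfect cubes, which behaves like \(N^{1/3}/N\rightarrow 0\).

```python
# Empirical natural density of J = {n : y_n = 1} = {perfect cubes}.
def density_of_cubes(N):
    cubes = sum(1 for n in range(1, N + 1) if round(n ** (1 / 3)) ** 3 == n)
    return cubes / N

for N in (10**3, 10**5, 10**6):
    print(N, density_of_cubes(N))   # roughly N**(1/3) / N, tending to 0
```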

Let the sequences \(u=\langle u_{n}\rangle\) and \(v=\langle v_{n}\rangle\) in \({\mathbb {N}}\cup {\{0\}}\), satisfy

$$\begin{aligned} (i)\; u_n<v_n,~\forall ~n\in {\mathbb {N}},\; \text{ and }~ (ii) \lim _{n\rightarrow \infty }v_n=\infty . \end{aligned}$$
(2.1)

Then the deferred Cesàro mean of \(\langle y_{n}\rangle\) is given by

$$\begin{aligned} \left( D_{u,v}y\right) _n=\dfrac{1}{v_n-u_n}\sum _{k=u_n+1}^{v_n}y_k. \end{aligned}$$

Further, the deferred density of the set J is defined by

$$\begin{aligned} \delta _{u,v}(J)=\lim _{n\rightarrow \infty }\dfrac{1}{v_n-u_n}\left| \{k\in J:u_n<k\le v_n\}\right| , \end{aligned}$$

where the vertical bars represent the cardinality.

A real sequence \(\langle y_{n}\rangle \) is called deferred statistically convergent to \(y_0\), if for every \(\epsilon >0\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\dfrac{1}{v_n-u_n}\left| \{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|y_{\nu }-y_0|\ge \epsilon \}\right| =0. \end{aligned}$$

We denote this convergence by writing \(stat_{u,v}^{D}-\lim y_n=y_0\). It is evident that for \(u_n=0\) and \(v_n=n\), the deferred statistical convergence reduces to the statistical convergence (Kucükaslan and Yılmaztürk 2016).

Following (Et et al. 2020), the sequence \(\langle y_{n}\rangle \) is said to be deferred statistically convergent of order \(\alpha \), \(0<\alpha \le 1\), to \(y_0\) if, for every \(\epsilon >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|y_{\nu }-y_0|\ge \epsilon \}\right| =0. \end{aligned}$$

This convergence is represented by \(stat_{u,v}^{D,\alpha }-\lim y_n=y_0\). It is obvious that for \(\alpha =1\) the deferred statistical convergence of order \(\alpha \) reduces to the deferred statistical convergence, and in the particular case \(u_n=0,~v_n=n\) and \(\alpha =1\), it reduces to the statistical convergence.
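The defining quantity of deferred statistical convergence of order \(\alpha \) is easy to compute for concrete choices of \(\langle u_n\rangle ,\langle v_n\rangle \) and \(\alpha \). The sketch below (hypothetical names) evaluates it for the cube sequence introduced above with \(u_n=0,~v_n=n\) and \(\alpha =1\).

```python
def deferred_count(y, y0, eps, u_n, v_n, alpha=1.0):
    # (1 / (v_n - u_n)**alpha) * #{k : u_n < k <= v_n and |y(k) - y0| >= eps}
    bad = sum(1 for k in range(u_n + 1, v_n + 1) if abs(y(k) - y0) >= eps)
    return bad / (v_n - u_n) ** alpha

# The cube sequence from above, with u_n = 0, v_n = n and alpha = 1,
# i.e. ordinary statistical convergence as a special case.
y = lambda k: 1.0 if round(k ** (1 / 3)) ** 3 == k else 0.0
for n in (10**2, 10**4, 10**6):
    print(n, deferred_count(y, 0.0, 0.5, 0, n, alpha=1.0))  # tends to 0
```

For the same sequence and \(\alpha =1/3\), the count of bad indices in \((0,n]\) is of order \(n^{1/3}\), so the quantity stays close to 1 rather than vanishing; this illustrates that convergence of order \(\alpha \) is a genuinely stronger requirement for smaller \(\alpha \).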

The sequence \(\langle y_n\rangle\) is called deferred statistically convergent of order \(\alpha\) to the number \(y_0\) with the rate \(o(\gamma _n)\) (Duman and Orhan 2005; Duman et al. 2003) if, for every \(\epsilon >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{\gamma _n}\bigg \{\frac{1}{(v_n-u_n)^{\alpha }}\left| \{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|y_{\nu }-y_0|\ge \epsilon \}\right| \bigg \}= & {} 0. \end{aligned}$$
(2.2)

In this case, we write \(y_n-y_0 = stat_{u,v}^{D,\alpha }-o(\gamma _n)\).

The following lemma is crucial to determine the rate of deferred statistical convergence of order \(\alpha\).

Lemma 2

Let \(\langle a_n\rangle\) and \(\langle b_n\rangle\) be monotonically decreasing sequences in \((0,\infty )\). Further, let \(\langle x_n\rangle\) and \(\langle y_n\rangle\) be two sequences such that

$$\begin{aligned} x_n-x_0=stat_{u,v}^{D,\alpha }-o(a_n),\\ y_n-y_0=stat_{u,v}^{D,\alpha }-o(b_n), \end{aligned}$$

respectively. Then the following relations hold true:

  1. (i)

    \((x_n-x_0)\pm (y_n-y_0)=stat_{u,v}^{D,\alpha }-o(c_n);\)

  2. (ii)

    \( (x_n-x_0)(y_n-y_0)=stat_{u,v}^{D,\alpha }-o(c_n);\)

  3. (iii)

    \( \mu (x_n-x_0)=stat_{u,v}^{D,\alpha }-o(a_n),\) (for any real scalar \(\mu\));

where \(c_n=\max \{a_n,b_n\}\), for all \(n\in {\mathbb {N}}\).

Proof

For an arbitrary \(\epsilon >0\), consider the sets:

$$\begin{aligned}{} & {} U=\bigg \{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|(x_{\nu }-x_0)\pm (y_{\nu }-y_0)|\ge \epsilon \bigg \},\\{} & {} U_1=\bigg \{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|x_{\nu }-x_0|\ge \frac{\epsilon }{2}\bigg \}, \end{aligned}$$

and

$$\begin{aligned} U_2=\left\{ \nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|y_{\nu }-y_0|\ge \frac{\epsilon }{2}\right\} . \end{aligned}$$

Then clearly, we have

$$\begin{aligned} U\subseteq U_1\cup U_2. \end{aligned}$$

Now since \(c_n=\max \{a_n,b_n\}\), we obtain

$$\begin{aligned} \frac{|U|}{c_n\left( v_n-u_n\right) ^{\alpha }}\le \frac{|U_1|}{a_n\left( v_n-u_n\right) ^{\alpha }} +\frac{|U_2|}{b_n\left( v_n-u_n\right) ^{\alpha }}. \end{aligned}$$

Hence, by our hypotheses we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{|U|}{c_n\left( v_n-u_n\right) ^{\alpha }}=0, \end{aligned}$$

which establishes assertion (i).

To prove (ii), for a given \(\epsilon >0\), define the sets:

$$\begin{aligned}{} & {} W=\{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|(x_{\nu }-x_0)(y_{\nu }-y_0)|\ge \epsilon \},\\{} & {} W_1=\{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|x_{\nu }-x_0|\ge \sqrt{\epsilon }\}, \end{aligned}$$

and

$$\begin{aligned} W_2=\{\nu \in {\mathbb {N}}:u_n<\nu \le v_n,~\text {and}~|y_{\nu }-y_0|\ge \sqrt{\epsilon }\}. \end{aligned}$$

Obviously, \(W\subseteq W_1\cup W_2.\) Now proceeding as in the proof of assertion (i), we obtain the assertion (ii).

The assertion (iii) follows similarly, hence we omit the details. \(\square\)

3 Main Results

In the following theorem, we give a characterization of the deferred statistical convergence of order \(\alpha\), \(0<\alpha \le 1\) for any sequence of positive linear operators \({\mathfrak {L}}_{n}(h)\), to h, for all \(h\in {\mathcal {C}}(I_{a}^{b})\). We assume that \(h_j(s) = s^j, \forall ~ j=0,1,2,\) and \(s\in I_{a}^{b}.\) Also, let

$$\begin{aligned} \beta= & {} \sup _{y\in I_{a}^{b}} |y|\quad \text {and}\quad \Vert {{\mathfrak {L}}_n((s-y)^2)}\Vert _{\infty }=\zeta _{n}^{(2)}. \end{aligned}$$

Theorem 1

Let \({\mathfrak {L}}_n: {\mathcal {C}}(I_{a}^{b}) \rightarrow {\mathcal {B}}(I_{a}^{b})\) be a sequence of positive linear operators, and let \(\langle u_n\rangle\) and \(\langle v_n\rangle\) be sequences satisfying (2.1). Then,

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h_i)-h_i}\Vert _{\infty } = 0, \quad ~ \text {for}~ i=0,1,2; \end{aligned}$$
(3.1)

if and only if

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty } = 0, \quad ~ \forall ~ h\in {\mathcal {C}}(I_{a}^{b}). \end{aligned}$$
(3.2)

Proof

Suppose \(h\in {\mathcal {C}}(I_{a}^{b})\), then

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty } = 0\Rightarrow stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h_i)-h_i}\Vert _{\infty } = 0,\quad \text {for}~ i=0,1,2 \end{aligned}$$

is trivial. In order to prove the converse part, let us suppose that

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h_i)-h_i} \Vert _{\infty } = 0,\quad \text {for}~ i=0,1,2. \end{aligned}$$

Since \(h\in {\mathcal {C}}(I_{a}^{b}),\) we have

$$\begin{aligned} | h(s)-h(y)|\le & {} 2\Vert {h}\Vert _{\infty }, \quad \forall ~ s,y\in I_{a}^{b}. \end{aligned}$$

Further, h is uniformly continuous on \(I_{a}^{b}\), therefore for a given \(\epsilon > 0,\) \(\exists\) a \(\delta >0,\) such that

\(|h(s)-h(y)| < \epsilon\), whenever \(|s-y| < \delta \), \(s,y\in I_{a}^{b}\). For \(|s-y| \ge \delta ,\) we have

$$\begin{aligned} |h(s)-h(y)|\le & {} 2\Vert {h}\Vert _{\infty } \frac{(s-y)^2}{\delta ^2}. \end{aligned}$$

Thus,

$$\begin{aligned} |h(s)-h(y)|< & {} \epsilon + 2\Vert {h}\Vert _{\infty } \frac{(s-y)^2}{\delta ^2} \quad \forall ~ s,y \in I_{a}^{b}. \end{aligned}$$
(3.3)

Since \({\mathfrak {L}}_n\) is a monotone operator, we get

$$\begin{aligned} |{\mathfrak {L}}_n(h(s);y)-h(y)| \le {\mathfrak {L}}_n(|h(s)-h(y)|;y) + |h(y)||{\mathfrak {L}}_n(h_0(s);y)-h_0(y)|, \end{aligned}$$

hence using (3.3), we obtain

$$\begin{aligned}{} & {} |{\mathfrak {L}}_n(h(s);y)-h(y)| < {\mathfrak {L}}_n\big (\epsilon + 2\Vert {h}\Vert _{\infty } \frac{(s-y)^2}{\delta ^2};y\big ) \\{} & {} \qquad +|h(y)||{\mathfrak {L}}_{n}(h_0(s);y)-h_0(y)|\\{} & {} \quad = \epsilon + \epsilon \big ( {\mathfrak {L}}_n(h_0(s);y)-h_0(y)\big ) \\{} & {} \qquad + \frac{2\Vert {h}\Vert _{\infty }}{\delta ^2} \bigg [\big ({\mathfrak {L}}_n(h_2(s);y)-h_2(y)\big )\\{} & {} \qquad -2y\big ({\mathfrak {L}}_n(h_1(s);y)-h_1(y)\big )+y^2 \big ({\mathfrak {L}}_n(h_0(s);y)-h_0(y)\big )\bigg ]\\{} & {} \qquad +|h(y)||{\mathfrak {L}}_{n}(h_0(s);y)-h_0(y)|. \end{aligned}$$

Consequently,

$$\begin{aligned}{} & {} \Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty } < \epsilon + \bigg (\Vert {h}\Vert _{\infty }+\frac{2\Vert {h}\Vert _{\infty }{\beta }^2}{\delta ^2} +\epsilon \bigg )\Vert {\mathfrak {L_n}(h_0)-h_0}\Vert _{\infty } \nonumber \\{} & {} \quad +\frac{2\Vert {h}\Vert _{\infty }}{\delta ^2} \Vert {\mathfrak {L_n}(h_2)-h_2}\Vert _{\infty } +\frac{4\Vert {h}\Vert _{\infty }\beta }{\delta ^2} \Vert {\mathfrak {L_n}(h_1)-h_1}\Vert _{\infty }. \end{aligned}$$
(3.4)

Now, for any \(\epsilon ^{'} > 0\), let us choose the above \(\epsilon \), such that \(0<\epsilon <\epsilon ^{'}\), and consider the following sets:

$$\begin{aligned} W_1= & {} \bigg \{n\in {\mathbb {N}}: \Vert {\mathfrak {L_n}(h)-h}\Vert _{\infty } \ge \epsilon ^{'}\bigg \};\\ W_2= & {} \bigg \{n\in {\mathbb {N}}: \bigg (\Vert {h}\Vert _{\infty }+\frac{2\Vert {h}\Vert _{\infty }{\beta }^2}{\delta ^2} +\epsilon \bigg )\\{} & {} \Vert {\mathfrak {L_n}(h_0)-h_0}\Vert _{\infty }\ge \frac{\epsilon ^{'}-\epsilon }{3} \bigg \};\\ W_3= & {} \bigg \{n\in {\mathbb {N}}: \frac{2\Vert {h}\Vert _{\infty }}{\delta ^2} \Vert {\mathfrak {L_n}(h_2)-h_2}\Vert _{\infty }\ge \frac{\epsilon ^{'}-\epsilon }{3}\bigg \};\\ W_4= & {} \bigg \{n \in {\mathbb {N}}: \frac{4\Vert {h}\Vert _{\infty }\beta }{\delta ^2}\Vert {\mathfrak {L_n}(h_1)-h_1}\Vert _{\infty } \ge \frac{\epsilon ^{'}-\epsilon }{3}\bigg \}, \end{aligned}$$

then from (3.4) it is clear that, \(W_1\subseteq \cup _{j=2}^{4} W_j\), and hence

$$\begin{aligned} \dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in W_1:u_n<\nu \le v_n\}\right|\le & {} \sum _{j=2}^{4}\dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in W_j:u_n<\nu \le v_n\}\right| . \end{aligned}$$

By our assumption, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in W_j:u_n<\nu \le v_n\}\right| = 0, \end{aligned}$$

for \(j=2,3,4\). Therefore,

$$\begin{aligned} \lim _{n \rightarrow \infty } \dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in W_1:u_n<\nu \le v_n\}\right| = 0, \end{aligned}$$

which implies that

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty } = 0. \end{aligned}$$

\(\square \)

Remark 1

Taking \(u_n=0,~ v_n=n, ~\forall n\in {\mathbb {N}}~\text{ and }~ \alpha =1\), Theorem 1 turns into the Korovkin type result of Gadjiev and Orhan (2002).

In the papers (Duman et al. 2003) and Duman (2008), the authors discussed the rates of A-statistical convergence of positive linear operators and positive linear convolution operators, respectively. For more research in this area, one may refer to Agrawal et al. (2021), Baxhaku et al. (2021), Duman and Orhan (2008) and Jena et al. (2018). Inspired by these studies, we examine the rate of deferred statistical convergence of order \(\alpha \) of the operators \({\mathfrak {L}}_n(h)\) to \(h\), for all \(h\in {\mathcal {C}}(I_{a}^{b})\).

Theorem 2

Let \(h\in {\mathcal {C}}(I_{a}^{b})\), and \(\langle \gamma _{n}\rangle\) be a monotonically decreasing sequence in \((0,\infty )\). If

$$\begin{aligned} \{\Vert {h}\Vert _{\infty }+\omega (h;\sqrt{\zeta _{n}^{(2)}})\}\Vert {{\mathfrak {L}}_n(h_0)-h_0}\Vert _{\infty } + 2~ \omega (h;\sqrt{\zeta _{n}^{(2)}})= & {} stat_{u,v}^{D,\alpha }- o(\gamma _n),\quad \text {as}~n\rightarrow \infty , \end{aligned}$$
(3.5)

then,

$$\begin{aligned} \Vert {\mathfrak {L_n}(h)-h}\Vert _{\infty }= & {} stat_{u,v}^{D,\alpha }-o(\gamma _n),\quad \text {as}~n\rightarrow \infty . \end{aligned}$$

Proof

For any \(h \in {\mathcal {C}}(I_{a}^{b})\) and \(\delta >0\), we may write

$$\begin{aligned} |h(s)-h(y)|\le & {} \left\{ 1+\frac{(s-y)^2}{\delta ^2}\right\} \omega (h;\delta ) \quad \forall ~s,y\in I_{a}^{b}. \end{aligned}$$

Hence,

$$\begin{aligned} |{\mathfrak {L}}_n(h(s);y)-h(y)|\le & {} |h(y)||{\mathfrak {L}}_n(h_0(s);y)-h_0(y)| + \omega (h;\delta ) + \omega (h;\delta )|{\mathfrak {L}}_n(h_0(s);y)-h_0(y)|\\{} & {} +~ \frac{\omega (h;\delta )}{\delta ^2} {\mathfrak {L}}_n((s-y)^2;y). \end{aligned}$$

Consequently,

$$\begin{aligned}{} & {} \Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty } \le \{\Vert {h}\Vert _{\infty }+\omega (h;\delta )\}\Vert {{\mathfrak {L}}_n(h_0)-h_0}\Vert _{\infty } \nonumber \\{} & {} \quad + \omega (h;\delta ) + \frac{\omega (h;\delta )}{\delta ^2}\Vert {{\mathfrak {L}}_n((s-y)^2)\Vert _{\infty }}. \end{aligned}$$
(3.6)

Taking \(\delta = \sqrt{\Vert {{\mathfrak {L}}_n((s-y)^2)\Vert _{\infty }}}=\sqrt{\zeta _{n}^{(2)}}\), we get

$$\begin{aligned} \Vert {{\mathfrak {L}}_n(h)-h}\Vert _{\infty }\le & {} \{\Vert {h}\Vert _{\infty }+\omega (h;\sqrt{\zeta _{n}^{(2)}})\} \Vert {{\mathfrak {L}}_n(h_0)-h_0}\Vert _{\infty } \\{} & {} \quad + 2~ \omega (h;\sqrt{\zeta _{n}^{(2)}}). \end{aligned}$$

Considering (3.5), the proof is completed. \(\square \)

4 Application

We illustrate the application of Theorem 1 to the generalized Bernstein polynomials (1.3) and show that for \(h\in {\mathcal {C}}\left( I_{0}^{1}\right) \), the operators \({\mathfrak {C}}_{n,t_n}(h)\) converge to \(h\) deferred statistically of order \(\alpha ,~0<\alpha \le 1\).

Using Theorem 1, we note that the sequence \(\langle {\mathfrak {C}}_{n,t_n}{\left( h\right) }\rangle\) converges deferred statistically with order \(\alpha \) to h, for all \(h\in {\mathcal {C}}\left( I_{0}^{1}\right) \), if the following holds true:

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim _{n}\Vert {{\mathfrak {C}}_{n,t_n} {\left( h_j\right) }-h_j}\Vert _{\infty } = 0, \quad ~ \text {for}~ j=0,1,2. \end{aligned}$$
(4.1)

Using Lemma 1, the condition (4.1) holds trivially for \(j=0\). For \(j=1\), again by Lemma 1, one has

$$\begin{aligned} \Vert {{\mathfrak {C}}_{n,t_n}\left( h_1\right) -h_1}\Vert _{\infty }\le & {} \frac{{\left( {{t_n} - 1} \right) }}{{2\left( {n + {t_n} - 1} \right) }}. \end{aligned}$$
(4.2)

For a given \(\epsilon >0\), we define the sets:

$$\begin{aligned} L_1= & {} \{n\in {\mathbb {N}}: \Vert {{\mathfrak {C}}_{n,t_n}\left( h_1\right) -h_1}\Vert _{\infty }\ge \epsilon \};\\ L_2= & {} \{n\in {\mathbb {N}}:\frac{{\left( {{t_n} - 1} \right) }}{{2\left( {n + {t_n} - 1} \right) }}\ge \epsilon \}, \end{aligned}$$

then from (4.2), we have \(L_1 \subseteq L_2\) and hence

$$\begin{aligned} \lim _{n \rightarrow \infty }\dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in L_1:u_n<\nu \le v_n\}\right|\le & {} \lim _{n \rightarrow \infty } \dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in L_2:u_n<\nu \le v_n\}\right| .\,\,\,\,\,\,\, \end{aligned}$$
(4.3)

Using our assumption \(\lim _{n \rightarrow \infty }\dfrac{t_n}{n}=0,\) it follows that \(\lim _{n \rightarrow \infty }\frac{{\left( {{t_n} - 1} \right) }}{{2\left( {n + {t_n} - 1} \right) }}=0.\) Consequently, \(stat_{u,v}^{D,\alpha }-\underset{n}{\lim }\frac{{\left( {{t_n} - 1} \right) }}{{2\left( {n + {t_n} - 1} \right) }} = 0.\) Thus from (4.3), we obtain \(\lim \limits _{n \rightarrow \infty }\dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in L_1:u_n<\nu \le v_n\}\right| =0,\) which implies that the condition (4.1) holds true for \(j=1\).

For the case \(j=2\), again applying Lemma 1, we are led to

$$\begin{aligned} \Vert {{\mathfrak {C}}_{n,t_n}\left( h_2\right) -h_2}\Vert _{\infty }\le & {} \frac{{\left( {{t_n} - 1} \right) }}{{\left( {n + {t_n} - 1} \right) }}+\frac{{\left( {{t_n} - 1} \right) }{\left( {2{t_n} - 1} \right) }}{6{\left( {n + {t_n} - 1} \right) ^2}}. \end{aligned}$$
(4.4)

Again, for an arbitrary \(\epsilon >0\), define the sets:

$$\begin{aligned} E_1= & {} \{n\in {\mathbb {N}}: \Vert {{\mathfrak {C}}_{n,t_n}\left( h_2\right) -h_2}\Vert _{\infty }\ge \epsilon \};\\ E_2= & {} \{n\in {\mathbb {N}}:\frac{{\left( {{t_n} - 1} \right) }}{{\left( {n + {t_n} - 1} \right) }}\ge \dfrac{\epsilon }{2} \};\\ E_3= & {} \{n\in {\mathbb {N}}:\frac{{\left( {{t_n} - 1} \right) }{\left( {2{t_n} - 1} \right) }}{6{\left( {n + {t_n} - 1} \right) ^2}}\ge \dfrac{\epsilon }{2} \}, \end{aligned}$$

then from (4.4), we have \(E_1 \subseteq E_2\cup E_3\), and hence

$$\begin{aligned}{} & {} \lim _{n \rightarrow \infty }\dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in E_1:u_n<\nu \le v_n\}\right| \\{} & {} \quad \le \sum _{i=2}^{3}\left\{ \lim _{n \rightarrow \infty } \dfrac{1}{\left( v_n-u_n\right) ^{\alpha }}\left| \{\nu \in E_i:u_n<\nu \le v_n\}\right| \right\} . \end{aligned}$$

Now, by reasoning similar to the case \(j=1\) above, the condition (4.1) holds true for \(j=2\), which completes the proof.
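As a purely numerical illustration of this application (not part of the proof), the sketch below takes \(t_n=\lfloor \sqrt{n}\rfloor \), so that \(t_n/n\rightarrow 0\), approximates \(\Vert {\mathfrak {C}}_{n,t_n}(h_1)-h_1\Vert _{\infty }\) on a grid, and compares it with the bound (4.2).

```python
from math import comb, isqrt

def cao_operator(g, n, t_n, y):
    # Direct evaluation of (1.3).
    return sum(comb(n, v) * y**v * (1 - y)**(n - v) *
               sum(g((v + j) / (n + t_n - 1)) for j in range(t_n))
               for v in range(n + 1)) / t_n

def sup_error(g, n, t_n, grid=100):
    # Crude grid approximation of the uniform norm on [0, 1].
    return max(abs(cao_operator(g, n, t_n, k / grid) - g(k / grid))
               for k in range(grid + 1))

for n in (20, 80, 160):
    t_n = max(isqrt(n), 1)                    # t_n / n -> 0, as assumed throughout
    bound = (t_n - 1) / (2 * (n + t_n - 1))   # right-hand side of (4.2)
    print(n, sup_error(lambda s: s, n, t_n), bound)
```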

5 Korovkin-Type Approximation Theorem Involving Exponential Functions and Rate of Convergence via Deferred Statistical Convergence of Order \(\alpha \)

We extend the concept of deferred statistical convergence of order \(\alpha \), \((0<\alpha \le 1)\) for the approximation by positive linear operators defined for continuous functions on \({\mathbb {R}}^+:=[0,\infty )\) and establish an analogue of Theorem 1. We show that the test functions in this case are \(h_{\nu }(y)=e^{-\nu y},~\nu =0,1,2\). Further, we also derive a result analogous to Theorem 2.

Let \(B({\mathbb {R}}^+):=\{h:{\mathbb {R}}^+\rightarrow {\mathbb {R}}|~h~\text{ is } \text{ bounded } \text{ on }~{\mathbb {R}}^+\}\), with the sup-norm \(\Vert \cdot \Vert \), and \(C({\mathbb {R}}^+):=\{h\in B({\mathbb {R}}^+)|~h~\text{ is } \text{ continuous } \text{ on }~{\mathbb {R}}^+\}\).

Our following result is an analogue of Theorem 1 for positive linear operators \({\mathcal {L}}_n:C({\mathbb {R}}^+)\rightarrow B({\mathbb {R}}^+).\)

Theorem 3

For all \(h\in C({\mathbb {R}}^+)\),

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim \limits _n\Vert {\mathcal {L}}_n(h)-h\Vert =0, \end{aligned}$$
(5.1)

if and only if

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim \limits _n\Vert {\mathcal {L}}_ n(h_{\nu })-h_{\nu }\Vert =0,~\text{ for }~\nu =0,1,2. \end{aligned}$$
(5.2)

Proof

Note that \(h_{\nu }(y)=e^{-\nu y}\in C({\mathbb {R}}^+),\,\,\nu =0,1,2\), hence

$$\begin{aligned} (5.1)\Rightarrow (5.2) \end{aligned}$$

holds true. To establish the converse part, assume (5.2) to be true. Since \(h\in C({\mathbb {R}}^+)\), there exists a constant \(M>0\) such that \(|h(y)|\le M\) for all \(y\in {\mathbb {R}}^+\), which implies

$$\begin{aligned} |h(s)-h(y)|\le 2M,\,\,\forall \,s,y\in {\mathbb {R}}^+. \end{aligned}$$

Further, for an arbitrary \(\epsilon >0,\) we can find a \(\delta >0\) such that

$$\begin{aligned} |h(s)-h(y)|< \epsilon ,\,\,\text {whenever}\,\,|e^{-s}-e^{-y}|<\delta . \end{aligned}$$
(5.3)

Define \(\phi _1=\phi _1(s,y)=\left( e^{-s}-e^{-y}\right) ^2.\) If \(|e^{-s}-e^{-y}|\ge \delta ,\) then clearly,

$$\begin{aligned} |h(s)-h(y)|\le \frac{2M}{\delta ^2}\phi _1(s,y). \end{aligned}$$
(5.4)

From the Eqs. (5.3) and (5.4), we thus find that

$$\begin{aligned} |h(s)-h(y)|< \epsilon +\frac{2M}{\delta ^2}\phi _1(s,y), \forall s,y\in {\mathbb {R}}^+ \end{aligned}$$

or,

$$\begin{aligned} -\epsilon -\frac{2M}{\delta ^2}\phi _1(s,y)< h(s)-h(y)< \epsilon +\frac{2M}{\delta ^2}\phi _1(s,y). \end{aligned}$$

Hence, using the linearity and monotonicity of \({\mathcal {L}}_n(.;y)\), we obtain

$$\begin{aligned}{} & {} -{\mathcal {L}}_n(h_0(s);y)\epsilon -\frac{2M}{\delta ^2}{\mathcal {L}}_n(\phi _1(s,y);y)\nonumber \\{} & {} \quad<{\mathcal {L}}_n(h(s);y)-h(y){\mathcal {L}}_n(h_0(s);y)\nonumber \\{} & {} \quad <{\mathcal {L}}_n(h_0(s);y)\epsilon +\frac{2M}{\delta ^2}{\mathcal {L}}_n(\phi _1(s,y);y). \end{aligned}$$
(5.5)

Observe that

$$\begin{aligned}{} & {} |{\mathcal {L}}_n(h(s);y)-h(y)|\le \left| {\mathcal {L}}_n(h(s);y)-h(y) {\mathcal {L}}_n(h_0(s);y)\right| \nonumber \\{} & {} \quad +|h(y)|\left| {\mathcal {L}}_n(h_0(s);y)-h_0(y)\right| . \end{aligned}$$
(5.6)

Therefore, using (5.6) in (5.5), one gets,

$$\begin{aligned}{} & {} |{\mathcal {L}}_n(h(s);y)-h(y)|<{\mathcal {L}}_n(h_0(s);y)\epsilon +\frac{2M}{\delta ^2}{\mathcal {L}}_n(\phi _1(s,y);y)\nonumber \\{} & {} \quad +|h(y)|\left| {\mathcal {L}}_n(h_0(s);y)-h_0(y)\right| . \end{aligned}$$
(5.7)

Note that

$$\begin{aligned} {\mathcal {L}}_n(\phi _1(s,y);y)= & {} {\mathcal {L}}_n(\left( e^{-s}-e^{-y}\right) ^2;y) =\left\{ {\mathcal {L}}_n\left( h_2(s);y\right) -h_2(y)\right\} \nonumber \\{} & {} \quad -2h_1(y) \left\{ {\mathcal {L}}_n\left( h_1(s);y\right) -h_1(y)\right\} \nonumber \\{} & {} \quad +h_2(y)\left\{ {\mathcal {L}}_n\left( h_0(s);y\right) -h_0(y)\right\} . \end{aligned}$$
(5.8)

Hence combining (5.7) and (5.8), we obtain

$$\begin{aligned}{} & {} |{\mathcal {L}}_n(h(s);y)-h(y)|<\epsilon +\epsilon \left| {\mathcal {L}}_n(h_0(s);y)-h_0(y)\right| +\frac{2M}{\delta ^2} \bigg \{\left| {\mathcal {L}}_n\left( h_2(s);y\right) -h_2(y)\right| \\{} & {} \quad +2h_1(y)\left| {\mathcal {L}}_n\left( h_1(s);y\right) -h_1(y)\right| +h_2(y) \left| {\mathcal {L}}_n\left( h_0(s);y\right) -h_0(y)\right| \bigg \}\\{} & {} \quad +|h(y)|\left| {\mathcal {L}}_n\left( h_0(s);y\right) -h_0(y)\right| . \end{aligned}$$

Consequently,

$$\begin{aligned}{} & {} |{\mathcal {L}}_n(h(s);y)-h(y)|<\epsilon +\left( \epsilon +M+\frac{2M}{\delta ^2}\right) \left| {\mathcal {L}}_n\left( h_0(s);y\right) -h_0(y)\right| \\{} & {} \qquad +\frac{2M}{\delta ^2}\left| {\mathcal {L}}_n\left( h_2(s);y\right) -h_2(y)\right| \\{} & {} \qquad +\frac{4M}{\delta ^2}\left| {\mathcal {L}}_n\left( h_1(s);y\right) -h_1(y)\right| \\{} & {} \quad <\epsilon +B\left\{ \left| {\mathcal {L}}_n\left( h_2(s);y\right) -h_2(y)\right| +\left| {\mathcal {L}}_n\left( h_1(s);y\right) -h_1(y)\right| +\left| {\mathcal {L}}_n\left( h_0(s);y\right) -h_0(y)\right| \right\} , \end{aligned}$$

where

$$\begin{aligned} B=\max \left( \epsilon +M+\frac{2M}{\delta ^2}, \frac{4M}{\delta ^2}\right) . \end{aligned}$$

Taking the supremum over \(y\in {\mathbb {R}}^+\) in the last inequality, we get \(\Vert {\mathcal {L}}_n(h)-h\Vert \le \epsilon +B\sum \nolimits _{\nu =0}^{2}\Vert {\mathcal {L}}_n(h_{\nu })-h_{\nu }\Vert .\) Now, for any \(\epsilon {'}>0,\) choose the above \(\epsilon \) such that \(0<\epsilon <\epsilon {'}\), and define:

$$\begin{aligned} W=\bigg \{n\in {\mathbb {N}}:\Vert {\mathcal {L}}_n(h)-h\Vert \ge \epsilon {'}\bigg \}, \end{aligned}$$

and for \(\nu =0,1,2,\)

$$\begin{aligned} W_{\nu }=\bigg \{n\in {\mathbb {N}}:\Vert {\mathcal {L}}_n(h_{\nu }) -h_{\nu }\Vert \ge \frac{\epsilon {'}-\epsilon }{3B}\bigg \}, \end{aligned}$$

then clearly, \(W\subseteq \cup _{\nu =0}^2W_{\nu }\) and hence

$$\begin{aligned}{} & {} \frac{1}{(v_n-u_n)^{\alpha }}\left| \bigg \{k\in W: u_n<k\le v_n\bigg \} \right| \\{} & {} \quad \le \sum \limits _{\nu =0}^2\frac{1}{(v_n-u_n)^{\alpha }} \left| \bigg \{k\in W_{\nu }: u_n<k\le v_n\bigg \}\right| . \end{aligned}$$

In view of our hypothesis, we have

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{1}{(v_n-u_n)^{\alpha }} \left| \bigg \{k\in W_{\nu }: u_n<k\le v_n\bigg \}\right| =0 \end{aligned}$$

for \(\nu =0,1\) and 2. Hence,

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{1}{(v_n-u_n)^{\alpha }} \left| \bigg \{k\in W: u_n<k\le v_n\bigg \}\right| =0, \end{aligned}$$

which implies that

$$\begin{aligned} stat_{u,v}^{D,\alpha }-\lim \limits _{n}\Vert {\mathcal {L}}_n(h)-h\Vert =0. \end{aligned}$$

\(\square \)

Now we find the rate of deferred statistical convergence of order \(\alpha \) of the operators \({\mathcal {L}}_n(h)\) to \(h\), \(\forall ~ h\in C({\mathbb {R}}_+)\).

Theorem 4

For \(h\in C({\mathbb {R}}_+)\), let \(\langle a_n\rangle\) and \(\langle b_n\rangle\) be monotonically decreasing sequences in \((0,\infty )\), and assume that the following conditions hold:

$$\begin{aligned} \Vert {\mathcal {L}}_n(h_0)-h_0\Vert =stat_{u,v}^{D,\alpha }-o(a_n), \end{aligned}$$
(5.9)
$$\begin{aligned} \omega (h;\lambda _n)=stat_{u,v}^{D,\alpha }-o(b_n). \end{aligned}$$
(5.10)

Then, we have

$$\begin{aligned} \Vert {\mathcal {L}}_n(h)-h\Vert =stat_{u,v}^{D,\alpha }-o(c_n), \end{aligned}$$
(5.11)

where \(c_n=\max \{a_n,b_n\},~\forall ~n\in {\mathbb {N}}\), \(\lambda _n=\sqrt{\Vert {\mathcal {L}}_n(\phi _1(s,y))\Vert }\) and \(\phi _1(s,y)\) is defined as in Theorem 3.

Proof

Let \(y\in {\mathbb {R}}_+\), be arbitrary but fixed. Then, we have

$$\begin{aligned}{} & {} |{\mathcal {L}}_n(h(s);y)-h(y)|\le {\mathcal {L}}_n\left( |h(s)-h(y)|;y\right) \\{} & {} \qquad +|h(y)||{\mathcal {L}}_n(h_0(s);y)-h_0(y)|\\{} & {} \quad \le {\mathcal {L}}_n\left( 1+\frac{(e^{-s}-e^{-y})^2}{{\lambda _n}^2};y\right) \omega (h;\lambda _n)\\{} & {} \qquad +|h(y)||{\mathcal {L}}_n(h_0(s);y)-h_0(y)|\\{} & {} \quad =\left( {\mathcal {L}}_n(h_0(s);y)+\frac{1}{{\lambda _n}^2}{\mathcal {L}}_n (\phi _1(s,y);y)\right) \omega (h;\lambda _n)\\{} & {} \qquad +|h(y)||{\mathcal {L}}_n(h_0(s);y)-h_0(y)|. \end{aligned}$$

Taking into account \(\lambda _n=\sqrt{\Vert {\mathcal {L}}_n(\phi _1(s,y))\Vert },\) we get

$$\begin{aligned} \Vert {\mathcal {L}}_n(h)-h\Vert\le & {} 2\omega (h;\lambda _n) +\Vert {\mathcal {L}}_n(h_0)-h_0\Vert \left( \omega (h;\lambda _n)+ \Vert h\Vert \right) . \end{aligned}$$
(5.12)

Hence, considering (5.9), (5.10) and Lemma 2, (5.12) leads us to the assertion (5.11). \(\square \)

Remark 2

Let \(h\in C({\mathbb {R}}_+)\) be such that \(h\in Lip_{K}1\) (Lipschitz class) and

$$\begin{aligned} \Vert {\mathcal {L}}_n(h_{\nu })-h_{\nu }\Vert =stat_{u,v}^ {D,\alpha }-o(u_{n_{\nu }}),\,\,\,\forall ~\nu =0,1,2, \end{aligned}$$
(5.13)

where \(\langle u_{n_{\nu }}\rangle\), \(\nu =0,1,2\), are monotonically decreasing sequences in \((0,\infty )\).

Then, from relation (5.8), for all \(y\in {\mathbb {R}}_+\), we can write

$$\begin{aligned} {\mathcal {L}}_n(\phi _1(s,y);y)\le 2 \sum \limits _{\nu =0}^2|{\mathcal {L}}_n(h_{\nu }(s);y)-h_{\nu }(y)|. \end{aligned}$$
(5.14)

Hence from (5.13), (5.14) and Lemma 2, it follows that

$$\begin{aligned} \lambda _n=\sqrt{\Vert {\mathcal {L}}_n(\phi _1(s,y)) \Vert }=stat_{u,v}^{D,\alpha }-o(d_{n}), \end{aligned}$$
(5.15)

where,

$$\begin{aligned} d_n=\sqrt{\underset{0\le \nu \le 2}{\max }u_{n_{\nu }}}. \end{aligned}$$

Now in view of our assumption \(h\in Lip_{K}1\), we get

$$\begin{aligned} \omega (h;{\lambda }_n)=stat_{u,v}^{D,\alpha }-o(d_{n}). \end{aligned}$$

Applying Theorem 4, we immediately see that for all \(h\in C({\mathbb {R}}_+)\),

$$\begin{aligned} \Vert {\mathcal {L}}_n(h)-h\Vert =stat_{u,v}^{D,\alpha }-o(e_{n}), \end{aligned}$$

where \(e_n=\max \left\{ d_n, u_{n_0}\right\} \).

We now present below an example to demonstrate that our Theorem 3 is a non-trivial generalization of the classical version of the Korovkin type theorem (Theorem 1, Boyanov and Veselinov (1970)).

Example 1

Let us consider a sequence of positive linear operators defined by

$$\begin{aligned} {\mathcal {S}}_n(f(s);y)=(1+y_n)B_n(f(s);y), \end{aligned}$$
(5.16)

where,

$$\begin{aligned} B_n(f(s);y)=\sum \limits _{\nu =0}^{\infty }f \left( \frac{\nu }{n}\right) {n+{\nu }-1 \atopwithdelims (){\nu }}y^{\nu }(1+y)^{-n-{\nu }}, \end{aligned}$$

is the Baskakov operator, and the sequence \((y_n)\) is defined by \(y_n = {\left\{ \begin{array}{ll} 1, &{}\,\, \text {if} \,\,n=m^3,m\in {\mathbb {N}},\\ 0, &{} \,\,\text {otherwise}. \end{array}\right. }\). Further, let us assume \(u_n=2n,~v_n=3n\), and \(\alpha =1\). Then the sequence \(\langle y_n\rangle\) converges deferred statistically of order 1 to zero, but does not converge in the usual sense.

Now from (5.16), by a simple computation we have

$$\begin{aligned}{} & {} {\mathcal {S}}_n(h_0(s);y)=(1+y_n), \\ {\mathcal {S}}_n(h_1(s);y)=(1+y_n)\left( 1+y-ye^{-\frac{1}{n}}\right) ^{-n}, \end{aligned}$$

and,

$$\begin{aligned} {\mathcal {S}}_n(h_2(s);y)=(1+y_n)\left( 1+y-ye^{\frac{-2}{n}}\right) ^{-n}. \end{aligned}$$

Then, clearly \(stat_{u,v}^{D,\alpha }-\lim \limits _{n}\Vert {\mathcal {S}}_n(h_{\nu })-h_{\nu }\Vert =0\), where \(h_{\nu }(y)=e^{-{\nu }y}\), for \(\nu =0,1,2.\)

Hence by using Theorem 3, we obtain \(stat_{u,v}^{D,\alpha }-\lim \limits _{n}\Vert {\mathcal {S}}_n(h)-h\Vert =0.\)

However, the classical version of the Korovkin type theorem proved by Boyanov and Veselinov (Theorem 1, Boyanov and Veselinov (1970)) for the approximation of functions in \(C({\mathbb {R}}^+)\) is not applicable here, because the sequence \(\langle y_n\rangle\) does not converge to zero in the usual sense.
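For completeness, a small numerical sketch of Example 1 (hypothetical names, with the supremum over \({\mathbb {R}}^+\) approximated on a bounded grid) uses the closed-form expressions for \({\mathcal {S}}_n(h_{\nu };y)\) computed above and counts, over the deferred window \(u_n=2n<k\le v_n=3n\), the proportion of indices at which the error in \(h_1\) exceeds a threshold; this proportion vanishes because the perfect cubes are sparse in \((2n,3n]\).

```python
from math import exp

def y_seq(n):
    # y_n = 1 if n is a perfect cube, 0 otherwise.
    return 1.0 if round(n ** (1 / 3)) ** 3 == n else 0.0

def S_moment(nu, n, y):
    # Closed form of S_n(h_nu(s); y) for h_nu(y) = exp(-nu * y), nu = 0, 1, 2,
    # as computed from (5.16) above.
    if nu == 0:
        return 1.0 + y_seq(n)
    return (1.0 + y_seq(n)) * (1.0 + y - y * exp(-nu / n)) ** (-n)

def err(k, grid=50):
    # sup over y of |S_k(h_1; y) - exp(-y)|, approximated on a grid of [0, 5].
    return max(abs(S_moment(1, k, 5 * t / grid) - exp(-5 * t / grid))
               for t in range(grid + 1))

# Deferred window u_n = 2n < k <= v_n = 3n, alpha = 1: the proportion of indices
# with a large error vanishes, since the perfect cubes are sparse in (2n, 3n].
for n in (50, 200, 800):
    bad = sum(1 for k in range(2 * n + 1, 3 * n + 1) if err(k) >= 0.5)
    print(n, bad / n)   # tends to 0
```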