Abstract

In this paper, we consider the multiple-set split common fixed point problem in Hilbert spaces. We first study several key properties of strictly pseudocontractive mappings, in particular their stability under convex combination. By means of these properties, we propose new iterative methods for solving this problem as well as several related problems. Under mild conditions, we establish weak convergence of the proposed methods, which extends existing results from the case of two sets to the case of multiple sets. As an application, we apply the theoretical results to the multiple-set split equality problem and to elastic net regularization.

1. Introduction

The well-known split feasibility problem (SFP) [1] is formulated as follows: find a point in a given nonempty closed convex subset of one Hilbert space whose image under a given bounded linear mapping lies in a given nonempty closed convex subset of a second Hilbert space. There are many generalizations of the SFP, one of which passes from two sets to finitely many sets, namely, the multiple-set split feasibility problem (MSFP) [2]: find a point in the intersection of one finite family of nonempty closed convex sets whose image under the same bounded linear mapping lies in the intersection of a second such family.
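In a standard notation (ours, which may differ from the original symbols: $H_1, H_2$ real Hilbert spaces, $A\colon H_1\to H_2$ a bounded linear mapping, and $t, r$ positive integers), problems (1) and (2) presumably read as follows:
\[
\text{(SFP)}\qquad \text{find } x\in C \ \text{ such that } \ Ax\in Q,
\]
where $C\subseteq H_1$ and $Q\subseteq H_2$ are nonempty closed convex sets, and
\[
\text{(MSFP)}\qquad \text{find } x\in \bigcap_{i=1}^{t} C_i \ \text{ such that } \ Ax\in \bigcap_{j=1}^{r} Q_j,
\]
where $C_1,\dots,C_t\subseteq H_1$ and $Q_1,\dots,Q_r\subseteq H_2$ are nonempty closed convex sets.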

The split common fixed point problem (SCFP) [3] is another generalization of the SFP: it asks for an element of one fixed point set whose image under a bounded linear transformation belongs to another fixed point set, the two fixed point sets being those of two given nonlinear mappings. In particular, if both mappings are metric projections, then problem (3) reduces to the SFP. As a further extension of the SFP, we recall the multiple-set split common fixed point problem (MSCFP), which extends the SCFP from the case of two sets to the case of multiple sets: it asks for a point in the intersection of the fixed point sets of one finite family of nonlinear mappings whose image under the bounded linear mapping lies in the intersection of the fixed point sets of a second such family. Recently, we [4] considered problem (4) in the case where the involved mappings are demicontractive. These problems have been studied extensively in various areas such as image reconstruction and signal processing [5–9].
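In the same assumed notation, with $U_i\colon H_1\to H_1$ and $T_j\colon H_2\to H_2$ nonlinear mappings and $\operatorname{Fix}(\cdot)$ denoting fixed point sets, problems (3) and (4) presumably take the standard forms
\[
\text{(SCFP)}\qquad \text{find } x\in \operatorname{Fix}(U) \ \text{ such that } \ Ax\in \operatorname{Fix}(T),
\]
\[
\text{(MSCFP)}\qquad \text{find } x\in \bigcap_{i=1}^{t}\operatorname{Fix}(U_i) \ \text{ such that } \ Ax\in \bigcap_{j=1}^{r}\operatorname{Fix}(T_j).
\]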

There are many algorithms in the literature that can solve the SCFP (see, e.g., [10–16]). However, in most of these algorithms, the choice of the stepsize depends on the norm of the bounded linear mapping. Thus, to implement these algorithms, one has to compute (or at least estimate) this norm, which is generally not easy in practice. One way to avoid this is to adopt a variable stepsize that ultimately has no relation to the norm [11, 12, 17]. In this connection, Wang [18] recently proposed the following method: where denotes the adjoint of the linear mapping, stands for the identity mapping, and the stepsize is chosen such that
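For orientation, a typical iteration of this type in the literature reads, in our assumed notation (a sketch only, not necessarily the exact scheme (5) of [18]),
\[
x_{n+1} = x_n - \rho_n\bigl[(I-U)x_n + A^{*}(I-T)Ax_n\bigr],
\]
where $A^{*}$ is the adjoint of $A$, $I$ is the identity mapping, and the stepsize $\rho_n$ is computed from quantities available at the current iterate $x_n$, so that no estimate of $\|A\|$ is required.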

It is shown that if both mappings are firmly nonexpansive, then the sequence generated by (5) converges weakly to a solution of problem (3). It is clear that such a choice of the stepsize does not rely on the norm of the linear mapping. Kraikaew and Saejung [16] weakened condition (6) as follows:

Furthermore, we [19] extended the above results from the class of firmly nonexpansive mappings to the class of strictly pseudocontractive mappings.

Inspired by the above work, in this paper we present and investigate methods for solving the MSCFP in Hilbert spaces. We first explore several properties of strictly pseudocontractive mappings, in particular their stability under convex combination. Exploiting these properties, we propose a new iterative algorithm for the MSCFP, as well as for the MSFP. Under mild conditions, we obtain weak convergence of the proposed algorithm. Our results extend related work from the case of two sets to the case of multiple sets.

2. Preliminary

Throughout the paper, all underlying spaces are assumed to be real Hilbert spaces, and $\operatorname{Fix}(T)$ denotes the fixed point set of a mapping $T$. For any two elements and any scalar, the following identities are well known [20]:
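In our notation (which may differ from the original display), these read, for all $x, y$ and every $\lambda\in[0,1]$,
\[
\|x+y\|^{2} = \|x\|^{2} + 2\langle x, y\rangle + \|y\|^{2},
\]
\[
\|\lambda x + (1-\lambda)y\|^{2} = \lambda\|x\|^{2} + (1-\lambda)\|y\|^{2} - \lambda(1-\lambda)\|x-y\|^{2}.
\]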

Recall that the mapping is called nonexpansive if

It is called firmly nonexpansive if

It is called -strictly pseudocontractive if
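For a mapping $T$ defined on a Hilbert space, we assume throughout the standard (Browder–Petryshyn-type) forms of these definitions, stated here in our own notation for all $x, y$:
\[
\text{nonexpansive:}\qquad \|Tx-Ty\|\le \|x-y\|;
\]
\[
\text{firmly nonexpansive:}\qquad \|Tx-Ty\|^{2}\le \langle Tx-Ty,\ x-y\rangle;
\]
\[
\lambda\text{-strictly pseudocontractive } (\lambda\in[0,1)):\qquad \|Tx-Ty\|^{2}\le \|x-y\|^{2} + \lambda\|(I-T)x-(I-T)y\|^{2}.
\]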

It is clear that the class of strictly pseudocontractive mappings includes the class of nonexpansive mappings, while the latter includes the class of firmly nonexpansive mappings; in general, these inclusions are proper (cf. [20, 21]). The following properties of strictly pseudocontractive mappings play an important role in the subsequent analysis. It was shown in [21] that every strictly pseudocontractive mapping satisfies a useful inequality, recalled below.
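In the Browder–Petryshyn setting assumed above, this property can be stated as the inverse strong monotonicity of $I-T$ (our formulation, which may differ from the exact display in [21]): if $T$ is $\lambda$-strictly pseudocontractive, then for all $x, y$,
\[
\langle x-y,\ (I-T)x-(I-T)y\rangle \ \ge\ \frac{1-\lambda}{2}\,\|(I-T)x-(I-T)y\|^{2};
\]
in particular, $\langle x-z,\ x-Tx\rangle \ge \frac{1-\lambda}{2}\|x-Tx\|^{2}$ for every $z\in\operatorname{Fix}(T)$.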

Moreover, the fixed point set of such a mapping is closed and convex. We now collect further properties of strictly pseudocontractive mappings.

Lemma 1. A mapping is -strictly pseudocontractive with if and only if there is a nonexpansive mapping such that
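Consistent with the definition assumed above, relation (13) is presumably the standard characterization (our formulation):
\[
S = \lambda I + (1-\lambda)T, \qquad\text{equivalently}\qquad T = \frac{1}{1-\lambda}\,(S-\lambda I),
\]
so that $T$ is $\lambda$-strictly pseudocontractive precisely when the mapping $S=\lambda I+(1-\lambda)T$ is nonexpansive.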

Proof. (⇒) Assume is -strictly pseudocontractive. Let . It is easy to verify that fulfils (13). It remains to show that is nonexpansive. To this end, fix any . It then follows from (8) and the property of strictly pseudocontractive mappings that Hence, we have ; that is, is nonexpansive.
(⇐) Assume that there is a nonexpansive mapping such that (13) holds. Choose any . It then follows from (8) and the property of nonexpansive mappings that Hence, is strictly pseudocontractive, and thus, the proof is complete.

Remark 2. Note that a firmly nonexpansive mapping is -strictly pseudocontractive. It is well known that a mapping is firmly nonexpansive if and only if there is a nonexpansive mapping such that The following lemma can be regarded as an extension of this assertion.

Lemma 3. Assume that is strictly pseudocontractive for each . Let where . If is nonempty, then
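In our notation, the content of Lemma 3 is presumably the following standard fact: if each $T_i$ ($i=1,\dots,N$) is strictly pseudocontractive, $T:=\sum_{i=1}^{N} w_i T_i$ with $w_i>0$ and $\sum_{i=1}^{N} w_i=1$, and $\bigcap_{i=1}^{N}\operatorname{Fix}(T_i)\neq\varnothing$, then
\[
\operatorname{Fix}\Bigl(\sum_{i=1}^{N} w_i T_i\Bigr) \;=\; \bigcap_{i=1}^{N}\operatorname{Fix}(T_i).
\]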

Proof. It suffices to show that Fix and choose any . By our hypothesis, there exists such that for every . Adding up these inequalities, we have Thus, Since , we have for all . Moreover, since is chosen arbitrarily, we get Hence, the proof is complete.

Lemma 4. For each , let and , and is strictly pseudocontractive with Then, is strictly pseudocontractive with
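In the same assumed notation, Lemma 4 presumably asserts that a convex combination of strictly pseudocontractive mappings is again strictly pseudocontractive; one admissible constant (our formulation, possibly stated differently in the original) is the largest of the individual constants:
\[
\text{if each } T_i \text{ is } \lambda_i\text{-strictly pseudocontractive, } w_i>0,\ \sum_{i=1}^{N} w_i=1,\ \text{then } \sum_{i=1}^{N} w_i T_i \text{ is } \lambda\text{-strictly pseudocontractive with } \lambda=\max_{1\le i\le N}\lambda_i.
\]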

Proof. By our hypothesis, for each , there exists a nonexpansive mapping such that . Now, let us define a mapping as where is defined as in (19). It is readily seen that From Lemma 1, it remains to show that is nonexpansive. To this end, choose any . By , we have Hence, is nonexpansive, and thus, the proof is complete.

3. The Case for Strictly Pseudocontractive Mappings

First, let us recall a weak convergence theorem for the iterative method (5), which approximates a solution of the two-set split common fixed point problem.

Theorem 5 ([19], Theorem 3.1). Let . Assume that and are, respectively, - and -strictly pseudocontractive mappings, and where Then, the sequence , generated by (5), converges weakly to a solution of problem (3).
We next consider the MSCFP under the following basic assumptions:
(A1) the MSCFP is consistent; that is, it admits at least one solution;
(A2) each mapping of the first family is strictly pseudocontractive;
(A3) each mapping of the second family is strictly pseudocontractive.

Algorithm 1. Let be arbitrary. Given , update the next iteration via where with , with , and are properly chosen stepsizes.
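Since Theorem 6 below is proved by applying method (5) to the convex combinations $U:=\sum_{i=1}^{t} w_i U_i$ and $T:=\sum_{j=1}^{r} v_j T_j$, a plausible sketch of the iteration of Algorithm 1 (in our assumed notation, not necessarily the exact formula) is
\[
x_{n+1} = x_n - \rho_n\Bigl[\Bigl(I-\sum_{i=1}^{t} w_i U_i\Bigr)x_n + A^{*}\Bigl(I-\sum_{j=1}^{r} v_j T_j\Bigr)Ax_n\Bigr],
\]
with positive weights $w_i, v_j$ summing to one and properly chosen stepsizes $\rho_n$.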

Theorem 6. Assume that conditions (A1)-(A3) hold and is chosen so that where Then, the sequence , generated by Algorithm 1, converges weakly to a solution of MSCFP.

Proof. Let and By Lemma 4, we conclude that is -strictly pseudocontractive with , and is -strictly pseudocontractive with Hence, by formula (23), we have Moreover, by Lemma 3, and . Therefore, by applying Theorem 5, we at once get the assertion as desired.
It seems that the above choice of the stepsize requires prior knowledge of certain constants and of the operator norm. However, as shown below, there is a special case in which the selection of the stepsize ultimately involves neither of these quantities.

Corollary 7. Assume that conditions (A1)-(A3) hold, and the stepsize is chosen so that Then, the sequence generated by Algorithm 1 converges weakly to a solution of the MSCFP.
Notably, if the nonlinear mappings in (4) are all metric projections, then the MSCFP reduces to the MSFP. Consequently, we can apply our result to solve the MSFP. As an application of Algorithm 1, we obtain the following algorithm for solving problem (2).

Algorithm 2. Let be arbitrary. Given , update the next iteration via where with , with , and are properly chosen stepsizes.
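Correspondingly, a plausible sketch of Algorithm 2 (again in our notation, not necessarily the exact formula) replaces the above mappings by metric projections:
\[
x_{n+1} = x_n - \rho_n\Bigl[\Bigl(I-\sum_{i=1}^{t} w_i P_{C_i}\Bigr)x_n + A^{*}\Bigl(I-\sum_{j=1}^{r} v_j P_{Q_j}\Bigr)Ax_n\Bigr],
\]
where $P_{C_i}$ and $P_{Q_j}$ denote the metric projections onto $C_i$ and $Q_j$.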

Corollary 8. Assume that MSFP is consistent. If the stepsize is chosen so that then the sequence , generated by Algorithm 2, converges weakly to a solution of MSFP.

Proof. Let and By Lemma 4, we conclude that and are both -strictly pseudocontractive, that is, firmly nonexpansive. In this situation, we have By applying Theorem 6, we at once get the assertion as desired.

Corollary 9. Assume MSFP is consistent. If the stepsize is chosen so that then the sequence , generated by Algorithm 2, converges weakly to a solution of MSFP.

4. Applications

In this section, we first give an application of our theoretical results to the multiple-set split equality problem (MSEP), which is more general than the original split equality problem [22].

Example 1. The multiple-set split equality problem (MSEP) asks to find such that where and are two positive integers, and are two bounded linear mappings, and and are two families of nonlinear mappings.
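In an assumed notation ($H_1, H_2, H_3$ real Hilbert spaces, $A\colon H_1\to H_3$ and $B\colon H_2\to H_3$ bounded linear, $U_i\colon H_1\to H_1$ and $T_j\colon H_2\to H_2$ nonlinear; the original symbols may differ), problem (32) presumably reads
\[
\text{find } x\in\bigcap_{i=1}^{t}\operatorname{Fix}(U_i),\ \ y\in\bigcap_{j=1}^{r}\operatorname{Fix}(T_j) \ \text{ such that } \ Ax=By.
\]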
We next consider the MSEP under the following basic assumptions:
(B1) the MSEP is consistent; that is, it admits at least one solution;
(B2) each mapping of the first family is strictly pseudocontractive;
(B3) each mapping of the second family is strictly pseudocontractive.
In this setting, we propose a new method for solving problem (32).

Algorithm 3. For an arbitrary initial guess , define recursively by where is a sequence of positive numbers.
To carry out the convergence analysis, we consider the product space , in which the inner product and the norm are, respectively, defined by where with Define a linear mapping by Let be the metric projection onto the set , and define a nonlinear mapping as where and are as above.
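Under the assumed notation above, this construction presumably takes the standard form
\[
S = H_1\times H_2,\qquad \langle (x_1,y_1),(x_2,y_2)\rangle = \langle x_1,x_2\rangle + \langle y_1,y_2\rangle,\qquad \|(x,y)\|^{2} = \|x\|^{2}+\|y\|^{2},
\]
\[
G(x,y) = Ax - By,\qquad V(x,y) = \Bigl(\sum_{i=1}^{t} w_i U_i x,\ \sum_{j=1}^{r} v_j T_j y\Bigr),
\]
so that the MSEP becomes the two-set problem of finding $w=(x,y)\in\operatorname{Fix}(V)$ with $Gw=0$; here the metric projection onto $\{0\}$ plays the role of the second mapping, since its fixed point set is $\{0\}$.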

Lemma 10 ([23], Lemma 12). Let the mapping be defined as in (35). Then, is bounded and linear. Moreover, for , it follows that
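With $G(x,y)=Ax-By$ as assumed above, the displayed facts of Lemma 10 are presumably of the following form (our formulation):
\[
G^{*}z = (A^{*}z,\ -B^{*}z),\qquad G^{*}G(x,y) = \bigl(A^{*}(Ax-By),\ -B^{*}(Ax-By)\bigr),
\]
together with the bound $\|G\|^{2}\le \|A\|^{2}+\|B\|^{2}$.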

Lemma 11. Let the mapping be defined as in (36). Then, . Moreover, if conditions (B1)-(B3) are met, then is -strictly pseudocontractive with

Proof. By Lemma 3, it is easy to verify the first assertion. To show the second assertion, fix any By our hypothesis, is -strictly pseudocontractive with is -strictly pseudocontractive with It then follows that From (38), we obtain the result as desired.

Theorem 12. Assume that conditions (B1)-(B3) hold. If is chosen so that where with defined as in (38), then the sequence generated by Algorithm 3 converges weakly to a solution of problem (32).

Proof. Let and let be defined as above. Thus, problem (32) is equivalently transformed into finding such that Moreover, Algorithm 3 can be rewritten as Note that, by Lemma 10, is -strictly pseudocontractive and is -strictly pseudocontractive. Hence, by Theorem 5, we conclude that converges weakly to some such that By Lemma 11, it is readily seen that and .

We next give an application of our theoretical results to a problem arising from the real world. In statistics and machine learning, the least absolute shrinkage and selection operator (LASSO for short) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces. It was originally introduced by Tibshirani [24], who coined the term and provided further insights into the observed performance.

Subsequently, a number of LASSO variants have been created in order to remedy certain limitations of the original technique and to make the method more useful for particular problems. Among them, elastic net regularization adds an additional ridge-regression-like penalty, which improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy. More specifically, the LASSO is a regularized regression method with the ℓ1 penalty, while the elastic net is a regularized regression method that linearly combines the ℓ1 and ℓ2 penalties of the LASSO and ridge methods. Here, the ℓ1 penalty of a vector is the sum of the absolute values of its components, and the ℓ2 penalty is the sum of their squares.
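In a standard formulation (our notation: data matrix $A$, response $b$, parameters $\lambda_1,\lambda_2>0$; the exact form used in [25] may differ), the penalties and the elastic net problem read, for $x=(x_1,\dots,x_n)\in\mathbb{R}^{n}$,
\[
\|x\|_{1} = \sum_{i=1}^{n}|x_i|,\qquad \|x\|_{2}^{2} = \sum_{i=1}^{n}x_i^{2},
\]
\[
\min_{x\in\mathbb{R}^{n}}\ \tfrac12\|Ax-b\|_{2}^{2} + \lambda_{1}\|x\|_{1} + \tfrac{\lambda_{2}}{2}\|x\|_{2}^{2}.
\]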

Example 2 (see [25]). The elastic net requires solving the problem where , , and are given parameters. This problem is a specific instance of the SCFP with and where and

Algorithm 4. Let be arbitrary. Given , update the next iteration via where with and is a properly chosen stepsize.

It is clear that the above mappings are, respectively, firmly nonexpansive and firmly quasi-nonexpansive, which implies that they are, respectively, -strictly pseudocontractive and -demicontractive mappings. As an application of Theorem 6, we can deduce that the sequence generated by Algorithm 4 converges to a solution to problem (46) provided that the stepsize is chosen so that
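As a concrete illustration of this application, the following minimal sketch solves the elastic net objective written above by a standard proximal gradient (ISTA) iteration; it assumes our notation (A, b, lam1, lam2) and is an illustration only, not the paper's Algorithm 4.

import numpy as np

def elastic_net_ista(A, b, lam1, lam2, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2 by proximal
    # gradient (ISTA). Illustration only; not the exact Algorithm 4 of the paper.
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)      # 1/L, where L is the squared spectral norm of A
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - b))        # gradient step on the least-squares term
        # prox of step*(lam1*||.||_1 + 0.5*lam2*||.||^2): soft-threshold, then shrink
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0) / (1.0 + step * lam2)
    return x

# Example usage with random data:
# x_hat = elastic_net_ista(np.random.randn(30, 100), np.random.randn(30), 0.1, 0.5)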

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.