In modern science and technology many optimization problems

    2020-09-07

    In modern science and technology, many optimization problems need to be solved in real time, and classical methods cannot deliver real-time solutions to such problems, especially large-scale ones. As a relatively new metaheuristic, particle swarm optimization (PSO) [18], [19] has proved to be a competitive algorithm compared with other methods such as the genetic algorithm (GA) and simulated annealing (SA). It can converge to the optimal solution rapidly [20], [21], and this advantage has attracted researchers to solve the BLP problem with a PSO approach; a hierarchical particle swarm optimization for the BLP problem was proposed in [22], [23]. In this paper, for a class of nonlinear bilevel programming (NBLP) problems, the lower-level problem is replaced by its Karush-Kuhn-Tucker optimality conditions, which reduces the NBLP problem to a regular nonlinear program with complementarity constraints. This program is then smoothed with the CHKS smoothing function. Finally, a particle swarm optimization approach is proposed to solve the smoothed nonlinear program and obtain an approximate optimal solution of the NBLP problem. The paper is organized as follows: Section 2 introduces the formulation and basic definitions of nonlinear bilevel programming, as well as the smoothing method for the nonlinear complementarity problem. Section 3 presents a particle swarm optimization algorithm for solving the smoothed programming problem. Numerical examples are reported in Section 4, and the conclusion is given in Section 5.
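    Since the approach ultimately relies on PSO to minimize the smoothed single-level problem, a minimal sketch of a standard global-best PSO minimizer is given below. It illustrates the general mechanism only and is not the algorithm of Section 3; the name pso_minimize, the box bounds, and the hyperparameter values are illustrative assumptions.

    # Minimal global-best PSO sketch (illustrative; not the authors' exact scheme).
    import numpy as np

    def pso_minimize(objective, bounds, n_particles=30, n_iters=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize `objective` over the box given by `bounds` = [(low, high), ...]."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        dim = len(bounds)

        # Initialize positions uniformly in the box and velocities to zero.
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)

        pbest = x.copy()                                 # personal best positions
        pbest_val = np.array([objective(p) for p in x])  # personal best values
        g = pbest[np.argmin(pbest_val)].copy()           # global best position
        g_val = pbest_val.min()

        for _ in range(n_iters):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # Velocity update: inertia + cognitive pull + social pull.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                   # keep particles inside the box

            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved] = x[improved]
            pbest_val[improved] = vals[improved]
            if pbest_val.min() < g_val:
                g_val = pbest_val.min()
                g = pbest[np.argmin(pbest_val)].copy()

        return g, g_val

    For example, pso_minimize(lambda z: float(np.sum(z**2)), [(-5.0, 5.0)] * 3) drives the swarm toward the origin; in the paper's setting, the objective would be built from the smoothed single-level program, with constraints handled, for example, through a penalty term.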
    Nonlinear bilevel programming problem and smoothing method

    We consider the nonlinear bilevel programming (NBLP) problem formulated as follows [24], where the functions involved are continuously differentiable. The term (UP) is called the upper-level problem and the term (LP) the lower-level problem; correspondingly, the associated variables are the upper-level and lower-level variables, respectively. To ensure that problem (1) is well posed, the constraint region S is assumed to be nonempty and compact, and for each decision taken by the upper-level decision maker, the lower-level decision maker is assumed to have some room to respond. We can then reduce the NBLP problem to the one-level programming problem (2), in which the additional variables are the Lagrange multipliers of the lower-level problem, and define its optimal solution accordingly.

    Consider the nonlinear complementarity problem (NCP for short) contained in problem (2). We note that problem (3) is non-smooth, so standard methods are not guaranteed to solve it [25], [26]. In the last few decades, various methods have been developed for the NCP, among which smoothing-type algorithms are some of the most effective. Fukushima and Pang [27] proposed a smoothing method for mathematical programs with complementarity constraints. Chen and Ma [28] proposed a regularized smoothing Newton method for the nonlinear complementarity problem, based on the Fischer-Burmeister function with a perturbed parameter. Problem (3), with its nonlinear complementarity condition, is non-convex and non-differentiable, and it does not even satisfy the usual regularity assumptions. For this reason we apply a smoothing method. The Chen-Mangasarian smoothing function possesses the required complementarity property but is not differentiable everywhere, whereas the CHKS smoothing function possesses the complementarity property for every positive value of the smoothing parameter and is smooth with respect to its arguments. Hence, by applying the CHKS smoothing function, problem (3) can be approximated by the smoothed system (4), and problem (2) can accordingly be transformed into problem (5).

    The smoothing factor is treated as a variable rather than a parameter in problems (4) and (5), which avoids the difficulty that the complementarity condition in problem (2) fails to satisfy the regularity assumptions. For a given smoothing factor and a fixed x, obtaining the optimal solution y of the lower-level problem only requires solving problem (4). If the resulting point satisfies all constraints of the lower-level problem, then the lower-level objective attains its optimal value at y. Thus, we only need to solve the resulting single-level programming problem to obtain the optimal solution x, which then yields an approximate optimal solution of the NBLP problem.
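    As a point of reference, the NBLP problem, its KKT-based single-level reformulation, and the CHKS smoothing function are commonly written as follows. This is a hedged restatement using the conventional symbols F, f, G, g, \lambda, and \varepsilon, not necessarily the authors' exact notation or problems (1)-(5) verbatim.

    \begin{align*}
    \text{(UP)}\quad & \min_{x}\; F(x,y) \quad \text{s.t. } G(x,y)\le 0,\\
    \text{(LP)}\quad & \text{where } y \text{ solves } \min_{y}\; f(x,y) \quad \text{s.t. } g(x,y)\le 0 .
    \end{align*}

    Replacing (LP) by its KKT conditions yields the single-level program with complementarity constraints

    \begin{align*}
    \min_{x,\,y,\,\lambda}\; & F(x,y)\\
    \text{s.t. } & G(x,y)\le 0, \qquad \nabla_y f(x,y)+\nabla_y g(x,y)^{\top}\lambda = 0,\\
    & \lambda\ge 0, \quad -g(x,y)\ge 0, \quad \lambda^{\top} g(x,y)=0 ,
    \end{align*}

    and the CHKS smoothing function

    \[
    \phi_{\varepsilon}(a,b) \;=\; a+b-\sqrt{(a-b)^{2}+4\varepsilon^{2}}
    \]

    satisfies \(\phi_{\varepsilon}(a,b)=0\) if and only if \(a\ge 0\), \(b\ge 0\), \(ab=\varepsilon^{2}\). Each complementarity pair \((\lambda_i,\,-g_i(x,y))\) can therefore be replaced by the smooth equation \(\phi_{\varepsilon}(\lambda_i,-g_i(x,y))=0\), which recovers the exact complementarity condition as \(\varepsilon\to 0\).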