
# Introduction

In modern science and technology, many optimization problems need to be solved in real time, but classical methods cannot deliver real-time solutions, especially for large-scale problems. As a relatively new metaheuristic, particle swarm optimization (PSO) [18], [19] has proved to be a competitive algorithm compared with other methods such as the genetic algorithm (GA) and simulated annealing (SA): it can converge to the optimal solution rapidly [20], [21]. This advantage has attracted researchers to solve the bilevel programming (BLP) problem with PSO; for example, a hierarchical particle swarm optimization for the BLP problem was proposed in [22], [23].
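The standard PSO update, on which such approaches are built, moves each particle $i$ toward its personal best $p_i$ and the swarm's global best $g$ (the specific variants used in [22], [23] may differ in details):

$$
v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \big(p_i^{t} - x_i^{t}\big) + c_2 r_2 \big(g^{t} - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},
$$

where $w$ is the inertia weight, $c_1, c_2$ are acceleration coefficients, and $r_1, r_2$ are uniform random numbers in $[0, 1]$.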
In this paper, for a class of nonlinear bilevel programming (NBLP) problems, the lower-level problem is replaced by its Karush-Kuhn-Tucker (KKT) optimality conditions, which reduces the NBLP problem to a regular nonlinear program with complementarity constraints. This program is then smoothed by the CHKS smoothing function. Finally, a particle swarm optimization approach is proposed to solve the smoothed nonlinear program and obtain an approximate optimal solution of the NBLP problem. The paper is organized as follows: Section 2 introduces the formulation and basic definitions of nonlinear bilevel programming, together with the smoothing method for the nonlinear complementarity problem. Section 3 presents a particle swarm optimization algorithm for solving the smoothed programming problem. Numerical examples are reported in Section 4, and conclusions are given in Section 5.

# Nonlinear bilevel programming problem and smoothing method
We consider the nonlinear bilevel programming (NBLP) problem formulated as follows [24]:

$$
\begin{aligned}
(\text{UP}):\quad & \min_{x}\; F(x, y) \\
& \;\text{s.t.}\;\; G(x, y) \le 0, \\
& \text{where } y \text{ solves} \\
(\text{LP}):\quad & \min_{y}\; f(x, y) \\
& \;\text{s.t.}\;\; g(x, y) \le 0,
\end{aligned} \tag{1}
$$

where $F, f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, $G: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$ and $g: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^q$ are continuously differentiable functions. The term (UP) is called the upper-level problem and the term (LP) is called the lower-level problem; correspondingly, $x \in \mathbb{R}^n$ is the upper-level variable and $y \in \mathbb{R}^m$ is the lower-level variable.
The notations are defined as follows:

- the constraint region of problem (1): $S = \{(x, y) : G(x, y) \le 0,\; g(x, y) \le 0\}$;
- the feasible set of the lower-level problem for each fixed $x$: $S(x) = \{y : g(x, y) \le 0\}$;
- the rational reaction set of the lower-level problem: $M(x) = \{y : y \in \arg\min\{f(x, v) : v \in S(x)\}\}$;
- the inducible region: $IR = \{(x, y) \in S : y \in M(x)\}$.
Here, in order to ensure that problem (1) is well posed, $S$ is assumed to be nonempty and compact, and for each decision taken by the upper-level decision maker, the lower-level decision maker is assumed to have some room to respond, i.e. $S(x) \neq \emptyset$. Replacing the lower-level problem by its KKT conditions, we can then reduce the NBLP problem to the one-level programming problem:

$$
\begin{aligned}
\min_{x,\,y,\,\lambda}\;\; & F(x, y) \\
\text{s.t.}\;\; & G(x, y) \le 0, \\
& \nabla_y f(x, y) + \lambda^{T} \nabla_y g(x, y) = 0, \\
& \lambda_i \ge 0,\;\; -g_i(x, y) \ge 0,\;\; \lambda_i\, g_i(x, y) = 0,\quad i = 1, \dots, q,
\end{aligned} \tag{2}
$$

where $\lambda \in \mathbb{R}^q$ are the Lagrange multipliers of the lower-level problem.
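As an illustration of this reduction (the instance is hypothetical, chosen only for concreteness), take the lower-level problem $\min_y y^2$ subject to $g(x, y) = x - y \le 0$. Its KKT conditions are

$$
2y - \lambda = 0, \qquad \lambda \ge 0, \qquad y - x \ge 0, \qquad \lambda\,(x - y) = 0,
$$

so the lower-level problem is replaced by these conditions, and $y$ and $\lambda$ become decision variables alongside $x$ in a single-level program.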
Its optimal solution can be defined as follows: a point $(x^*, y^*, \lambda^*)$ feasible for problem (2) is optimal if $F(x^*, y^*) \le F(x, y)$ for every feasible point $(x, y, \lambda)$ of problem (2).
Consider the nonlinear complementarity problem (NCP for short) contained in problem (2):

$$
\lambda_i \ge 0, \qquad -g_i(x, y) \ge 0, \qquad \lambda_i\, g_i(x, y) = 0, \qquad i = 1, \dots, q. \tag{3}
$$
We note that problem (3) is non-smooth; hence standard methods are not guaranteed to solve such a problem [25], [26]. In the last few decades, various methods have been developed for solving the NCP, among which smoothing-type algorithms are some of the most effective. Fukushima and Pang [27] proposed a smoothing method to solve the mathematical programming problem with complementarity constraints. Chen and Ma [28] proposed a regularization smoothing Newton method for solving the nonlinear complementarity problem with a $P_0$-function, based on the Fischer-Burmeister function with a perturbed parameter.
Problem (3), with its nonlinear complementarity condition, is non-convex and non-differentiable, and it does not even satisfy the standard regularity assumptions. For this reason, we apply a smoothing method to solve it.
The Chen-Mangasarian smoothing function has the property that it vanishes if and only if $a \ge 0$, $b \ge 0$, $ab = 0$, but it is non-differentiable at the origin. In contrast, the CHKS (Chen-Harker-Kanzow-Smale) smoothing function

$$
\phi_{\varepsilon}(a, b) = a + b - \sqrt{(a - b)^2 + 4\varepsilon^2}
$$

has the property that $\phi_{\varepsilon}(a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$, $ab = \varepsilon^2$ for $\varepsilon \ge 0$, and the function is smooth with respect to $(a, b)$ for $\varepsilon \neq 0$. Hence, by applying the CHKS smoothing function $\phi_{\varepsilon}$, problem (3) can be approximated by:

$$
\lambda_i - g_i(x, y) - \sqrt{\big(\lambda_i + g_i(x, y)\big)^2 + 4\varepsilon^2} = 0, \qquad i = 1, \dots, q. \tag{4}
$$

Hence, problem (2) can be transformed as follows:

$$
\begin{aligned}
\min_{x,\,y,\,\lambda}\;\; & F(x, y) \\
\text{s.t.}\;\; & G(x, y) \le 0, \\
& \nabla_y f(x, y) + \lambda^{T} \nabla_y g(x, y) = 0, \\
& \lambda_i - g_i(x, y) - \sqrt{\big(\lambda_i + g_i(x, y)\big)^2 + 4\varepsilon^2} = 0,\quad i = 1, \dots, q.
\end{aligned} \tag{5}
$$
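A short numerical check makes the behaviour of the CHKS function concrete. The sketch below (a minimal illustration, not code from the paper) evaluates $\phi_{\varepsilon}(a, b) = a + b - \sqrt{(a-b)^2 + 4\varepsilon^2}$ and verifies that it vanishes exactly on pairs with $a \ge 0$, $b \ge 0$, $ab = \varepsilon^2$, recovering the complementarity condition as $\varepsilon \to 0$:

```python
import math

def chks(a, b, eps):
    """CHKS smoothing function:
    phi(a, b, eps) = a + b - sqrt((a - b)**2 + 4*eps**2).
    For eps = 0 it reduces to 2*min(a, b); for eps != 0 it is
    smooth in (a, b), and phi = 0 iff a >= 0, b >= 0, a*b = eps**2.
    """
    return a + b - math.sqrt((a - b) ** 2 + 4.0 * eps ** 2)

# phi = 0 forces a*b = eps**2 with a, b >= 0:
# pick a = 2.0 and eps = 0.5, so b = eps**2 / a = 0.125.
print(chks(2.0, 0.125, 0.5))   # ~0.0

# eps = 0 recovers the exact complementarity condition a*b = 0:
print(chks(3.0, 0.0, 0.0))     # 0.0   (a >= 0, b = 0)
print(chks(1.0, 1.0, 0.0))     # 2.0   (nonzero, since a*b = 1 > 0)
```

For the complementarity pairs in problem (3), the function is applied with $a = \lambda_i$ and $b = -g_i(x, y)$, which yields exactly the equations in (4).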
The smoothing factor $\varepsilon$ is treated as a variable rather than a parameter in problems (4) and (5), which avoids the difficulty that the regularity assumptions fail due to the complementarity condition in problem (2). For a given $\varepsilon > 0$ and a fixed $x$, to obtain the optimal solution $y$ of the lower-level problem one only needs to solve problem (4). If $(x, y)$ satisfies all constraints of the lower-level problem, then $f$ attains its optimal value at $y$. Thus, what we need to do is to solve the following single-level programming problem to obtain the optimal solution $(x^*, y^*, \lambda^*, \varepsilon^*)$; $(x^*, y^*)$ is then the approximate optimal solution of problem (1):

$$
\begin{aligned}
\min_{x,\,y,\,\lambda,\,\varepsilon}\;\; & F(x, y) \\
\text{s.t.}\;\; & G(x, y) \le 0, \\
& \nabla_y f(x, y) + \lambda^{T} \nabla_y g(x, y) = 0, \\
& \lambda_i - g_i(x, y) - \sqrt{\big(\lambda_i + g_i(x, y)\big)^2 + 4\varepsilon^2} = 0,\quad i = 1, \dots, q.
\end{aligned} \tag{6}
$$
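To illustrate the overall approach end to end, the sketch below applies a plain global-best PSO with a quadratic penalty to a hand-built toy instance of the smoothed single-level problem. Everything here is an assumption for illustration: the instance (upper level $\min (x-1)^2 + (y-1)^2$, lower level $\min_y y^2$ s.t. $x - y \le 0$), the fixed smoothing factor, the penalty weight, and the PSO parameters are not taken from the paper.

```python
import math
import random

EPS = 1e-3   # smoothing factor (fixed here for simplicity; an assumption)
RHO = 100.0  # quadratic penalty weight (an assumption of this sketch)

def chks(a, b, eps):
    # CHKS smoothing function.
    return a + b - math.sqrt((a - b) ** 2 + 4.0 * eps ** 2)

def penalized(z):
    """Penalized objective for the toy smoothed single-level problem.
    Lower-level KKT stationarity: 2y - lam = 0; smoothed complementarity
    of the pair (lam, y - x) via the CHKS function."""
    x, y, lam = z
    F = (x - 1.0) ** 2 + (y - 1.0) ** 2
    h1 = 2.0 * y - lam
    h2 = chks(lam, y - x, EPS)
    return F + RHO * (h1 ** 2 + h2 ** 2)

def pso(obj, dim, lo, hi, n_particles=40, iters=300,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO (a generic sketch, not the paper's exact variant)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = obj(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(penalized, dim=3, lo=-2.0, hi=3.0)
x, y, lam = best
print(x, y, lam, val)  # expect roughly x ~ 1, y ~ 1, lam ~ 2
```

For this instance the lower level yields $y = \max(x, 0)$, so the bilevel optimum is $x^* = y^* = 1$ with $\lambda^* = 2$, and the swarm should settle near that point.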