Hypervolume Indicator Gradient Ascent Multi-objective Optimization
Abstract
Many evolutionary algorithms are designed to solve black-box multi-objective optimization problems (MOPs) using stochastic operators, where neither the analytical form nor the gradient of the problem is accessible. In some real-world applications, e.g. surrogate-based global optimization, the gradient of the objective functions is available. In this case, it is natural to use a gradient-based multi-objective optimization algorithm to obtain fast convergence and stable solutions. In a relatively recent approach, the hypervolume indicator gradient in the decision space was derived, which paves the way for a method that maximizes the hypervolume indicator of a fixed-size population. In this paper, several mechanisms originating in the field of evolutionary computation are proposed to make this gradient ascent method applicable. Specifically, the well-known non-dominated sorting is used to help steer the dominated points. The principle of cumulative step-size control, originally proposed for evolution strategies, is adapted to control the step size dynamically. The resulting algorithm is called Hypervolume Indicator Gradient Ascent Multi-objective Optimization (HIGA-MO). The proposed algorithm is tested on the ZDT problems, and its performance is compared to other methods of moving the dominated points as well as to commonly used evolutionary multi-objective optimization algorithms.
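For a bi-objective front sorted by the first objective, the hypervolume derivatives with respect to the objective values have a simple closed form, and the decision-space gradient then follows from the chain rule. Below is a minimal NumPy sketch of this idea on a hypothetical two-sphere minimization problem (not the ZDT problems used in the paper). Non-dominated sorting is used so that every front, including dominated ones, receives its own hypervolume gradient; the cumulative step-size control described in the abstract is replaced here by a fixed step size. All function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical bi-objective minimization problem (not from the paper):
#   f1(x) = ||x||^2,  f2(x) = ||x - 1||^2.
def objectives(x):
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def objective_gradients(x):
    # Rows: grad f1, grad f2 with respect to the decision vector x.
    return np.array([2.0 * x, 2.0 * (x - 1.0)])

def nondominated_fronts(F):
    """Naive O(mu^2) non-dominated sorting (minimization)."""
    remaining = set(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def hv_gradient_in_objective_space(F_front, ref):
    """d(hypervolume)/d(f1, f2) for each point of one bi-objective front."""
    order = np.argsort(F_front[:, 0])           # sort by f1 ascending
    f1, f2 = F_front[order, 0], F_front[order, 1]
    left_f2 = np.concatenate(([ref[1]], f2[:-1]))
    right_f1 = np.concatenate((f1[1:], [ref[0]]))
    dF = np.zeros_like(F_front)
    dF[order, 0] = f2 - left_f2                  # dH/df1 (negative inside the box)
    dF[order, 1] = f1 - right_f1                 # dH/df2 (negative inside the box)
    return dF

rng = np.random.default_rng(0)
mu, dim, sigma = 10, 2, 0.01                     # fixed step size in this sketch
ref = np.array([11.0, 11.0])                     # reference point dominating all F
X = rng.uniform(-1.0, 2.0, size=(mu, dim))

for _ in range(300):
    F = np.array([objectives(x) for x in X])
    # Each front gets its own hypervolume gradient, so dominated points are
    # steered as well; the paper's cumulative step-size control is omitted.
    for front in nondominated_fronts(F):
        dF = hv_gradient_in_objective_space(F[front], ref)
        for k, i in enumerate(front):
            G = objective_gradients(X[i])        # shape (2, dim)
            step = dF[k, 0] * G[0] + dF[k, 1] * G[1]
            X[i] = X[i] + sigma * step           # ascent on the hypervolume

print(np.round(np.array([objectives(x) for x in X]), 3))
```

In this sketch the reference point is chosen so that it dominates every reachable objective vector, since the rectangle-slicing derivatives above are only valid for points inside the reference box.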
