Computation of the output of a function with fuzzy inputs based on a low-rank tensor approximation
Abstract
We apply a derivative-free optimization method based on novel low-rank tensor techniques to the problem of propagating fuzzy uncertainty through a continuous real-valued function. Following Zadeh's extension principle, this problem can be reformulated as a sequence of optimization problems over nested search spaces. The optimization method we use builds a low-rank tensor approximation of the function sampled on a grid and then searches for the minimal and maximal entries of this low-rank tensor. In contrast to classical fuzzy uncertainty propagation methods, such as the vertex method and the transformation method, the proposed method does not exhibit an inherent exponential scaling as the dimension of the search space increases. Of course, no derivative-free optimization algorithm can achieve sub-exponential scaling in the dimension for all possible continuous functions. The algorithm presented here, however, can exploit a specific type of structure and regularity (beyond continuity) that is often present in real-world optimization problems. We illustrate this with high-dimensional numerical examples in which the presented method clearly outperforms several established derivative-free optimization codes.
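The extension-principle reformulation described above can be illustrated with a minimal brute-force sketch: each fuzzy input is discretized into alpha-cuts, and at every alpha level the output interval is obtained by minimizing and maximizing the function over the resulting box. This is only the classical grid-search baseline, not the paper's low-rank tensor method; the triangular fuzzy numbers and helper names below are illustrative assumptions.

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def propagate(f, fuzzy_inputs, levels=5, pts=11):
    """Propagate triangular fuzzy inputs through f via Zadeh's extension
    principle: at each alpha level, take the min and max of f over a grid
    on the alpha-cut box. This brute-force search scales exponentially in
    the number of inputs; the low-rank tensor method replaces exactly this
    step with a search over a low-rank approximation of the sampled grid."""
    result = []
    for alpha in np.linspace(0.0, 1.0, levels):
        grids = [np.linspace(*alpha_cut(t, alpha), pts) for t in fuzzy_inputs]
        vals = [f(*x) for x in itertools.product(*grids)]
        result.append((alpha, min(vals), max(vals)))
    return result

# Example: f(x, y) = x * y with fuzzy inputs "about 2" and "about 3".
res = propagate(lambda x, y: x * y, [(1, 2, 3), (2, 3, 4)])
```

Note how the output intervals nest: at alpha = 0 the cut boxes are widest (here giving [2, 12]), and at alpha = 1 they collapse to the modal point (here 2 * 3 = 6), which is the structure the sequence of nested optimization problems exploits.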
