Some First-Order Algorithms for Solving Structured Optimization Problems
Abstract
Many real-world problems can be cast as structured optimization problems, particularly in fields such as electrical engineering, statistics, and computer science; concrete applications include digital signal processing, signal enhancement, natural image processing, matrix processing, medical image processing, and traffic network analysis. Solving these problems efficiently is a key step in such applications, so it is important to develop dedicated, efficient algorithms for these optimization problems, which have rich application backgrounds.
     The structured optimization problems we consider possess, to a greater or lesser extent, favorable structures that can be exploited, such as separability. Standard methods, represented by interior-point methods, can solve these problems and are usually reasonably efficient, but they generally cannot exploit such favorable structures. Moreover, some standard methods, such as interior-point methods, are ill-suited to large-scale problems because of their inherently high computational complexity.
     Therefore, we need to design dedicated algorithms tailored to the specific structure of the problem at hand. In recent years, first-order algorithms, including the forward-backward splitting method, the augmented Lagrangian method, and the proximal point algorithm, have attracted increasing attention from researchers. Because they involve only first-order computations, their per-iteration cost is relatively low compared with the scale of the problem, which makes them better suited to large-scale problems. These algorithms have also been studied extensively, especially from the perspective of nonlinear programming, and this research provides abundant theoretical results and many improved variants. Furthermore, if the favorable structure of the problem is exploited properly, the computational efficiency can be improved substantially.
     However, these algorithms may still have limitations; for example, when the augmented Lagrangian method is applied, its subproblems can remain difficult to solve. One possible remedy is to introduce an inexact minimization criterion, that is, to relax the subproblem so that it becomes easy to solve. In particular, under certain conditions the relaxed subproblem admits a closed-form solution and is therefore easy to solve. Algorithms modified in this way are fairly effective for some problems, but they can still be improved. On the one hand, they may converge slowly. On the other hand, the relaxed subproblem may not have a closed-form solution. Moreover, even when the subproblem does have a closed-form solution, it may involve computationally expensive operations such as eigenvalue decomposition or singular value decomposition, which greatly slow down the computation; we therefore try to avoid such operations in our algorithms.
     In fact, most of the above (first-order) algorithms can be interpreted, and their convergence analyzed, from the viewpoint of variational inequalities, and they can be placed within a unified framework. Under this framework we can analyze the convergence of existing algorithms relatively easily and construct new algorithms according to our needs. In particular, to reduce the computational cost, we can require an algorithm to be "implementable" for a particular class of problems, that is, its subproblems are easy to solve and computationally cheap.
     In this thesis, we consider four types of optimization problems and propose several new algorithms. Numerical experiments show that, in the targeted applications, the computational efficiency of these new algorithms surpasses that of some state-of-the-art algorithms. Since each new algorithm is designed for a specific class of problems, these algorithms, although shown to be effective and practical, are not general-purpose methods, nor are they intended to replace general-purpose algorithms.
In real-world applications, quite a few structured optimization problems arise from fields such as electrical engineering, statistics, and computer science, with applications including digital signal processing, signal enhancement, natural image processing, matrix processing, medical image processing, and traffic network analysis. Solving such problems constitutes a critical step in these applications; hence, it is important to develop efficient algorithms for them. In this thesis, we consider four types of structured optimization problems derived from the above applications.
     These optimization problems have, to a greater or lesser extent, favorable structures, e.g., separability. If we use standard methods, such as interior-point methods, to solve these problems, these structures are simply ignored. Additionally, standard methods such as interior-point methods are not directly amenable to large-scale problems due to their inherently high computational complexity.
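As a point of reference only (a generic model, not one taken from this thesis), the kind of separable structure meant here is typified by a convex program whose objective splits into blocks coupled only through a linear constraint, with generic convex functions f, g and generic data A, B, b:

\[
  \min_{x \in \mathcal{X},\; y \in \mathcal{Y}} \; f(x) + g(y)
  \quad \text{s.t.} \quad Ax + By = b.
\]

A general-purpose solver treats (x, y) jointly, whereas a structure-exploiting method can handle the x- and y-subproblems separately, each of which is often much easier than the coupled problem.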
     As a result, we need to develop dedicated algorithms that can exploit the specific structure of the problem. In recent years, first-order algorithms such as the forward-backward splitting method, the augmented Lagrangian method, and the proximal point algorithm have attracted increasing attention from researchers. The low per-iteration cost of these algorithms makes them better suited to large-scale problems. They have also been studied extensively in the field of nonlinear programming, which provides abundant theory and practical improvements. Additionally, the favorable structure of the problem under consideration can be exploited to simplify the subproblems.
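For illustration, one standard representative of the methods named above (the classical scheme, not the specific algorithms developed in this thesis) is the augmented Lagrangian method for \(\min\{\theta(x) : Ax = b\}\), where \(\theta\), A, b, and the penalty parameter \(\beta > 0\) are generic. It iterates

\[
  \begin{aligned}
    x^{k+1} &= \arg\min_{x} \; \theta(x) - (\lambda^k)^{\mathsf T}(Ax - b) + \tfrac{\beta}{2}\,\|Ax - b\|^2, \\
    \lambda^{k+1} &= \lambda^k - \beta\,(Ax^{k+1} - b),
  \end{aligned}
\]

so each iteration is a primal minimization followed by an explicit multiplier update, and the per-iteration cost is governed by how cheaply the x-subproblem can be solved.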
     However, these algorithms can still have limitations; e.g., their subproblems may be difficult or expensive to solve. One remedy is to allow inexact minimization, which amounts to solving a relaxed subproblem in each iteration, and we require the relaxed subproblem to be implementable, e.g., simple enough to have a closed-form solution. Although algorithms based on this idea work well for a few types of problems, it is still possible to improve them further. On the one hand, algorithms based on inexact minimization can suffer from slow convergence. On the other hand, the relaxed subproblem may not always have a closed-form solution.
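A standard example of a relaxed subproblem with a closed-form solution (given here purely for illustration, with generic point z and step size \(\tau > 0\)) arises when the nonsmooth term is the \(\ell_1\) norm: once the remaining terms are linearized, the subproblem reduces to a proximal step,

\[
  \min_{x}\; \|x\|_1 + \tfrac{1}{2\tau}\,\|x - z\|^2
  \quad\Longrightarrow\quad
  x_i = \operatorname{sign}(z_i)\,\max\{|z_i| - \tau,\, 0\},
\]

i.e., componentwise soft thresholding at cost O(n). By contrast, the analogous step for a nuclear-norm term requires a singular value decomposition, which is exactly the kind of expensive operation one would like to avoid.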
     In fact, most of the above first-order algorithms can be interpreted from the viewpoint of variational inequalities (VIs), and their convergence analyses fit into a unified framework. Under this framework, we are able to develop new algorithms according to our needs. In particular, to save computational cost, we can require the algorithms to be implementable for the specific problem by taking advantage of the problem structure.
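For concreteness (a standard reformulation from the VI literature, not a result specific to this thesis), the linearly constrained problem \(\min\{\theta(x) : Ax = b,\ x \in \mathcal{X}\}\) can be characterized as a mixed variational inequality: find \(w^* = (x^*, \lambda^*) \in \mathcal{W} = \mathcal{X} \times \mathbb{R}^m\) such that

\[
  \theta(x) - \theta(x^*) + (w - w^*)^{\mathsf T} F(w^*) \ge 0
  \quad \forall\, w \in \mathcal{W},
  \qquad
  F(w) = \begin{pmatrix} -A^{\mathsf T}\lambda \\ Ax - b \end{pmatrix}.
\]

Many splitting and proximal schemes can then be analyzed as prediction-correction (contraction) steps for such a VI, which is the sense in which their convergence proofs share one framework.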
     In this thesis, we propose several such algorithms for four types of optimization problems and demonstrate their efficiency in numerical experiments by comparing them with some state-of-the-art algorithms. Each of these algorithms targets only one type of problem; hence, they are not general-purpose methods and are not designed to replace the standard methods. However, for the specific problems they are designed for, these dedicated algorithms prove to be efficient and practical.
