Theory and Algorithms for Inverse Problems with Sparsity Constraints
Abstract
In many areas of production and daily life, one needs to invert observed data in order to recover or extract the underlying information. Such problems fall into the category of inverse problems and are typically severely ill-posed. Classical Tikhonov regularization in Hilbert spaces, which rests on a complete theory and mature algorithms, is the standard tool for treating them. To better capture the characteristic features of the variables being reconstructed, the past decade or so has seen the development of regularization theory and algorithms in Banach spaces, with applications in signal processing, image processing, medical imaging, non-destructive testing, parameter identification in partial differential equations, machine learning, statistical analysis, finance, bioinformatics, and other fields. Within the framework of Banach-space regularization, and with sparse regularization as its main thread, this thesis studies modeling and computational problems arising in compressed sensing, variable selection, and the identification of Robin coefficients in elliptic partial differential equations. The thesis is organized as follows.
     Chapter 1 summarizes the existing work in the literature related to our research and briefly introduces our motivation.
     Chapter 2 proposes a primal-dual active set algorithm with continuation (PDASC) for the ℓ1 and a class of non-convex sparse regularization models arising in compressed sensing and variable selection. For the ℓ1-regularized model we prove, under reasonable conditions, the local one-step convergence of the primal-dual active set (PDAS) algorithm, as well as the global convergence of PDASC. For the ℓ0-regularized model we prove, under mild assumptions, the uniqueness of the global minimizer and the global convergence of PDASC. We also discuss a posteriori rules for choosing the regularization parameter based on the Bayesian information criterion (BIC), the discrepancy principle, and a modified discrepancy principle.
     Chapter 3 considers the inversion of the Robin coefficient in elliptic partial differential equations. Taking full account of the physical background and practical significance of the problem, we propose a variational regularization model with a sparsity constraint, and for this non-convex, non-smooth model we establish the existence and stability of solutions, an a priori choice rule for the regularization parameter, and the convergence of the finite element discretization. On the algorithmic side, we propose a simple yet effective lagged Newton method and adopt the classical discrepancy principle as the stopping rule.
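The continuation strategy and discrepancy-principle stopping rule described for Chapter 2 can be illustrated with a minimal sketch for the ℓ1-regularized least-squares model min_x ½‖Ax − y‖² + λ‖x‖₁. The thesis's PDASC algorithm itself is not reproduced here: plain iterative soft thresholding (ISTA) stands in as the inner solver for each fixed λ, combined with a geometrically decreasing λ-path that warm-starts each solve from the previous solution and stops once ‖Ax − y‖ ≤ δ, the noise level. All function names and parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, x0, n_iter=200):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def l1_continuation(A, y, delta, rho=0.7):
    """Decrease lam geometrically along a path, warm-starting each solve;
    stop when the discrepancy principle ||Ax - y|| <= delta is met."""
    lam = np.max(np.abs(A.T @ y))          # at this lam, x = 0 is optimal
    x = np.zeros(A.shape[1])
    while np.linalg.norm(A @ x - y) > delta:
        lam *= rho                          # continuation step on lam
        x = ista(A, y, lam, x)              # warm start from previous solution
    return x, lam

# Small compressed-sensing style demo: recover a 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 1.5]
noise = 0.01 * rng.standard_normal(80)
y = A @ x_true + noise
x_hat, lam = l1_continuation(A, y, delta=1.1 * np.linalg.norm(noise))
```

Structurally, replacing the inner ISTA call with a primal-dual active set step would turn this path-following loop into a PDASC-type scheme; the outer continuation loop and the discrepancy-principle exit test are the parts the sketch is meant to convey.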
引文
[1]Adams R. and Fournier J., Sobolev spaces[M]. Academic press,2003.
    [2]Akaike H., A new look at the statistical model identification [J]. Automatic Control, IEEE Transactions on,1974,19(6):716-723.
    [3]Alliney S. and Ruzinsky S. A., An algorithm for the minimization of mixed 1 1 and 12 norms with application to Bayesian estimation[J]. Signal Processing, IEEE Transactions on,1994,42(3):618-627.
    [4]Attouch H., Bolte J. and Svaiter B. F., Convergence of descent methods for semi-algebraic and tame problems:proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods [J]. Mathematical Program-ming,2013,137(1-2):91-129.
    [5]Bach F, Jenatton R, Mairal J, et al. Optimization with sparsity-inducing penalties[J]. Found. Trend. Mach. Learn.,4(1):1-106,2012.
    [6]Bai Z. D. and Silverstein J. W., Spectral analysis of large dimensional random matrices[M]. Springer,2009.
    [7]Bajwa W. U., Haupt J., Sayeed A. M., et al., Compressed channel sensing:A new approach to estimating sparse multipath channels [J]. Proceedings of the IEEE,2010,98(6):1058-1076.
    [8]Baraniuk R. G., Single-pixel imaging via compressive sampling[J]. IEEE Sig-nal Processing Magazine,2008.
    [9]Barron A., BirgeL. and Massart P., Risk bounds for model selection via pe-nalization [J]. Probability theory and related fields,1999,113(3):301-413.
    [10]Barron A., Cohen A., Dahmen W., et al., Approximation and learning by greedy algorithms [J]. The annals of statistics,2008:64-94.
    [11]Bauschke H. H. and Combettes P. L., Convex analysis and monotone operator theory in Hilbert spaces[M]. Springer,2011.
    [12]Belgacem F. B., Why is the Cauchy problem severely ill-posed?[J]. Inverse Problems,2007,23(2):823.
    [13]Beck A. and Teboulle M., A fast iterative shrinkage-thresholding algorithm for linear inverse problems[J]. SIAM Journal on Imaging Sciences,2009,2(1): 183-202.
    [14]Becker S., Bobin J. and Candes E. J., NESTA:a fast and accurate first-order method for sparse recovery[J]. SIAM Journal on Imaging Sciences,2011,4(1): 1-39.
    [15]Becker S., Candes E. J. and Grant M. C., Templates for convex cone problems with applications to sparse signal recovery[J]. Mathematical Programming Computation,2011,3(3):165-218.
    [16]Bertsekas D. P., Nedi A. and Ozdaglar A. E., Convex analysis and optimiza-tion[J].2003.
    [17]Bickel P. J., Ritov Y. and Tsybakov A. B., Simultaneous analysis of Lasso and Dantzig selector[J]. The Annals of Statistics,2009:1705-1732.
    [18]Blumensath T., Accelerated iterative hard thresholding[J]. Signal Processing, 2012,92(3):752-756.
    [19]Blumensath T. and Davies M. E., Iterative hard thresholding for compressed sensing[J]. Applied and Computational Harmonic Analysis,2009,27(3):265-274.
    [20]Blumensath T. and Davies M. E., Iterative thresholding for sparse approxima-tions[J]. Journal of Fourier Analysis and Applications,2008,14(5-6):629-654.
    [21]Blumensath T., Davies M E. Stagewise weak gradient pursuits[J]. Signal Pro-cessing, IEEE Transactions on,2009,57(11):4333-4346.
    [22]Boyd S. P. and Vandenberghe L., Convex optimization[M]. Cambridge uni-versity press,2004.
    [23]Boyd S.P., Parikh N., Chu E., et al., Distributed optimization and statistical learning via the alternating direction method of multipliers [J]. Foundations and Trends in Machine Learning,2011,3(1):1-122.
    [24]Breiman L., Heuristics of instability and stabilization in model selection[J]. The annals of statistics,1996,24(6):2350-2383.
    [25]Bredies K. and Lorenz D. A., Minimization of non-smooth, non-convex func-tionals by iterative thresholding[J]. preprint,2009.
    [26]Breheny P. and Huang J., Coordinate descent algorithms for nonconvex penal-ized regression, with applications to biological feature selection[J]. The annals of applied statistics,2011,5(1):232.
    [27]Bruckstein A. M., Donoho D. L. and Elad M., From sparse solutions of systems of equations to sparse modeling of signals and images[J]. SIAM review,2009, 51(1):34-81.
    [28]Biiehlmann P. Boosting for high-dimensional linear models[J]. The Annals of Statistics,2006:559-583.
    [29]Burger M. and Osher S., A guide to the TV zoo[M]//Level Set and PDE Based Reconstruction Methods in Imaging. Springer International Publishing,2013: 1-70.
    [30]Cai. J. F., Chan R., Shen L. X., et al., Convergence analysis of tight framelet approach for missing data recovery [J]. Advances in Computational Mathe-matics,2009,31(1-3):87-113.
    [31]Cai. J. F., Chan R., Shen L. X., et al., Restoration of chopped and nodded images by framelets[J]. SIAM Journal on Scientific Computing,2008,30(3): 1205-1227.
    [32]Cai J. F., Chan R., Shen Z. W., A framelet-based image inpainting algorith-m[J]. Applied and Computational Harmonic Analysis,2008,24(2):131-149.
    [33]Cai J. F., Chan R., Shen Z. W., Simultaneous cartoon and texture inpaint-ing[J]. Inverse Probl. Imaging,2010,4(3):379-395.
    [34]Cai J. F., Dong B., Osher S., et al., Image restoration: Total variation, wavelet frames, and beyond[J]. Journal of the American Mathematical Society,2012, 25(4):1033-1089.
    [35]Cai J. F., Ji H., Liu C., et al., Framelet-based blind motion deblurring from a single image[J]. Image Processing, IEEE Transactions on,2012,21(2):562-572.
    [36]Cai J. F., Osher S. and Shen Z. W., Split Bregman methods and frame based image restoration[J]. Multiscale modeling and simulation,2009,8(2):337-369.
    [37]Cai T. T. and Wang L., Orthogonal matching pursuit for sparse signal re-covery with noise[J]. Information Theory, IEEE Transactions on,2011,57(7): 4680-4688.
    [38]Caselles V, Chambolle A, Novaga M. Regularity for solutions of the total variation denoising problem[J]. Revista Matematica Iberoamericana,2011, 27(1):233-252.
    [39]Candes E. J. and Plan Y., Near-ideal model selection by e1 minimization[J]. The Annals of Statistics,2009,37(5A):2145-2177.
    [40]Candes E. J. and Tao T.,Decoding by linear programming [J]. Information Theory, IEEE Transactions on,2005,51(12):4203-4215.
    [41]Candes E. J. and Tao T., Near-optimal signal recovery from random projec-tions:Universal encoding strategies?[J]. Information Theory, IEEE Transac-tions on,2006,52(12):5406-5425.
    [42]Candes E. J. and Tao T., The Dantzig selector:Statistical estimation when p is much larger than n (with disscussion)[J]. The Annals of Statistics,2007: 2313-2351.
    [43]Candes E. J., Romberg J. and Tao T. Robust uncertainty principles:Exact signal reconstruction from highly incomplete frequency information[J]. Infor-mation Theory, IEEE Transactions on,2006,52(2):489-509.
    [44]Candes E. J., Romberg J. and Tao T. Stable signal recovery from incom-plete and inaccurate measurements[J]. Communications on pure and applied mathematics,2006,59(8):1207-1223.
    [45]Chaabane S., Elhechmi C. and Jaoua M., A stable recovery method for the Robin inverse problem[J]. Mathematics and Computers in Simulation,2004, 66(4):367-383.
    [46]Chaabane S., Feki I. and Mars N., Numerical reconstruction of a piecewise constant Robin parameter in the two-or three-dimensional case[J]. Inverse problems,2012,28(6):065016.
    [47]Chaabane S. Fellah I. Jaoua M, et al., Logarithmic stability estimates for a Robin coefficient in two-dimensional Laplace inverse, problems [J]. Inverse Problems,2004,20(1):47.
    [48]Chan R., Chan T. F., Shen L. X., et al., Wavelet algorithms for high-resolution image reconstruction[J]. SIAM Journal on Scientific Computing,2003,24(4): 1408-1432.
    [49]Chan R., Shen Z. W., Xia T., A framelet algorithm for enhancing video stills[J]. Applied and Computational Harmonic Analysis,2007,23(2):153-170.
    [50]Chan T. F and Shen J. J., Image processing and analysis:variational, PDE, wavelet, and stochastic methods[M]. Siam,2005.
    [51]Chartrand R. and Staneva V., Restricted isometry properties and nonconvex compressive sensing[J]. Inverse Problems,2008,24(3):035020.
    [52]Chartrand R. and Yin W. Iteratively reweighted algorithms for compres-sive sensing[C]//Acoustics, speech and signal processing,2008. ICASSP 2008. IEEE international conference on. IEEE,2008:3869-3872.
    [53]Chen J. H. and Chen Z. H., Extended Bayesian information criteria for model selection with large model spaces[J]. Biometrika,2008,95(3):759-771.
    [54]Chen X. J., Nashed Z. and Qi L. Q., Smoothing methods and semismooth meth-ods for nondifferentiable operator equations, SIAM J. Numer. Anal., 2000,38:1200-1216.
    [55]Chen S. S., Donoho D. L. and Saunders M. A., Atomic decomposition by basis pursuit[J]. SIAM journal on scientific computing,1998,20(1):33-61.
    [56]Chen X. J., Smoothing methods for nonsmooth, nonconvex minimization[J]. Mathematical programming,2012,134(1):71-99.
    [57]Chen X. J., Niu L. F. and Yuan Y. X., Optimality Conditions and a Smooth-ing Trust Region Newton Method for NonLipschitz Optimization[J]. SIAM Journal on Optimization,2013,23(3):1528-1552.
    [58]Ciarlet P., The finite element method for elliptic problems[M]. Elsevier,1978.
    [59]Clarke F. H., Optimization and nonsmooth analysis[M]. Siam,1990.
    [60]Christensen O., An introduction to frames and Riesz bases[M]. Springer,2003.
    [61]Combettes P. L. and Pesquet. J., Proximal splitting methods in signal pro-cessing, pages 185-212. Springer, Berlin,2011.
    [62]Combettes P. L. and Wajs V. R., Signal recovery by proximal forward-backward splitting[J]. Multiscale Modeling and Simulation,2005,4(4):1168-1200.
    [63]Cohen A., Dahmen W. and DeVore R., Compressed sensing and best k-term approximation[J]. Journal of the American mathematical society,2009,22(1): 211-231.
    [64]Dai W. and Milenkovic O., Subspace pursuit for compressive sensing signal reconstruction[J]. Information Theory, IEEE Transactions on,2009,55(5): 2230-2249.
    [65]Davenport M. A. and Wakin M. B., Analysis of orthogonal matching pursuit using the restricted isometry property[J]. Information Theory, IEEE Trans-actions on,2010,56(9):4395-4401.
    [66]Daubechies I., Defrise M. and De Mol C., An iterative thresholding algorithm for linear inverse problems with a sparsity constraint [J]. Communications on pure and applied mathematics,2004,57(11):1413-1457.
    [67]Daubechies I., DeVore R, Fornasier M., et al., Iteratively reweighted least squares minimization for sparse recovery [J]. Communications on Pure and Applied Mathematics,2010,63(1):1-38.
    [68]De Mol C., De Vito E. and Rosasco L., Elastic-net regularization in learning theory[J]. Journal of Complexity,2009,25(2):201-230.
    [69]Dempster, M. A. H. (Ed.), Risk management: value at risk and beyond[M]. Cambridge University Press,2002.
    [70]Dong B. and Shen Z. W., MRA based wavelet frames and applications [J]. IAS Lecture Notes Series, Summer Program on "The Mathematics of Image Processing", Park City Mathematics Institute,2010.
    [71]Dong B. and Shen Z. MRA-based wavelet frames and applications:image seg-mentation and surface reconstruction[C]//SPIE Defense, Security, and Sens-ing. International Society for Optics and Photonics,2012:840102-840102-16.
    [72]Dong B. and Zhang Y., An Efficient Algorithm for 0 Minimization in Wavelet Frame Based Image Restoration [J]. Journal of Scientific Computing,2013, 54(2-3):350-368.
    [73]Donoho D. L., High-dimensional data analysis:The curses and blessings of dimensionality [J]. AMS Math Challenges Lecture,2000:1-32.
    [74]Donoho D. L., Compressed sensing[J]. Information Theory, IEEE Transac-tions on,2006,52(4):1289-1306.
    [75]Donoho D. L., Elad M. and Temlyakov V. N., Stable recovery of sparse overcomplete representations in the presence of noise [J]. Information Theory, IEEE Transactions on,2006,52(1):6-18.
    [76]Donoho D. L. and Huo X. M., Uncertainty principles and ideal atomic decom-position[J]. Information Theory, IEEE Transactions on,2001,47(7):2845-2862.
    [77]Donoho D. L. and Johnstone I. M., Adapting to unknown smoothness via wavelet shrinkage [J]. Journal of the american statistical association,1995, 90(432):1200-1224.
    [78]Donoho D. L. and Tanner J., Counting faces of randomly projected polytopes when the projection radically lowers dimension[J]. Journal of the American Mathematical Society,2009,22(1):1-53.
    [79]Donoho D. L., Tsaig Y., Drori I., et al., Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit [J]. In-formation Theory, IEEE Transactions on,2012,58(2):1094-1121.
    [80]Donoho D. L. and Tanner J., Counting the faces of randomly-projected hy-percubes and orthants, with applications [J]. Discrete and computational ge-ometry,2010,43(3):522-541.
    [81]Donoho D. L. and Tsaig Y., Fast solution of-norm minimization problems when the solution may be sparse[J]. Information Theory, IEEE Transactions on,2008,54(11):4789-4812.
    [82]Dossal C, A necessary and sufficient condition for exact sparse recovery by e1-minimization[J]. Comptes Rendus Mathematique,2012,350(1):117-120.
    [83]Efron B., Hastie T., Johnstone I., et al., Least angle regression[J]. The Annals of statistics (with discussion),2004,32(2):407-499.
    [84]Ender J. H. G., On compressive sensing applied to radar[J]. Signal Processing, 2010,90(5):1402-1414.
    [85]Engl H. W., Hanke M. and Neubauer A., Regularization of inverse problem-s[M]. Springer,1996.
    [86]Elad M., Sparse and redundant representations:from theory to applications in signal and image processing[M]. Springer,2010.
    [87]Elad M., Matalon B. and Zibulevsky M., Coordinate and subspace optimiza-tion methods for linear least squares with non-quadratic regularization[J]. Applied and Computational Harmonic Analysis,2007,23(3):346-367.
    [88]Esser E., Zhang X. Q. and Chan T. F., A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science[J]. SIAM Journal on Imaging Sciences,2010,3(4):1015-1046.
    [89]Fan J.Q., Han F and Liu H., Challenges of Big Data Analysis[J]. arXiv preprint arXiv:1308.1479,2013.
    [90]Fan J. Q. and Li R. Z., Variable selection via nonconcave penalized likelihood and its oracle properties[J]. Journal of the American Statistical Association, 2001,96(456):1348-1360.
    [91]Fan J. Q. and Lv J. C, Sure independence screening for ultrahigh dimensional feature space[J]. Journal of the Royal Statistical Society:Series B (Statistical Methodology),2008,70(5):849-911.
    [92]Fan J. Q. and Peng H., Nonconcave penalized likelihood with a diverging number of parameters [J]. The Annals of Statistics,2004,32(3):928-961.
    [93]Fan Q. B., Jiao Y. L. and Lu X. L., A Primal Dual Active Set Algorithm with Continuation for Compressed Sensing[J]. preprint arXiv:1312.7039,2013.
    [94]Fang W. F. and Cumberbatch E., Inverse problems for metal oxide semicon-ductor field-effect transistor contact resistivity [J]. SIAM Journal on Applied Mathematics,1992,52(3):699-709.
    [95]Fang W. F. and Lu M. Y., A fast collocation method for an inverse boundary value problem[J]. International journal for numerical methods in engineering, 2004,59(12):1563-1585.
    [96]Fasino D. and Inglese G., An inverse Robin problem for Laplace's equation: theoretical results and numerical methods[J]. Inverse problems,1999,15(1): 41.
    [97]Figueiredo M. A. T., Nowak R. D. and Wright S. J., Gradient projection for sparse reconstruction:Application to compressed sensing and other inverse problems [J]. Selected Topics in Signal Processing, IEEE Journal of,2007,1(4): 586-597.
    [98]Foucart S, and Lai M. J., Sparsest solutions of underdetermined linear systems via eq-minimization for 0< q(?)1 [J]. Applied and Computational Harmonic Analysis,2009,26(3):395-407.
    [99]Foucart S., Hard thresholding pursuit: an algorithm for compressive sens-ing[J]. SIAM Journal on Numerical Analysis,2011,49(6):2543-2563.
    [100]Frank L. L. E. and Friedman J. H., A statistical view of some chemometrics regression tools[J]. Technometrics,1993,35(2):109-135.
    [101]Friedman J., Hastie T., Hofling H., et al., Pathwise coordinate optimiza-tion[J]. The Annals of Applied Statistics,2007,1(2):302-332.
    [102]Fu W. J., Penalized regressions:the bridge versus the lasso[J]. Journal of computational and graphical statistics, 1998,7(3):397-416.
    [103]Gasso G., Rakotomamonjy A. and Canu S. Recovering sparse signals with a certain family of nonconvex penalties and DC programming[J]. Signal Pro-cessing, IEEE Transactions on,2009,57(12):4686-4698.
    [104]Geman S. and Geman D., Stochastic Relaxation, Gibbs Distributions, and The Bayesian Restoration of Images, IEEE Trans. Pattern Anal. Mach. Intell., 1984,6(6):721-741.
    [105]Griesse R. and Lorenz D. A., A semismooth Newton method for Tikhonov functionals with sparsity constraints[J]. Inverse Problems,2008,24(3): 035007.
    [106]Goldstein T. and Osher S., The split Bregman method for Ll-regularized problems[J]. SIAM Journal on Imaging Sciences,2009,2(2):323-343.
    [107]Golub G. H. and Van Loan C. F., Matrix computations[M]. JHU Press,2012.
    [108]Grasmair M., Scherzer O. and Haltmeier M., Necessary and sufficient condi-tions for linear convergence of -regularization[J]. Communications on Pure and Applied Mathematics,2011,64(2):161-182.
    [109]Haldar J. P., Hernando D. and Liang Z. P., Compressed-sensing MRI with random encoding[J]. Medical Imaging, IEEE Transactions on,2011,30(4): 893-903.
    [110]Hale E. T., Yin W. T. and Zhang Y., Fixed-point continuation for l1-minimization:Methodology and convergence[J]. SIAM Journal on Optimiza-tion,2008,19(3):1107-1130.
    [111]Hastie T., Tibshirani R. and Friedman J., The elements of statistical learn-ing[M]. New York: Springer,2009.
    [112]Herman M. A., Strohmer T., High-resolution radar via compressed sens-ing[J]. Signal Processing, IEEE Transactions on,2009,57(6):2275-2284.
    [113]He B. S. and Yuan X. M., Convergence analysis of primal-dual algorithms for a saddle-point problem:from contraction perspective [J]. SIAM Journal on Imaging Sciences,2012,5(1):119-149.
    [114]Hinze M., Pinnau R., Ulbrich M., et al., Optimization with PDE constraints, volume 23 of Mathematical Modelling:Theory and Applications [J].2009.
    [115]Herrmann F. J., Friedlander M. P. and Yilmaz O., Fighting the curse of dimensionality:compressive sensing in exploration seismology [J]. Signal Pro-cessing Magazine, IEEE,2012,29(3):88-100.
    [116]Hintermuller M, Ito K, Kunisch K. The primal-dual active set strategy as a semismooth Newton method[J]. SIAM Journal on Optimization,2002,13(3): 865-888.
    [117]Hintermuller M. and Kunisch K., Path-following methods for a class of con-strained minimization problems in function space[J]. SIAM Journal on Opti-mization,2006,17(1):159-187.
    [118]Hintermuller M. and Wu T., A superlinearly convergent R-regularized New-ton scheme for variational models with concave sparsity-promoting priors[J]. Computational Optimization and Applications,2014,57(1):1-25.
    [119]Hofmann B., Kaltenbacher B., Poeschl C., et al., A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators[J]. Inverse Problems,2007,23(3):987.
    [120]Huang J., Horowitz J. L. and Ma S. G., Asymptotic properties of bridge estimators in sparse high-dimensional regression models[J]. The Annals of Statistics,2008,36(2):587-613.
    [121]Huang J. Ma S. G. and Zhang C. H., Adaptive Lasso for sparse high-dimensional regression models[J]. Statistica Sinica,2008,18(4):1603.
    [122]Huang S. S. and Zhu J. B., Recovery of sparse signals using OMP and its variants:convergence analysis based on RIP[J]. Inverse Problems,2011,27(3): 035003.
    [123]Ito K. and Jin B. T., Inverse Problems:Tikhonov Theory and Algorithms. World Scientific, Singapore,2014.
    [124]Ito K. and Jin B. T. and Takeuchi T., A regularization parameter for non-smooth Tikhonov regularization[J]. SIAM Journal on Scientific Computing, 2011,33(3):1415-1438.
    [125]Ito K., Jin B. T. and Zou J., A two-stage method for inverse medium scat-tering[J]. Journal of Computational Physics,2013,237:211-223.
    [126]Ito K. and Kunisch K., A variational approach to sparsity optimization based on Lagrange multiplier theory [J]. Inverse Problems,2014,30(1):015001.
    [127]Ito K and Kunisch K., Lagrange multiplier approach to variational problems and applications[M]. SIAM,2008.
    [128]Inglese G., An inverse problem in corrosion detection[J]. Inverse problems, 1997,13(4):977.
    [129]Isakov V., Inverse problems for partial differential equations[M]. Springer, 2006.
    [130]James G. M., Radchenko P. and Lv J. C, DASSO: connections between the Dantzig selector and lasso[J]. Journal of the Royal Statistical Society:Series B (Statistical Methodology),2009,71(1):127-142.
    [131]Jiao Y. L, Jin, B. T and Lu X. L., A primal dual active set algorithm for a class of nonconvex sparsity optimization, preprint arXiv:1310.1147,2013.
    [132]Jiao Y. L, Jin, B. T and Lu X. L., A primal dual active set with continu-ation algorithm for the e0-regularized optimization problem, preprint, arXiv: 1403.0515,2014.
    [133]Jin B. T., Lorenz D. A. and Schiffler S.. Elastic-net regularization: error estimates and active set methods[J]. Inverse Problems,2009,25(11):115022.
    [134]Jin B. T. and Maass P., Sparsity regularization for parameter identification problems[J]. Inverse Problems,2012,28(12):123001.
    [135]Jin B. T. and Zou J., Numerical estimation of piecewise constant Robin coefficient [J]. SIAM Journal on Control and Optimization,2009,48(3):1977-2002.
    [136]Jin B. T. and Zou J., Numerical estimation of the Robin coefficient in a sta-tionary diffusion equation[J]. IMA journal of numerical analysis,2010,30(3): 677-701.
    [137]Johnson W. B. and Lindenstrauss J., Extensions of Lipschitz mappings into a Hilbert space[J]. Contemporary mathematics,1984,26(1):189-206.
    [138]Jones L. K., A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training[J]. The Annals of Statistics,1992:608-613.
    [139]Juditsky A. and Nemirovski A., On verifiable sufficient conditions for sparse signal recovery via minimization[J]. Mathematical programming,2011, 127(1):57-88.
    [140]Kaipio J. and Somersalo E., Statistical and computational inverse problem-s[M]. New York:Springer,2005.
    [141]Keener J. and Sneyd J., Mathematical Physiology:I:Cellular Physiology[M]. Springer,2010.
    [142]Kim S. J., Koh K., Lustig M., et al., An interior-point method for large-scale e1-regularized least squares [J]. Selected Topics in Signal Processing, IEEE Journal of,2007,1(4):606-617.
    [143]Kim Y. D., Choi H. and Oh H. S., Smoothly clipped absolute deviation on high dimensions [J]. Journal of the American Statistical Association,2008, 103(484):1665-1673.
    [144]Kirsch A., An introduction to the mathematical theory of inverse problem-s[M]. Springer,1996.
    [145]Knight K. and Fu W. J., Asymptotics for lasso-type estimators[J]. Annals of statistics,2000:1356-1378.
    [146]Knoll F., Bredies K. and Pock T., et al., Second order total generalized variation (TGV) for MRI[J]. Magnetic resonance in medicine,2011,65(2): 480-491.
    [147]Konishi S, Kitagawa G. Information criteria and statistical modeling[M]. Springer,2008.
    [148]Lai M. J. and Wang J. Q., An Unconstrained eq Minimization with 0< q(?) 1 for Sparse Solution of Underdetermined Linear Systems[J]. SIAM Journal on Optimization,2011,21(1):82-101.
    [149]Lai M. J., Xu Y. Y and Yin W. T., Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed eq Minimization[J]. SIAM Journal on Numerical Analysis,2013,51(2):927-957.
    [150]Lange K. Hunter D. R. and Yang I., Optimization transfer using surrogate objective functions[J]. Journal of computational and graphical statistics,2000, 9(1):1-20.
    [151]Li M., Fan Z., Ji H., et al, Wavelet frame based algorithm for 3D recon-struction in electron microscopy [J]. SIAM Journal on Scientific Computing, 2014,36(1):B45-B69.
    [152]Li Q., Shen L. X. and Xu Y. S., et al., Multi-Step Proximity Algorithms for Solving a Class of Convex Optimization Problems. UCLA CAM report 2012.
    [153]Li Y. Y. and Osher S., Coordinate descent optimization for e1 minimiza-tion with application to compressed sensing; a greedy algorithm[J]. Inverse Problem and Imaging,2009,3(3):487-503.
    [154]Liang J., Li J., Shen Z. W., et al., Wavelet frame based color image demo-saicing[J]. Inverse Problems and Imaging,2013,7(3).
    [155]Lin F. R. and Fang W. F., A linear integral equation approach to the Robin inverse problem[J]. Inverse problems,2005,21(5):1757.
    [156]Lounici K., Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators [J]. Electronic Journal of statistics,2008,2:90-102.
    [157]Lu Z. S., Iterative reweighted minimization methods for ep regularized uncon-strained nonlinear programming[J]. Mathematical Programming,2012:1-31.
    [158]Lu Z. S., Pong T. K and Zhang Y., An alternating direction method for find-ing Dantzig selectors[J]. Computational Statistics and Data Analysis,2012, 56(12):4037-4046.
    [159]Lu Z. S. and Zhang Y., Sparse approximation via penalty decomposition methods. SIAM J. Optim.,2013,23(4):2448-2478.
    [160]Lustig M., Donoho D., and Pauly J. M., Sparse MRI: The application of compressed sensing for rapid MR imaging[J]. Magnetic resonance in medicine, 2007,58(6):1182-1195.
    [161]Mallat S., A wavelet tour of signal processing:the sparse way[M]. Academic press,2008.
    [162]Mallat S. and Zhang Z., Matching pursuits with time-frequency dictionar-ies[J]. Signal Processing, IEEE Transactions on,1993,41(12):3397-3415.
    [163]Mazumder R., Friedman J. H. and Hastie T., SparseNet:Coordinate descent with nonconvex penalties [J]. Journal of the American Statistical Association, 2011,106(495).
    [164]Meinshausen N. and Biihlmann P. High-dimensional graphs and variable selection with the lasso[J]. The Annals of Statistics,2006:1436-1462.
    [165]Miller A., Subset selection in regression[M]. CRC Press,2002.
    [166]Mo Q. and Shen Y., A remark on the restricted isometry property in orthog-onal matching pursuit [J]. Information Theory, IEEE Transactions on,2012, 58(6):3654-3656.
    [167]Murphy M., Alley M., Demmel J., et al., Fast-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime[J]. Medical Imaging, IEEE Transactions on,2012,31(6):1250-1262.
    [168]Natarajan B. K., Sparse approximate solutions to linear systems[J]. SIAM journal on computing,1995,24(2):227-234.
    [169]Nesterov Y. A method of solving a convex programming problem with con-vergence rate O (1/k2)[C]//Soviet Mathematics Doklady.1983,27(2):372-376.
    [170]Nesterov Y., Smooth minimization of non-smooth functions[J]. Mathemati-cal programming,2005,103(1):127-152.
    [171]Needell D. and Tropp J. A., CoSaMP:Iterative signal recovery from in-complete and inaccurate samples [J]. Applied and Computational Harmonic Analysis,2009,26(3):301-321.
    [172]Needell D. and Vershynin R., Uniform uncertainty principle and signal re-covery via regularized orthogonal matching pursuit [J]. Foundations of com-putational mathematics,2009,9(3):317-334.
    [173]Nikolova M., Description of the Minimizers of Least Squares Regularized with -norm. Uniqueness of the Global Minimizer[J]. SIAM Journal on Imag-ing Sciences,2013,6(2):904-937.
    [174]Nikolova M., Relationship between the optimal solutions of least squares regularized with eo-norm and constrained by k-sparsity. preprint,2014.
    [175]Nikolova M., Ng M. K. and Tam C. P., Fast nonconvex nonsmooth minimiza-tion methods for image restoration and reconstruction[J]. Image Processing, IEEE Transactions on,2010,19(12):3073-3088.
    [176]Nocedal J. and Wright S. J., Conjugate gradient methods[M]. Springer New York,2006.
    [177]Osborne M. R., Presnell B. and Turlach B. A., A new approach to variable selection in least squares problems [J]. IMA journal of numerical analysis,2000, 20(3):389-403.
    [178]Pinnau R. and Ulbrich M., Optimization with PDE constraints[M]. Springer, 2008.
    [179]Parikh N. and Boyd S., Proximal algorithms [J]. Found. Trends Optim., 1(3):123-231,2014.
    [180]Papafitsoros K. and Schonlieb C. B., A combined first and second order variational approach for image reconstruction[J]. Journal of Mathematical Imaging and Vision,2014,48(2):308-338.
    [181]Pati Y. C, Rezaiifar R. and Krishnaprasad P. S., Orthogonal matching pur-suit:Recursive function approximation with applications to wavelet decom-position[C]//Signals, Systems and Computers,1993.1993 Conference Record of The Twenty-Seventh Asilomar Conference on. IEEE,1993:40-44.
    [182] Robini M. C. and Magnin I. E., Optimization by stochastic continuation[J]. SIAM Journal on Imaging Sciences, 2010, 3(4): 1096-1121.
    [183] Robini M. C. and Reissman P. J., From simulated annealing to stochastic continuation: a new trend in combinatorial optimization[J]. Journal of Global Optimization, 2013, 56(1): 185-215.
    [184] Rockafellar R. T., Convex Analysis[M]. Princeton University Press, 1997.
    [185] Rudin L. I., Osher S. and Fatemi E., Nonlinear total variation based noise removal algorithms[J]. Physica D, 1992, 60(1-4): 259-268.
    [186] Rudelson M. and Vershynin R., Geometric approach to error-correcting codes and reconstruction of signals[J]. International Mathematics Research Notices, 2005, 2005(64): 4019-4041.
    [187] Sanders J. N., Saikin S. K., Mostame S., et al., Compressed sensing for multidimensional spectroscopy experiments[J]. The Journal of Physical Chemistry Letters, 2012, 3(18): 2697-2702.
    [188] Scherzer O., Handbook of Mathematical Methods in Imaging[M]. Springer, 2010.
    [189] Scherzer O., Grasmair M., Grossauer H., et al., Variational Methods in Imaging[M]. Springer, 2008.
    [190] Schuster T., Kaltenbacher B., Hofmann B., et al., Regularization Methods in Banach Spaces[M]. Walter de Gruyter, 2012.
    [191] Saul L. K. and Roweis S. T., Think globally, fit locally: unsupervised learning of low dimensional manifolds[J]. The Journal of Machine Learning Research, 2003, 4: 119-155.
    [192] She Y. Y., Thresholding-based iterative selection procedures for model selection and shrinkage[J]. Electronic Journal of Statistics, 2009, 3: 384-415.
    [193] Shen Z. W., Toh K. C. and Yun S., An accelerated proximal gradient algorithm for frame-based image restoration via the balanced approach[J]. SIAM Journal on Imaging Sciences, 2011, 4(2): 573-596.
    [194] Shrot Y. and Frydman L., Compressed sensing and the reconstruction of ultrafast 2D NMR data: Principles and biomolecular applications[J]. Journal of Magnetic Resonance, 2011, 209(2): 352-358.
    [195] Stamey T. A., Kabalin J. N., McNeal J. E., et al., Prostate specific antigen in the diagnosis and treatment of adenocarcinoma of the prostate. II. Radical prostatectomy treated patients[J]. The Journal of Urology, 1989, 141(5): 1076-1083.
    [196] Sun Q. Y., Recovery of sparsest signals via ℓq-minimization[J]. Applied and Computational Harmonic Analysis, 2012, 32(3): 329-341.
    [197] Sun D. F. and Qi L. Q., Solving variational inequality problems via smoothing-nonsmooth reformulations[J]. Journal of Computational and Applied Mathematics, 2001, 129(1): 37-62.
    [198] Sun Z. Y., Jiao Y. L., Jin B. T. and Lu X. L., Numerical identification of a sparse Robin coefficient[J]. Advances in Computational Mathematics, 2014: 1-18.
    [199] Temlyakov V. N., Nonlinear methods of approximation[J]. Foundations of Computational Mathematics, 2003, 3(1): 33-107.
    [200] Tibshirani R., Regression shrinkage and selection via the lasso[J]. Journal of the Royal Statistical Society. Series B (Methodological), 1996: 267-288.
    [201] Tibshirani R. J. and Taylor J., Degrees of freedom in lasso problems[J]. The Annals of Statistics, 2012, 40(2): 1198-1232.
    [202] Tikhonov A. N. and Arsenin V. Y., Methods for Solving Ill-posed Problems[M]. Washington, DC: Winston, 1977.
    [203] Tropp J. A., Greed is good: Algorithmic results for sparse approximation[J]. Information Theory, IEEE Transactions on, 2004, 50(10): 2231-2242.
    [204] Tropp J. A., Just relax: Convex programming methods for identifying sparse signals in noise[J]. Information Theory, IEEE Transactions on, 2006, 52(3): 1030-1051.
    [205] Tropp J. A. and Gilbert A. C., Signal recovery from random measurements via orthogonal matching pursuit[J]. Information Theory, IEEE Transactions on, 2007, 53(12): 4655-4666.
    [206] Tropp J. A. and Wright S. J., Computational methods for sparse solution of linear inverse problems[J]. Proceedings of the IEEE, 2010, 98(6): 948-958.
    [207] Tseng P., Convergence of a block coordinate descent method for nondifferentiable minimization[J]. Journal of Optimization Theory and Applications, 2001, 109(3): 475-494.
    [208] Tseng P. and Yun S., A coordinate gradient descent method for nonsmooth separable minimization[J]. Mathematical Programming, 2009, 117(1-2): 387-423.
    [209] Van Den Berg E. and Friedlander M. P., Probing the Pareto frontier for basis pursuit solutions[J]. SIAM Journal on Scientific Computing, 2008, 31(2): 890-912.
    [210] Wang L., Kim Y. D. and Li R. Z., Calibrating nonconvex penalized regression in ultra-high dimension[J]. The Annals of Statistics, 2013, 41(5): 2505-2536.
    [211] Wang W., Lu S., Mao H., et al., Multi-parameter Tikhonov regularization with the ℓ0 sparsity constraint[J]. Inverse Problems, 2013, 29(6): 065018.
    [212] Wen Z. W., Yin W. T., Goldfarb D., et al., A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation[J]. SIAM Journal on Scientific Computing, 2010, 32(4): 1832-1857.
    [213] West M., Blanchette C., Dressman H., et al., Predicting the clinical status of human breast cancer by using gene expression profiles[J]. Proceedings of the National Academy of Sciences, 2001, 98(20): 11462-11467.
    [214] Wiaux Y., Jacques L., Puy G., et al., Compressed sensing imaging techniques for radio interferometry[J]. Monthly Notices of the Royal Astronomical Society, 2009, 395(3): 1733-1742.
    [215] Wright S. J., Nowak R. D. and Figueiredo M. A. T., Sparse reconstruction by separable approximation[J]. Signal Processing, IEEE Transactions on, 2009, 57(7): 2479-2493.
    [216] Wu C. L. and Tai X. C., Augmented Lagrangian Method, Dual Methods, and Split Bregman Iteration for ROF, Vectorial TV, and High Order Models[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 300-339.
    [217] Wu T. T. and Lange K., Coordinate descent algorithms for lasso penalized regression[J]. The Annals of Applied Statistics, 2008: 224-244.
    [218] Xiao L. and Zhang T., A proximal-gradient homotopy method for the sparse least-squares problem[J]. SIAM Journal on Optimization, 2013, 23(2): 1062-1091.
    [219] Xu C. and Chen J. H., The Sparse MLE for Ultra-High-Dimensional Feature Screening[J]. Journal of the American Statistical Association, in press.
    [220] Xue L. Z. and Zou H., Sure independence screening and compressed random sensing[J]. Biometrika, 2011, 98(2): 371-380.
    [221] Yang A. Y., Sastry S. S., Ganesh A., et al., Fast ℓ1-minimization algorithms and an application in robust face recognition: A review[C]//Image Processing (ICIP), 2010 17th IEEE International Conference on. IEEE, 2010: 1849-1852.
    [222] Yang J. F. and Zhang Y., Alternating direction algorithms for ℓ1-problems in compressive sensing[J]. SIAM Journal on Scientific Computing, 2011, 33(1): 250-278.
    [223] Yin W. T., Osher S., Goldfarb D., et al., Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing[J]. SIAM Journal on Imaging Sciences, 2008, 1(1): 143-168.
    [224] Zhang C. H., Nearly unbiased variable selection under minimax concave penalty[J]. The Annals of Statistics, 2010, 38(2): 894-942.
    [225] Zhang C. H. and Huang J., The sparsity and bias of the Lasso selection in high-dimensional linear regression[J]. The Annals of Statistics, 2008: 1567-1594.
    [226] Zhang C. H. and Zhang T., A general theory of concave regularization for high-dimensional sparse estimation problems[J]. Statistical Science, 2012, 27(4): 576-593.
    [227] Zhang T., Adaptive forward-backward greedy algorithm for learning sparse representations[J]. Information Theory, IEEE Transactions on, 2011, 57(7): 4689-4708.
    [228] Zhang T., Analysis of multi-stage convex relaxation for sparse regularization[J]. The Journal of Machine Learning Research, 2010, 11: 1081-1107.
    [229] Zhang T., On the Consistency of Feature Selection using Greedy Least Squares Regression[J]. Journal of Machine Learning Research, 2009, 10(3).
    [230] Zhang T., Some sharp performance bounds for least squares regression with L1 regularization[J]. The Annals of Statistics, 2009, 37(5A): 2109-2144.
    [231] Zhang T., Sparse recovery with orthogonal matching pursuit under RIP[J]. Information Theory, IEEE Transactions on, 2011, 57(9): 6215-6221.
    [232] Zhang X. Q., Burger M. and Osher S., A unified primal-dual algorithm framework based on Bregman iteration[J]. Journal of Scientific Computing, 2011, 46(1): 20-46.
    [233] Zhao Y. B. and Li D., Reweighted ℓ1-Minimization for Sparse Solutions to Underdetermined Linear Systems[J]. SIAM Journal on Optimization, 2012, 22(3): 1065-1088.
    [234] Zhao P. and Yu B., On model selection consistency of Lasso[J]. The Journal of Machine Learning Research, 2006, 7: 2541-2563.
    [235] Zou H., Hastie T. and Tibshirani R., On the "degrees of freedom" of the lasso[J]. The Annals of Statistics, 2007, 35(5): 2173-2192.
    [236] Zou H. and Li R. Z., One-step sparse estimates in nonconcave penalized likelihood models[J]. The Annals of Statistics, 2008, 36(4): 1509.