Research on Sparse Reconstruction Algorithms for Compressive Sensing
Abstract
Compressive sensing comprises compressive sampling and sparse reconstruction. Compressive sampling acquires dimension-reduced observations via random projections; it is a new sampling paradigm suited to real-time acquisition and processing of massive data, such as magnetic resonance imaging (MRI), ultra-wideband signal processing, and astronomical image restoration. Sparse reconstruction exploits the sparsity of a signal to recover the high-dimensional sparse signal from low-dimensional observations. The design and analysis of sparse reconstruction algorithms has become a focal point of compressive sensing research. Existing large-scale sparse reconstruction algorithms fall into three main classes: matching pursuit, iterative hard thresholding, and L1-norm optimization. This thesis extends, analyzes, and implements algorithms in all three classes. Its main contributions are:
     1) The compressive sensing matching pursuit (CSMP) algorithm is proposed. A sufficient condition for it to recover an s-sparse signal is that the restricted isometry constant of order 3s not exceed 0.23, which relaxes the convergence condition of matching pursuit and accelerates convergence. For large-scale sparse recovery, the algorithm's matrix-vector multiplication operator can select subsets of both the projection measurements and the sparse basis, so a discrete cosine transform measurement operator and a wavelet sparse-basis operator can be used, avoiding explicit storage of large matrices.
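For concreteness, the greedy-pursuit family to which CSMP belongs can be sketched as follows. This is a minimal CoSaMP-style iteration in NumPy, not the thesis's exact CSMP; the function name `greedy_pursuit` and all parameter choices are illustrative.

```python
import numpy as np

def greedy_pursuit(A, y, s, iters=20):
    """CoSaMP-style greedy recovery of an s-sparse x from y = A x."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y.astype(float).copy()
    for _ in range(iters):
        proxy = A.T @ r                                   # correlate residual with columns
        omega = np.argsort(np.abs(proxy))[-2 * s:]        # 2s candidate atoms
        T = np.union1d(omega, np.flatnonzero(x))          # merge with current support
        sol = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # least squares on the support
        b = np.zeros(n)
        b[T] = sol
        keep = np.argsort(np.abs(b))[-s:]                 # prune to the s largest entries
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                     # update residual
        if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(y):
            break
    return x
```

Once the correct support has been identified, the least-squares step recovers the signal exactly, which is why greedy pursuits terminate quickly when the restricted isometry condition holds.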
     2) The sparsity-constrained iterative hard thresholding with Barzilai-Borwein step size (SCIHTBB) algorithm is proposed, in monotone and non-monotone versions. Convergence is analyzed for each under the asymmetric restricted isometry property. For problems of unknown sparsity, adaptive sparsity detection is implemented via the secant method. SCIHTBB extends to group sparsity, non-negative sparsity, and matrix completion.
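A minimal sketch of the underlying iteration, assuming the standard BB1 step and omitting the thesis's monotone safeguard and adaptive sparsity detection (`iht_bb` and its defaults are illustrative):

```python
import numpy as np

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z, zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

def iht_bb(A, y, s, iters=500):
    """Iterative hard thresholding with a Barzilai-Borwein (BB1) step size."""
    n = A.shape[1]
    x = np.zeros(n)
    g = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
    step = 1.0
    for _ in range(iters):
        x_new = hard_threshold(x - step * g, s)
        g_new = A.T @ (A @ x_new - y)
        dx = x_new - x
        dg = g_new - g
        denom = dx @ dg             # equals ||A dx||^2 >= 0 for this quadratic
        step = (dx @ dx) / denom if denom > 1e-12 else 1.0  # BB1 step
        x, g = x_new, g_new
    return x
```

The BB step adapts to the local curvature along the most recent direction, which is what makes it faster than a fixed conservative step size.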
     3) A sparse reconstruction algorithm, FPSP3, is designed and analyzed. It combines three ingredients: fixed-point iteration, the SPG2 non-monotone line search, and warm starting. The fixed-point iteration for the optimal solution is derived from forward-backward operator splitting and decomposed into a forward gradient step and a backward proximal step. Introducing the proximity operator shows that the backward proximal step reduces to soft-thresholding shrinkage. A convergent step-size condition is obtained by proving that the inverse of the gradient operator is strongly monotone. The SPG2 non-monotone line search markedly accelerates convergence of the fixed-point iteration. In sparse reconstruction experiments the algorithm is compared with the L1-norm methods GPSR, SpaRSA, and SPGL1; the results show that FPSP3 is superior in both speed and reconstruction accuracy.
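The forward-backward fixed-point iteration at the core of this approach reduces to iterative soft thresholding. A minimal sketch with a constant step size, omitting the SPG2 line search and warm starting (`forward_backward_l1` is an illustrative name, not the thesis's FPSP3):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximity operator of t*||.||_1 (soft-thresholding shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_l1(A, y, lam, iters=1000):
    """Forward-backward fixed-point iteration for
       min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    tau = 1.0 / L                   # any fixed step in (0, 2/L) converges
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                        # forward gradient step
        x = soft_threshold(x - tau * grad, tau * lam)   # backward proximal step
    return x
```

A solution is exactly a fixed point of the composite map, i.e. x = prox(x - tau * grad(x)), which is the property the test below checks.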
     4) The dual alternating direction method of multipliers (DADMM) algorithm is proposed. It is obtained by applying the alternating direction method of multipliers (ADMM) to the dual of the primal sparse reconstruction problem, and it forms a flexible sparse-penalty framework that handles various lasso-type penalties, including the lasso, group lasso, sparse group lasso, and overlapping group lasso. Convergence is proved via Douglas-Rachford operator splitting, and experimental comparisons confirm DADMM's fast computation.
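As a hedged illustration of the alternating-direction structure only, here is the textbook primal ADMM for the lasso in scaled form; the thesis's DADMM instead applies ADMM to the dual problem, which this sketch does not reproduce, and `rho` and the update order are the standard textbook choices.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_lasso(A, y, lam, rho=1.0, iters=500):
    """Primal ADMM for  min 0.5*||Ax - y||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    AtA = A.T @ A
    Aty = A.T @ y
    Lfac = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse every iteration
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                   # scaled dual variable
    for _ in range(iters):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(Lfac.T, np.linalg.solve(Lfac, rhs))  # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)                     # z-update (proximal step)
        u = u + x - z                                            # dual update
    return z
```

The one-time Cholesky factorization is what makes each iteration cheap; for group or overlapping-group penalties only the z-update (the proximal step) changes, which is the sense in which the ADMM template is a flexible penalty framework.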
Compressive sensing consists of compressive sampling and sparse reconstruction. Compressive sampling acquires dimension-reduced observation data by random projections and is a novel compressed sampling technology applicable to real-time acquisition and recovery of massive data, e.g., magnetic resonance imaging, ultra-wideband signal processing, and astronomical image recovery. Sparse reconstruction exploits a signal's sparsity to recover the original high-dimensional signal from low-dimensional observations. Sparse recovery algorithms are an active research area of compressive sensing. Large-scale sparse recovery algorithms mainly fall into three classes: matching pursuit, iterative hard thresholding, and L1-norm optimization methods. This thesis extends the study, analysis, and implementation of these three types of methods. Its major contributions are as follows:
     1) The compressive sensing matching pursuit (CSMP) algorithm is proposed. It is guaranteed to recover an s-sparse signal provided that the restricted isometry constant (RIC) of order 3s is less than 0.23, which relaxes the RIC condition for recovering an s-sparse signal by matching pursuit and improves the convergence speed. To adapt to large-scale sparse recovery, a matrix-vector multiplication operator is provided that selects subsets of both the random projection measurements and the sparse bases, so the discrete cosine transform and the wavelet transform can be used while avoiding explicit storage of large matrices.
     2) The sparsity-constrained iterative hard thresholding with Barzilai-Borwein step size (SCIHTBB) algorithm is proposed, in monotone and non-monotone versions. Convergence analyses are developed for each based on the asymmetric restricted isometry property. To tackle the unknown-sparsity problem, an adaptive sparsity framework using the secant method is presented. Extensions handle group sparsity, non-negative sparsity, and matrix completion.
     3) The FPSP3 algorithm is designed and analyzed. It has three components: fixed-point iteration, the SPG2 non-monotone line search, and a warm-start technique. The fixed-point iteration is derived from forward-backward operator splitting and is decomposed into a forward gradient step and a backward proximal step. Introducing the proximity operator shows that the backward proximal step reduces to soft-thresholding shrinkage. A convergent step-size condition is obtained by proving the strong monotonicity of the inverse of the gradient operator. The SPG2 non-monotone line search significantly speeds up the convergence of the fixed-point iteration. In sparse recovery experiments, comparison with the L1-norm methods GPSR, SpaRSA, and SPGL1 shows that FPSP3 has advantages in both speed and recovery accuracy.
     4) The dual alternating direction method of multipliers (DADMM) algorithm is proposed, obtained by applying ADMM to the dual form of the primal sparse recovery problem. DADMM is a flexible sparse-penalty framework able to handle various lasso-style penalties, including the lasso, group lasso, sparse group lasso, and overlapping group lasso. Its convergence analysis is developed via Douglas-Rachford operator splitting, and experimental comparisons verify DADMM's high computational efficiency.
