Research on Krylov Subspace Algorithms for Large-Scale Matrix Eigenvalue Problems and Linear Systems
Abstract
The numerical solution of large-scale matrix eigenvalue problems and large sparse linear systems lies at the heart of many computations in science, engineering, industrial simulation, finance, and other fields. Because their solution accounts for a large share of the total run time, solving these large-scale matrix computation problems efficiently can greatly improve the efficiency of the overall simulation. Over the past twenty years and more, algorithms for these problems have remained a focus of computational mathematics, with very active research both in China and abroad. The solution algorithms, in particular Krylov subspace methods, have made considerable progress; nevertheless, many problems remain open. In view of the importance and long-term significance of these problems, this thesis studies numerical algorithms for such large-scale matrix computations. On the one hand, it investigates numerical algorithms for large-scale matrix eigenvalue problems, together with their convergence and stability; on the other hand, it discusses acceleration techniques, preconditioning, and the associated convergence analysis for large sparse linear systems. The thesis consists of eight chapters.
     Chapter 1 reviews the background, history, and current state of research on large-scale matrix eigenvalue problems and sparse linear systems, and outlines the basic methods for solving them.
     Chapters 2 and 3 deal with the numerical computation of a few eigenpairs of large nonsymmetric matrices. Chapter 2 proposes an Arnoldi-type algorithm for computing a few interior eigenpairs of large matrices, analyzes its convergence and its relationship to the harmonic Arnoldi algorithm, and concludes with numerical examples. The comparisons show that when the dimension of the approximation subspace is small, the proposed algorithm converges faster; as the subspace dimension grows, both algorithms converge markedly faster, and for large subspace dimensions their convergence behavior is nearly the same.
     Chapter 3 discusses a class of block Arnoldi-type algorithms with vector deflation for large-scale eigenvalue problems. At each restart, the approximate eigenvectors obtained in the previous cycle are used to construct the initial block vector, and an 'inexact deflation' is applied while building the orthonormal basis. On the one hand, the new search subspace retains the approximate eigenvectors that have already converged; on the other hand, through the inexact deflation the number of columns of the initial block shrinks as approximate eigenpairs converge, so that the new approximation subspace becomes more favorable for the eigenpairs that have not yet converged. The method therefore overcomes the inability of single-vector Krylov subspaces to compute multiple eigenvalues, and it is more stable and more efficient than conventional block Krylov subspace methods. An analysis of the approximation subspace shows that the method asymptotically acquires the advantages of the Arnoldi algorithm restarted with added eigenvectors: as approximate eigenpairs converge, the subspace generated at each restart becomes increasingly favorable for the unconverged eigenpairs. Numerical experiments show that the proposed algorithm avoids the irregular convergence often exhibited by block Krylov subspace methods and converges very smoothly.
     Chapters 4 through 7 address the solution of large sparse linear systems. Chapter 4 first observes experimentally that, for the GMRES (Generalized Minimum RESidual) algorithm restarted with added approximate eigenvectors (GMRES-E), the angle between residual vectors produced every other restart is often very small. On the basis of GMRES-E, a doubly augmented restarted GMRES algorithm (LGMRES-E) is therefore proposed, which also appends error-correction vectors. Numerical experiments show that the new restarting scheme effectively corrects the frequently occurring small skip angles between residuals, so that the approximation subspaces generated at successive restarts remain suitably orthogonal, partially compensating for the loss of overall subspace dimension caused by restarting. At the same time, because approximate eigenvectors are still appended, the new method also effectively deflates the small eigenvalues that hamper convergence. Numerical experiments show that this doubly augmented approach works very well for linear systems whose coefficient matrix has a few relatively small eigenvalues, and that the new method accelerates the convergence of GMRES-E.
     Chapter 5 proposes retaining part of the error-correction information at each restart of GCRO-DR and adding it to the new search subspace. This strategy keeps successive approximation subspaces suitably linearly independent, and thereby effectively shortens the phases of slow convergence that GCRO-DR often exhibits. When only the most recently generated correction vector is retained, the analysis shows that the modification is very natural and requires only minimal changes. When more than one correction vector is to be added, the chapter focuses on an inexact variant which guarantees that, for the same subspace dimension, the cost of the modified algorithm is close to that of the original, while the modified method shows a clear acceleration. We compare the convergence of the two methods for solving a single linear system and a sequence of linear systems, and discuss how the number of added correction vectors affects convergence. Numerical experiments show that the new method effectively accelerates GCRO-DR; for sequences of linear systems arising from the simulation of fatigue and fracture in mechanical components, the experiments show an improvement in convergence of nearly 10%.
     Chapter 6 discusses tangential frequency filtering decompositions and composite preconditioning. We first observe that the conventional right filtering decomposition can equally be carried out so as to satisfy a left filtering condition, and on this basis we propose a two-sided tangential frequency filtering preconditioner. The vector of all ones is used as the filtering vector, which has several advantages: it avoids the preprocessing step required by similar methods to compute a filtering vector, and, for the filtering preconditioner built with ones as the left filtering vector, a suitable initial guess guarantees that the sum of the residual vector remains zero throughout the preconditioned Krylov iteration, i.e. the material balance error is zero. Combining the filtering preconditioner multiplicatively with the classical ILU(0) preconditioner, we discuss two composite preconditioning techniques. Spectral analysis shows that the composite preconditioner inherits the advantages of both factors and clusters the eigenvalues of the preconditioned matrix tightly around 1. Finally, for large sparse linear systems arising from the discretization of a class of nonlinear partial differential equations, we compare several composite preconditioners; the numerical results demonstrate the robustness and efficiency of the proposed approach.
     Chapter 7 discusses preconditioning for saddle point problems. Based on a tangential frequency filtering decomposition of the (1,1) block and the structure of constraint preconditioners, a filtering-decomposition-type preconditioner is proposed. On saddle point problems arising from the discretization of the Stokes problem and the Oseen equations, we examine its behavior as the mesh is refined and as the Reynolds number increases. The numerical results show that, for fixed inner-iteration tolerance and fixed Reynolds number, the number of iterations grows slowly as the mesh is refined; for a fixed mesh and inner-iteration tolerance, the iteration count is essentially insensitive to the Reynolds number, a property that is much better than that of ILU preconditioners of the same type. In the second half of the chapter we propose a class of preconditioners that generalize the Schilders factorization. Theoretical analysis shows that the spectrum of the preconditioned matrix coincides with that obtained when a Schilders-factorization-type constraint preconditioner is applied to a standard saddle point problem; the proposed preconditioner is therefore a natural extension of Schilders-type constraint preconditioners to generalized saddle point problems. Because of the nonzero (2,2) block of the generalized saddle point problem, applying the preconditioner involves Schur-complement-type computations. For a special class of problems we discuss an inexact variant that avoids the Schur complement computation; the corresponding numerical experiments are still in preparation.
     The final chapter proposes a modified tangential frequency filtering decomposition preconditioner (MTFFD) and analyzes it by Fourier analysis. Using the two-dimensional Poisson equation as a model problem, the optimal order of the modification and the optimal parameter are determined by Fourier analysis, which shows that the condition number of the MTFFD-preconditioned matrix is O(h^(-2/3)). This indicates that MTFFD should outperform both BILU and MBILU. The conclusions of the Fourier analysis are verified experimentally, and numerical examples show that the MTFFD preconditioner is more effective than TFFD.
Large-scale eigenvalue problems and large sparse linear systems of equations are at the heart of many large-scale computations in science, engineering, industrial simulation, finance, and other fields. Solving the underlying application problem usually requires considerable CPU time for solving the linear systems or computing a few meaningful eigenpairs, so the overall simulation can be accelerated considerably if the associated matrix computation problems are solved efficiently. During the past twenty years, these large-scale matrix computation problems have remained one of the most active areas of computational mathematics and have been investigated widely by experts all over the world. Many successful algorithms have been proposed; in particular, Krylov subspace methods have made considerable progress. However, many problems remain unsolved, and many algorithms still need to be analyzed and improved. In view of the importance and long-range significance of these problems, we develop and analyze Krylov subspace algorithms for solving large-scale eigenvalue problems and large sparse linear systems of equations. This thesis comprises eight chapters.
     Chapter 1 first gives a brief review of the background of eigenvalue problems and of projection-type solvers for them, and then briefly recalls the development of iterative methods and preconditioning techniques for solving large sparse linear systems of equations.
     Chapters 2 and 3 deal with large-scale eigenvalue problems. In Chapter 2, we propose an Arnoldi-type algorithm for computing a few interior eigenvalues of large matrices. We analyze its convergence and compare it with the harmonic Arnoldi algorithm. The results show that the algorithm is efficient, especially when the subspace dimension is small; as the subspace dimension is enlarged, both methods improve quickly and their convergence behavior becomes very similar.
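As a concrete point of reference for this comparison, the sketch below shows the standard machinery the chapter compares against, not the thesis's algorithm: a plain Arnoldi process followed by harmonic Ritz extraction near a target shift sigma, assuming NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import eig

def arnoldi(A, v0, m):
    """Standard Arnoldi process: A @ V[:, :m] = V @ H, with H of size (m+1) x m."""
    n = v0.size
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]               # assumes no breakdown
    return V, H

def harmonic_ritz(A, v0, m, sigma):
    """Harmonic Ritz values with respect to a target shift sigma, the classical
    way to approximate interior eigenvalues from a Krylov subspace."""
    V, H = arnoldi(A, v0, m)
    Hm = H[:m, :] - sigma * np.eye(m)
    beta = H[m, m - 1]
    # Petrov-Galerkin condition: (Hm^H Hm + |beta|^2 e_m e_m^H) y = (theta - sigma) Hm^H y
    lhs = Hm.conj().T @ Hm
    lhs[m - 1, m - 1] += abs(beta) ** 2
    vals, _ = eig(lhs, Hm.conj().T)
    return sigma + vals                             # harmonic Ritz values near sigma
```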
     In Chapter 3, we present a class of deflated block Arnoldi-type methods for computing partial eigenpairs of large matrices. The Rayleigh-Ritz procedure and its refined variants are analyzed. At the beginning of each restart, the computed approximate eigenvectors are used to form the next initial block vector. Through an inexact deflation procedure in the orthogonalization process, the size of the initial block is automatically reduced by one whenever an eigenpair converges. We show that the procedure not only retains the information of the approximate eigenvectors that have already converged, but also makes the newly generated approximation subspace more favorable for the unconverged eigenpairs. Theoretical analysis shows that the methods are robust and inherit the advantages of regular block Krylov subspace methods for multiple or closely clustered eigenvalues. Qualitative analysis shows that the methods gradually acquire the merits of restarted Arnoldi methods augmented with approximate eigenvectors. Numerical experiments show that the new methods mitigate the irregular convergence behavior of block Krylov subspace methods and exhibit rather smooth convergence.
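For readers unfamiliar with the block setting, the sketch below shows a plain block Arnoldi process in NumPy; it is illustrative only. The thesis's method differs in the two ways described above: the first block is formed from the approximate eigenvectors of the previous cycle, and inexact deflation shrinks the block width as eigenpairs converge.

```python
import numpy as np

def block_arnoldi(A, V0, m):
    """Plain block Arnoldi: orthonormal basis of span{V0, A V0, ..., A^(m-1) V0}.
    V0 is n x p with orthonormal columns; returns the basis blocks and the
    block Hessenberg coefficients H[(i, j)]."""
    blocks = [V0]
    H = {}
    for j in range(m):
        W = A @ blocks[j]
        for i in range(j + 1):                 # block modified Gram-Schmidt
            H[(i, j)] = blocks[i].conj().T @ W
            W = W - blocks[i] @ H[(i, j)]
        Q, R = np.linalg.qr(W)                 # orthonormalize the new block
        H[(j + 1, j)] = R
        blocks.append(Q)
        # The thesis's variant would, at this point, drop columns associated with
        # converged eigenpairs (inexact deflation), shrinking the block width.
    return blocks, H
```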
     Chapters 4 through 7 investigate iterative solvers for large sparse linear systems. In Chapter 4, motivated by the observation that the skip angles of GMRES-E are usually small, we propose accelerating the convergence of GMRES-E by also augmenting with error approximations, which results in a doubly augmented restarted GMRES method (LGMRES-E). We demonstrate that the combined method gains the advantages of both approaches: (i) it effectively deflates the small (in magnitude) eigenvalues that may hamper convergence; (ii) it partially recovers the linear independence of the basis across restarts. The numerical results show that the method efficiently accelerates GMRES-E and is especially suitable for linear systems with a few relatively small eigenvalues.
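As an illustration of the diagnostic motivating this chapter (the helper and its lag convention are ours, not the thesis's code), the "skip angle" between restart residuals can be monitored as follows; persistently small angles between r_i and r_{i-lag} indicate that successive restart spaces largely repeat each other.

```python
import numpy as np

def skip_angles(residuals, lag=2):
    """Angles (degrees) between restart residuals r_i and r_{i-lag}.
    Small values indicate that restarted search spaces are nearly parallel."""
    angles = []
    for i in range(lag, len(residuals)):
        r_old, r_new = residuals[i - lag], residuals[i]
        c = abs(np.vdot(r_old, r_new)) / (np.linalg.norm(r_old) * np.linalg.norm(r_new))
        angles.append(np.degrees(np.arccos(min(1.0, c))))
    return angles
```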
     In Chapter 5, an accelerating strategy for the GCRO-DR method is proposed. By recycling some of the error approximations generated during previous restart cycles, the new method mitigates the occurrence of small skip angles in GCRO-DR and thereby effectively shortens its phases of slow convergence. If just one error approximation is retained, we show that the acceleration can be implemented in a natural way with only minimal changes. If more than one error approximation is retained, we analyze an inexact variant which is cheap and effective. Applications of the new method to a single linear system and to sequences of linear systems are discussed, and its efficiency is illustrated on examples from practical applications. In particular, for sequences of linear systems arising from fatigue and fracture analysis of engineering components, the new method reduces the overall cost by nearly 10%.
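For orientation, the recycling framework this chapter builds on (GCRO-DR in the notation of Parks et al.; the recycled error approximations are the thesis's addition) keeps k recycle vectors U_k with C_k = A U_k and C_k^H C_k = I, and starts each cycle by projecting the recycled directions out of the problem:

\[
x_{1} = x_{0} + U_{k} C_{k}^{H} r_{0}, \qquad r_{1} = (I - C_{k} C_{k}^{H})\, r_{0},
\]

after which an Arnoldi cycle is run with the deflated operator (I - C_k C_k^H) A, and the recycle space is updated from harmonic Ritz vectors of the combined space.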
     In Chapter 6, tangential frequency filtering decompositions and composite preconditioning are discussed. We first show that the conventional right filtering decomposition can also be carried out so as to satisfy a left filtering property, and we then introduce a two-sided tangential frequency filtering decomposition. By combining the new filtering preconditioner with the classical ILU(0) preconditioner in a multiplicative way, composite preconditioners are produced; the analysis shows that they benefit from each of the two factors. For the filtering vector we argue that the vector of all ones is a reasonable choice: it is efficient and avoids the preprocessing needed by other methods to build the filtering vector, and with an appropriate initial guess the preconditioner built with ones as the left filtering vector keeps the sum of the residual vector equal to zero throughout the iteration, i.e. the material balance error is zero. Numerical tests show that the composite preconditioner is rather robust and efficient for block tridiagonal linear systems arising from the discretization of partial differential equations with strongly varying coefficients.
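In the terminology standard in the filtering-decomposition literature (Wagner; Achdou and Nataf), used here only to fix notation, a preconditioner M satisfies the right, respectively left, filtering condition for a filtering vector t when

\[
M t = A t, \qquad\text{respectively}\qquad t^{T} M = t^{T} A .
\]

The two-sided decomposition of this chapter imposes both conditions with t = (1, ..., 1)^T. With right preconditioning, the left condition gives 1^T A M^{-1} = 1^T, so every vector of the preconditioned Krylov space has the same component sum as the initial residual; choosing x_0 so that the initial residual sums to zero therefore keeps the residual sum (the material balance error) at zero.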
     In Chapter 7, we discuss preconditioning techniques for saddle point problems. By exploiting the structure of the (1,1) block of the saddle point problem together with the structure of constraint preconditioners, we propose a tangential frequency filtering decomposition type preconditioner for saddle point problems. We test it on saddle point problems discretized from the Stokes and Oseen equations and analyze its behavior as the grid is refined and as the Reynolds number increases. The numerical results show that, for a fixed inner-iteration tolerance and a fixed Reynolds number, the outer iteration count increases only slowly under grid refinement, and that, for a fixed inner-iteration tolerance and a fixed grid, the outer iteration count is only slightly influenced by the Reynolds number; the latter property makes the preconditioner more appealing than ILU preconditioners of the same type. In the second part of the chapter, we propose a class of Schilders-factorization-type constraint preconditioners for regularized saddle point problems. The spectral analysis shows that the preconditioned matrix inherits the properties of the matrix preconditioned by the Schilders-factorization-type constraint preconditioner for standard saddle point problems; the proposed factorization therefore naturally generalizes the Schilders factorization to generalized saddle point problems. However, due to the nonzero (2,2) block of the regularized saddle point problem, applying the preconditioner involves Schur complement type linear systems, which generally poses difficulties. For some special regularized saddle point problems that often arise in practical applications, we analyze an inexact variant of the preconditioner that avoids the Schur complement computation. Numerical tests for this variant are still in preparation.
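For orientation, in the block notation common in the saddle point literature (e.g. Benzi, Golub and Liesen), and not specific to this thesis, a regularized saddle point system and a constraint preconditioner have the forms

\[
\mathcal{A} = \begin{pmatrix} A & B^{T} \\ B & -C \end{pmatrix},
\qquad
\mathcal{P} = \begin{pmatrix} G & B^{T} \\ B & -C \end{pmatrix},
\]

where G approximates the (1,1) block A while the constraint blocks are kept exact, and C = 0 recovers the standard saddle point case. In this chapter G is taken from a tangential frequency filtering decomposition of A, and Schilders-type factorizations provide implicit factorizations of such a preconditioner.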
     In the last chapter, a modified tangential frequency filtering decomposition (MTFFD) preconditioner is proposed. The optimal order of the modification and the optimal relaxation parameter are determined by Fourier analysis. The Fourier results show that the condition number of the preconditioned matrix is O(h^(-2/3)), and that the spectral distribution of the preconditioned matrix can be predicted from the Fourier analysis. The performance of MTFFD is compared with the tangential frequency filtering decomposition (TFFD) preconditioner on a variety of large sparse matrices from discretizations of PDEs with discontinuous coefficients; the numerical results show that the MTFFD preconditioner is much more efficient than the TFFD preconditioner.
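For context on what such a Fourier analysis measures, the snippet below evaluates the known eigenvalues of the unpreconditioned 5-point Laplacian for the 2-D Poisson model problem, whose condition number grows like h^(-2); the thesis's result is that MTFFD reduces this growth to O(h^(-2/3)). The grid sizes are illustrative.

```python
import numpy as np

def laplacian_condition_number(n):
    """Condition number of the 2-D 5-point Laplacian on an n x n interior grid
    (h = 1/(n+1)), from its known eigenvalues 4 - 2cos(i*pi*h) - 2cos(j*pi*h)."""
    h = 1.0 / (n + 1)
    lam_min = 4 - 4 * np.cos(np.pi * h)          # smallest eigenvalue (i = j = 1)
    lam_max = 4 - 4 * np.cos(np.pi * n * h)      # largest eigenvalue (i = j = n)
    return lam_max / lam_min

for n in (16, 32, 64, 128):
    h = 1.0 / (n + 1)
    kappa = laplacian_condition_number(n)
    # Unpreconditioned: kappa ~ h^(-2); the thesis's Fourier analysis predicts
    # kappa ~ h^(-2/3) for the MTFFD-preconditioned operator.
    print(f"h = {h:.4f}  kappa = {kappa:10.1f}  kappa * h^2 = {kappa * h**2:.3f}")
```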
