Research on the Theory of BP Neural Networks and Their Application in Agricultural Mechanization
Abstract
The BP (back-propagation) neural network is one of the most mature and most widely applied artificial neural network models. Because of its simple structure, ease of use, good self-learning ability, and effectiveness in approximating nonlinear objective functions, it has been widely used in automatic control, pattern recognition, image recognition, signal processing, prediction, function fitting, system simulation, and other fields. The BP algorithm nevertheless has well-known shortcomings: the initial learning rate is difficult to choose, convergence is slow, the iterations tend to fluctuate and sometimes oscillate near the optimal solution, and extrapolation is poor for time-series prediction problems with a growth trend. A deep and systematic study of these problems therefore has both theoretical significance and practical application value.
It has been proved that a three-layer BP neural network with sufficiently many hidden nodes can approximate any complex nonlinear mapping, so the BP network has strong fitting ability. In practice, however, one often cares not only about how well the network fits the data but also about which input values drive the output to a maximum or a minimum. This is, in essence, an optimization problem posed on a trained BP neural network, and no published research on it has been found. Some papers are described as "optimization of BP neural networks", but they concern optimizing the weights, the learning rate, or the network structure; others merely evaluate the trained input-output relation at candidate inputs and pick the best output, which is simulation followed by selection rather than true optimization. Studying genuine optimization based on BP neural networks therefore has theoretical as well as practical value.
This thesis analyzes the causes of the shortcomings of the BP neural network, proposes an improved BP algorithm and a new method for time-series prediction, and, on that basis, investigates optimization based on BP neural networks. Finally, the theoretical results are applied to forecasting the total agricultural machinery power of Heilongjiang Province and to optimizing the process parameters of the inertial separation chamber of an air-suction stripper combine harvester.
The main results of the research are as follows:
(1) The causes of the problems of the BP algorithm, and of its poor extrapolation in time-series prediction, were identified.
(2) An improved BP neural network algorithm was developed.
An improved BP algorithm was proposed in which each weight has its own learning rate. The algorithm makes fuller use of the information in the negative gradient direction and lets each learning rate change as needed; it suppresses the fluctuation and oscillation that occur when the network approaches the optimal solution and noticeably improves the computational accuracy. Because each iteration inherits the learning rates of the previous iteration, the learning speed is also increased. In addition, the improved algorithm is essentially insensitive to the initial learning rate, which removes the difficulty of choosing it.
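The abstract does not reproduce the update rule itself. Purely as an illustration, the sketch below shows one common way to realize a per-weight learning rate in a three-layer BP network: a delta-bar-delta-style rule that grows a weight's rate while the sign of its gradient component keeps repeating and shrinks it when the sign flips, so each rate is carried over from one iteration to the next. The network layout, the adaptation rule, and all parameter values here are assumptions, not the algorithm proposed in the thesis.

```python
import numpy as np

def train_per_weight_lr(X, y, hidden=8, epochs=2000,
                        lr0=0.1, up=1.05, down=0.7, seed=0):
    """Three-layer BP network where every weight keeps its own learning rate.
    Hypothetical rule: grow a rate while the sign of its gradient component
    repeats, shrink it when the sign flips.  X: (n, d) inputs, y: (n, 1) targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    params = [W1, b1, W2, b2]
    rates  = [np.full_like(p, lr0) for p in params]   # one rate per weight
    prev_g = [np.zeros_like(p) for p in params]
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                      # hidden layer (sigmoid)
        err = (h @ W2 + b2) - y                       # output error
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)   # gradients, output layer
        dh  = (err @ W2.T) * h * (1.0 - h)            # back-propagated signal
        gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)    # gradients, hidden layer
        for p, r, g, pg in zip(params, rates, [gW1, gb1, gW2, gb2], prev_g):
            r *= np.where(np.sign(g) == np.sign(pg), up, down)  # adapt per weight
            p -= r * g                                # gradient-descent step
            pg[...] = g                               # remember this gradient
    return params
```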
(3) A new BP-neural-network method for time-series prediction was developed.
First, the shortcomings of prediction with BP neural networks were analyzed; based on the structural features of BP time-series prediction and on Z-transform theory, a new activation function was given, and the reason why using y=x as the activation function is equivalent to using y=a+bx was explained (intuitively, the constants a and b can be absorbed into the weights and thresholds of the adjacent layer). Next, the computational formulas and the model of the BP algorithm with y=x as the activation function were derived. Finally, worked examples and real data showed that, for time series with a growth trend, extrapolation is unsatisfactory when the unipolar sigmoid is used as the activation function but good when y=x is used. Moreover, the extrapolation with y=x is essentially unaffected by the data-preprocessing interval, whereas with the unipolar sigmoid it is strongly affected by it. Using y=x as the activation function thus overcomes the weaknesses of the unipolar-sigmoid BP network in prediction problems.
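As a reading aid only, the following minimal sketch (not the model from the thesis) shows the kind of setup this result concerns: a sliding window turns the series into input-output pairs, a small network with a sigmoid hidden layer and a y=x output unit is trained by ordinary BP, and the trained network is then rolled forward recursively. The toy series, the scaling interval, and every parameter value are hypothetical placeholders; the only point illustrated is that an identity output unit can follow a growth trend outside the band a sigmoid output is confined to.

```python
import numpy as np

def make_windows(series, p=3):
    """Sliding window: p past values -> next value."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = np.array(series[p:]).reshape(-1, 1)
    return X, y

def fit_linear_output(X, y, hidden=5, lr=0.05, epochs=5000, seed=1):
    """Sigmoid hidden layer, y = x (identity) output unit, plain BP."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.3, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.3, (hidden, 1));          b2 = np.zeros(1)
    s = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = s(X @ W1 + b1)
        err = (h @ W2 + b2) - y                   # identity output: no squashing
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * h * (1.0 - h)
        gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: s(x @ W1 + b1) @ W2 + b2

# hypothetical toy series with a growth trend (not data from the thesis),
# scaled into a modest interval before training
scale = 40.0
series = [(t + 0.3 * np.sin(t)) / scale for t in range(30)]
X, y = make_windows(series, p=3)
predict = fit_linear_output(X, y)
window = list(series[-3:])
for _ in range(5):                                # recursive 5-step forecast
    nxt = float(predict(np.array([window]))[0, 0])
    window = window[1:] + [nxt]
    print(round(nxt * scale, 2))                  # back to original units
```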
(4) An optimization method based on BP neural networks was developed.
With the unipolar sigmoid as the activation function and output maximization as the representative case, general mathematical models of the unconstrained and constrained optimization problems based on a BP neural network were formulated. On that basis, the basic ideas of the unconstrained and constrained optimization methods were given, the partial derivatives of the network output with respect to the inputs were derived, and computational procedures for the unconstrained and constrained optimization methods were obtained.
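The derivation is not reproduced in this abstract, so the sketch below only illustrates the underlying idea for a three-layer network whose units all use the unipolar sigmoid: obtain the partial derivative of the trained network's scalar output with respect to its inputs by the chain rule, then climb that gradient, projecting each input back onto a simple box constraint. The function names, the projection step, and the step-size settings are assumptions rather than the procedure given in the thesis; minimization (for example, of a pressure loss) would descend the same gradient instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_and_grad(x, W1, b1, W2, b2):
    """Scalar output of a trained three-layer network with unipolar sigmoid
    units, plus its partial derivative with respect to the input vector x.
    Shapes: x (d,), W1 (d, hidden), b1 (hidden,), W2 (hidden, 1), b2 (1,)."""
    h = sigmoid(W1.T @ x + b1)                        # hidden activations
    y = sigmoid((W2.T @ h + b2).item())               # output in (0, 1)
    # chain rule: dy/dx = y(1 - y) * W1 @ (W2 * h(1 - h))
    grad = y * (1.0 - y) * (W1 @ (W2.ravel() * h * (1.0 - h)))
    return y, grad

def maximize_output(W1, b1, W2, b2, lo, hi, x0, step=0.1, iters=500):
    """Projected gradient ascent on the inputs: climb dy/dx while keeping
    every input component inside its box constraint [lo, hi]."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        _, g = output_and_grad(x, W1, b1, W2, b2)
        x = np.clip(x + step * g, lo, hi)             # ascent step, then project
    return x, output_and_grad(x, W1, b1, W2, b2)[0]
```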
(5) Computer programs were written for the standard BP algorithm, the improved BP algorithm, the time-series prediction method based on the improved BP algorithm, and the optimization method based on BP neural networks.
(6) Applications of BP neural networks in agricultural mechanization were studied.
First, the time-series prediction program based on the improved BP algorithm was used to forecast the total agricultural machinery power of Heilongjiang Province, and values for the next five years were given; the results show that the method has high prediction accuracy. Second, the optimization program based on BP neural networks was used to optimize the process parameters of the inertial separation chamber of the air-suction stripper combine harvester, and the optimal process parameters minimizing the pressure loss of the chamber were obtained; the results can serve as a theoretical basis for the design and optimization of inertial separation chambers of this type.
BP neural network is one of the most mature and most widely used artificial neural network models. Thanks to its simple structure, ease of use, good self-learning capability, and ability to approximate nonlinear objective functions effectively, it has been widely applied in automatic control, pattern recognition, image recognition, signal processing, prediction, function approximation, system simulation, and other fields. However, the BP algorithm also has many deficiencies: the initial learning rate is difficult to select, convergence is slow, fluctuation and sometimes oscillation appear near the optimal solution, and extrapolation is poor for time-series prediction problems with a growth trend. A further systematic study of these issues therefore has both theoretical significance and important application value.
It has been proved that a three-layer BP neural network with enough hidden nodes can approximate any complex nonlinear mapping, so the BP network has a strong fitting ability. In practice, however, people care not only about the fitting quality but also about which input values make the output reach a maximum or a minimum. This is an optimization problem posed on a BP neural network, and no published research on it has been found so far. Although some papers are described as BP neural network optimization, they focus on optimizing the weights, the learning rate, or the network structure, or they simply evaluate the trained input-output relation and choose a better output value, which is simulation plus selection rather than true optimization. Exploring genuine optimization based on BP neural networks therefore has theoretical significance as well as practical application value.
This thesis analyzes the reasons for the shortcomings of the BP neural network, proposes an improved BP algorithm and a new method for time-series prediction, and on this basis studies optimization problems based on BP neural networks. Finally, the theoretical results are applied to forecasting the total agricultural machinery power of Heilongjiang Province and to optimizing the process parameters of the inertial separation chamber of the air-suction stripper combine harvester.
The main results achieved in the research are as follows:
(1) The causes of the problems of the BP algorithm, and of its poor extrapolation in time-series forecasting, were identified.
(2) An improved BP neural network algorithm was proposed.
In the improved algorithm each weight has its own learning rate. The algorithm makes fuller use of the information in the negative gradient direction and lets each learning rate change as needed; it overcomes the fluctuation and oscillation that appear when the network approaches the optimal solution and clearly improves the computational accuracy. Since each iteration inherits the learning rates of the previous one, the learning speed is also improved. In addition, the improved BP algorithm is essentially independent of the initial learning rate, which avoids the difficulty of selecting it.
(3) A new method for time-series prediction based on BP neural networks was presented.
First, the shortcomings of prediction with BP neural networks were pointed out; according to the structural features of BP time-series prediction and based on Z-transform theory, a new activation function was given, and the reason why using y=x as the activation function is equivalent to using y=a+bx was explained. Second, the computational formulas and the model of the BP algorithm with y=x as the activation function were derived. Finally, calculations on examples and real data showed that, for time series with a growth trend, extrapolation is poor when the unipolar sigmoid is used as the activation function but good when y=x is used. In addition, the extrapolation with y=x is essentially unaffected by the data-processing interval, whereas with the unipolar sigmoid it is strongly affected by it. Using y=x as the activation function thus overcomes the shortcomings of the unipolar-sigmoid BP network in prediction problems.
(4) An optimization method based on BP neural networks was given. Taking the unipolar sigmoid as the activation function and output maximization as the representative case, general mathematical models of unconstrained and constrained optimization problems based on a BP neural network were formulated; on this basis, the basic ideas of the unconstrained and constrained optimization methods were given, the partial derivatives of the network output with respect to the inputs were derived, and the computational procedures of the unconstrained and constrained optimization methods were obtained.
(5) Computer programs were written for the standard BP algorithm, the improved BP algorithm, the time-series prediction method based on the improved BP algorithm, and the optimization method based on BP neural networks.
(6) The application of BP neural networks in agricultural mechanization was studied.
First, the time-series prediction program based on the improved BP algorithm was used to forecast the total agricultural machinery power of Heilongjiang Province, and the values for the next five years were given; the results show that the method has high prediction accuracy. Second, the optimization program based on BP neural networks was used to optimize the process parameters of the inertial separation chamber of the air-suction stripper combine harvester, and the optimal process parameters minimizing the pressure loss of the chamber were obtained; the results can provide a theoretical basis for the design and optimization of inertial separation chambers of this type.