
A Review on Enhancements to Speed up Training of the Batch Back Propagation Algorithm

Affiliations

  • Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin, Terengganu, Malaysia

Abstract


Objectives: This review assesses the efficiency of several parameters for improving the training time and accuracy of the batch back propagation (BP) algorithm. Methods: Researchers have applied many methods to speed up BP training, including heuristic methods, flat-spot elimination, the Fletcher–Powell method, and quasi-Newton methods. The heuristic approach covers two techniques. The first selects a suitable value for the training rate and the momentum term, either jointly or individually. The second creates a dynamic training rate with a penalty term to avoid local minima. Findings: Whether training is slow or fast depends on how the weights are adjusted in the BP algorithm. The training rate and momentum term are the key parameters controlling the weight update, but choosing suitable values for them is difficult. If the weight adjustment is too small, the BP algorithm trains slowly; if the weights are over-adjusted, the algorithm trains faster but the training error oscillates. Either extreme is therefore unsuitable for BP learning. Existing studies do not address the relationship between the training rate, the momentum term, and gross weight updates, even though gross updates lead to training saturation or reduced training accuracy. This study suggests creating a dynamic training rate with a boundary together with a momentum term, and then establishing a relationship between them that keeps the weight adjustment moderate and avoids gross weight updates. Improvements: This study will guide researchers in creating a dynamic training rate and momentum term with an inverse relationship, or a boundary, to escape gross weight updates and maintain high training accuracy.
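The weight-update behaviour discussed in the abstract can be illustrated in code. The following is a minimal Python sketch, not the method of any specific paper reviewed here: a gradient-descent update with a momentum term, plus a bounded adaptive training rate that grows while the error falls and shrinks when it rises. The error function, momentum value, growth/decay factors, and bounds are all assumptions chosen for illustration.

```python
# Illustrative sketch of a BP-style weight update with a momentum term and a
# bounded adaptive training rate. All constants here are assumptions for
# illustration, not values taken from the reviewed papers.

def bp_step(w, grad, velocity, lr, momentum):
    """One gradient-descent update with a momentum term."""
    velocity = momentum * velocity - lr * grad  # momentum smooths the step
    return w + velocity, velocity

def bounded_lr(lr, prev_err, curr_err, lr_min=1e-4, lr_max=0.5):
    """Grow the rate while the error falls, shrink it when the error rises,
    and clip it to [lr_min, lr_max] to avoid gross weight updates."""
    lr = lr * 1.05 if curr_err < prev_err else lr * 0.7
    return min(max(lr, lr_min), lr_max)

# Toy one-weight problem: E(w) = 0.5 * w**2, so the gradient is simply w.
w, v = 2.0, 0.0
lr, prev_err = 0.1, float("inf")
for _ in range(200):
    w, v = bp_step(w, grad=w, velocity=v, lr=lr, momentum=0.5)
    err = 0.5 * w * w
    lr = bounded_lr(lr, prev_err, err)
    prev_err = err

print(err < 1e-6, 1e-4 <= lr <= 0.5)
```

Keeping the training rate inside a fixed boundary is one simple way to realise the "moderate weight adjustment" the review argues for: the rate can adapt to the error trend, but it can never grow large enough to produce a gross weight update.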

Keywords

Batch Back Propagation Algorithm, Local Minimum, Momentum Term, Speed Up Training, Training Rate.





This work is licensed under a Creative Commons Attribution 3.0 License.