| Peer-Reviewed

Research on Face Recognition Algorithm Based on Improved Residual Neural Network

Received: 18 March 2021     Accepted: 30 March 2021     Published: 12 April 2021
Abstract

When a residual neural network is used for face recognition, two problems commonly arise: the first is overfitting, and the second is slow convergence, or non-convergence, of the network's loss function in the later stages of training. To address overfitting, this paper increases the number of training samples by adding Gaussian noise and salt-and-pepper noise to the original images, thereby augmenting the data, and then introduces dropout into the network to improve its generalization ability. In addition, the loss function and optimization algorithm of the network are improved. After analyzing the advantages and disadvantages of the Softmax, center, and triplet loss functions, a joint loss function is proposed. As for the optimization algorithm now widely used for such networks, the Adam algorithm, its convergence speed is relatively fast, but the converged result is not always satisfactory. Based on the characteristics of sample iteration during convolutional neural network training, this paper introduces a memory factor and the idea of momentum into the Adam optimization algorithm, which both accelerates network convergence and improves the quality of the converged result. Simulation experiments on the data-augmented ORL and Yale face databases demonstrate the feasibility of the proposed method. Finally, this paper compares the training time and power consumption of the network before and after the improvements on the CMU_PIE database, and comprehensively analyzes their performance.
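The augmentation step described above, adding Gaussian and salt-and-pepper noise to each original image, can be sketched in NumPy. The paper's exact noise parameters are not given here, so the noise levels (`sigma`, `amount`) below are illustrative assumptions; the image size matches the 112x92 grayscale format of the ORL database.

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0):
    """Add zero-mean Gaussian noise (std `sigma`) to a uint8 grayscale image."""
    noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(img, amount=0.02):
    """Flip a fraction `amount` of pixels to black (pepper) or white (salt)."""
    noisy = img.copy()
    mask = np.random.random(img.shape)
    noisy[mask < amount / 2] = 0          # pepper
    noisy[mask > 1 - amount / 2] = 255    # salt
    return noisy

# Each original image yields two additional training samples,
# tripling the size of the training set.
original = np.random.randint(0, 256, (112, 92), dtype=np.uint8)  # stand-in for an ORL image
augmented = [original,
             add_gaussian_noise(original),
             add_salt_pepper_noise(original)]
```

Because the noisy copies keep the same label as the original image, the network sees more varied inputs per identity, which is what discourages overfitting.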

Published in Automation, Control and Intelligent Systems (Volume 9, Issue 1)
DOI 10.11648/j.acis.20210901.16
Page(s) 46-60
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Residual Neural Network, Data Enhancement, Overfitting, Loss Function, Optimization Algorithm

Cite This Article
  • APA Style

    Tang Xiaolin, Wang Xiaogang, Hou Jin, Han Yiting, Huang Ye. (2021). Research on Face Recognition Algorithm Based on Improved Residual Neural Network. Automation, Control and Intelligent Systems, 9(1), 46-60. https://doi.org/10.11648/j.acis.20210901.16


Author Information
  • School of Automation & Information Engineering, Sichuan University of Science & Engineering, Yibin, China (all authors)
