Heterogeneous Micro-Difference Synchronous Parallel Training Algorithm
Computer Engineering & Science (计算机工程与科学)

Basic information
  • Supervisor: National University of Defense Technology
  • Sponsor: College of Computer, National University of Defense Technology
  • ISSN: 1007-130X
  • CN: 43-1258/TP
  • Founded: 1973
  • Category: computer science journal
  • Publisher: Computer Engineering & Science
  • Editor-in-chief: 王志英
  • Frequency: monthly

Heterogeneous Micro-Difference Synchronous Parallel Training Algorithm

Authors: 黄山, 吴煜凡, 吕鹤轩, 段晓东


Abstract: Back propagation neural network (BPNN) is widely used in fields such as behavior recognition and prediction, owing to its strong nonlinearity, self-learning capability, adaptability, and fault tolerance. With the upgrading and optimization of models and the rapid growth of data volume, parallel training architectures based on big data distributed computing frameworks have become mainstream. Apache Flink, a new-generation big data computing framework, is widely adopted for its high throughput and low latency. However, because hardware is upgraded at an accelerating pace and purchased in different batches, real-world Flink clusters are mostly heterogeneous, meaning that computing resources within a cluster are unbalanced. Existing BPNN parallel training models cannot prevent high-performance nodes from idling during training under such unbalanced resources. In addition, in a heterogeneous environment, the communication overhead between nodes grows as the number of nodes increases. Traditional mini-batch gradient descent achieves good optimization quality, but random model initialization combined with the small-batch nature of its updates leads to slow convergence in parallel BPNN training. To address these issues, and to accelerate BPNN parallel training and improve its efficiency in heterogeneous environments, this paper proposes the heterogeneous micro-difference synchronous parallel training (HMDSPT) algorithm. The algorithm scores node performance in a heterogeneous environment and, through a data partitioning module, dynamically allocates data in real time in proportion to those scores, so that the amount of data assigned to each node is proportional to its performance, thereby reducing the idle time of high-performance nodes.
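The core mechanism described in the abstract, allocating each mini-batch across nodes in proportion to their performance scores so that faster nodes receive more data, can be sketched as follows. This is a minimal illustration under assumed inputs, not the paper's actual implementation: the function name `partition_batch` and the example scores are hypothetical.

```python
def partition_batch(total_records: int, scores: dict[str, float]) -> dict[str, int]:
    """Split a batch of records across nodes in proportion to performance scores."""
    total_score = sum(scores.values())
    # Each node's share is proportional to its score (truncated to an integer).
    shares = {node: int(total_records * s / total_score) for node, s in scores.items()}
    # Give any remainder left by integer truncation to the fastest node,
    # so that no records are dropped.
    remainder = total_records - sum(shares.values())
    fastest = max(scores, key=scores.get)
    shares[fastest] += remainder
    return shares

# Example: a heterogeneous 3-node cluster where node A is twice as fast as C.
print(partition_batch(1000, {"A": 2.0, "B": 1.5, "C": 1.0}))
# → {'A': 445, 'B': 333, 'C': 222}
```

Because the allocation tracks the scores, the synchronous barrier at the end of each iteration is reached by all nodes at roughly the same time, which is what reduces the idling of high-performance nodes.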


Related articles

[1] 徐浩桐, 黄山, 孙国璋, 贺菲莉, 段晓东. A load balancing strategy for Flink in cloud environments [J]. Computer Engineering & Science, 2022, 44(05): 779-787.
[2] 颜子杰, 陈孟强, 吴维刚. A parallel optimization mechanism for deep learning based on dynamic allocation of training data [J]. Computer Engineering & Science, 2018, 40(S1): 141-144.
[3] 柳松, 王展. A face recognition method based on radial basis probabilistic neural networks [J]. J4, 2006, 28(2): 57-60.
Note: At the copyright holder's request, the full text cannot be made public; to obtain the full text, please contact the journal's editorial office.