TY - CONF
T1 - Benchmarking Training Methodologies for Dense Neural Networks
AU - Tonkin, Isaac
AU - Harris, Geoff
AU - Novykov, Volodymyr
PY - 2022/8/30
Y1 - 2022/8/30
N2 - Multi-Layer Perceptrons (MLP) trained with Back Propagation (BP) and Extreme Learning Machine (ELM) methodologies are compared and benchmarked on highly non-linear, two-dimensional functions. To ensure a valid comparison, identical numbers of trainable parameters were used for each approach. BP training was applied to an MLP with many hidden layers, whereas ELM training applies only to the Single-Layer Feed-Forward (SLFF) network topology. For the same number of trainable parameters, ELM training was more efficient, requiring less time to train the network, and more effective, reaching a lower final value of the loss function.
AB - Multi-Layer Perceptrons (MLP) trained with Back Propagation (BP) and Extreme Learning Machine (ELM) methodologies are compared and benchmarked on highly non-linear, two-dimensional functions. To ensure a valid comparison, identical numbers of trainable parameters were used for each approach. BP training was applied to an MLP with many hidden layers, whereas ELM training applies only to the Single-Layer Feed-Forward (SLFF) network topology. For the same number of trainable parameters, ELM training was more efficient, requiring less time to train the network, and more effective, reaching a lower final value of the loss function.
U2 - 10.1007/978-3-031-08530-7_59
DO - 10.1007/978-3-031-08530-7_59
M3 - Conference contribution
SN - 978-3-031-08529-1
T3 - Lecture Notes in Computer Science
SP - 707
EP - 713
BT - Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence
A2 - Fujita, Hamido
A2 - Fournier-Viger, Philippe
A2 - Ali, Moonis
A2 - Wang, Yinglin
PB - Springer
ER -