Benchmarking Training Methodologies for Dense Neural Networks

Isaac Tonkin, Geoff Harris, Volodymyr Novykov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review


Multi-Layer Perceptrons (MLPs) trained with Back Propagation (BP) and with the Extreme Learning Machine (ELM) methodology are compared and benchmarked on highly non-linear, two-dimensional functions. To ensure a valid comparison, identical numbers of trainable parameters were used for each approach. The BP-trained MLPs used many hidden layers, whereas ELM training applies only to the Single-Layer Feed-Forward (SLFF) network topology. For the same number of trainable parameters, ELM training was more efficient, requiring less time to train the network, while also being more effective in terms of the final value of the loss function.
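The ELM methodology the abstract refers to fixes the hidden-layer weights at random and solves only the output weights in closed form, which is why it trains quickly and applies only to a single hidden layer. A minimal sketch of this idea, assuming tanh activations and a least-squares output solve (an illustration, not the paper's implementation):

```python
import numpy as np

def elm_fit(X, y, n_hidden, seed=0):
    """Extreme Learning Machine for a single-layer feed-forward net:
    hidden weights are random and never trained; only the output
    weights are found, in one least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Example: approximate a highly non-linear 2-D target surface
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])
W, b, beta = elm_fit(X, y, n_hidden=400)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because training reduces to a single linear solve rather than iterative gradient descent over many layers, the speed advantage reported in the abstract is structural to the method.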
Original language: English
Title of host publication: Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence
Subtitle of host publication: Proceedings of the 35th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2022
Editors: Hamido Fujita, Philippe Fournier-Viger, Moonis Ali, Yinglin Wang
Number of pages: 7
ISBN (Electronic): 978-3-031-08530-7
ISBN (Print): 978-3-031-08529-1
Publication status: Published - 30 Aug 2022

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


