Multi-Layer Perceptrons (MLPs) trained with Back-Propagation (BP) are compared and benchmarked against networks trained with the Extreme Learning Machine (ELM) method on highly non-linear, two-dimensional functions. To ensure a fair comparison, each approach used the same number of trainable parameters. BP training was applied to an MLP with several hidden layers, whereas ELM training applies only to the Single-Layer Feed-Forward (SLFF) neural network topology. For the same number of trainable parameters, ELM training was more efficient, requiring less time to train the network, and also more effective, reaching a lower final value of the loss function.
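The core of the ELM method described above is that the hidden-layer weights of an SLFF network are fixed at random and only the output weights are trained, in closed form, by least squares. A minimal sketch of this idea in NumPy, using an assumed example target (a 2-D sine-cosine product, not necessarily one of the paper's benchmark functions) and assumed hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example target: a highly non-linear 2-D function.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# ELM: hidden-layer weights and biases are drawn at random and never trained.
n_hidden = 200
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer activations

# Only the output weights are trained, solved in one least-squares step
# instead of iterative back-propagation.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
mse = np.mean((pred - y) ** 2)
print(f"training MSE: {mse:.2e}")
```

Because training reduces to a single linear solve, this is typically far faster than iterating BP epochs over the same number of trainable parameters, which matches the efficiency result the abstract reports.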
Title of host publication: Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence
Subtitle of host publication: Proceedings of the 35th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2022
Editors: Hamido Fujita, Philippe Fournier-Viger, Moonis Ali, Yinglin Wang
Number of pages: 7
Publication status: Published - 30 Aug 2022
Series: Lecture Notes in Computer Science