Density results by deep neural network operators with integer weights

    Danilo Costarelli

Abstract

In the present paper, a new family of multi-layer (deep) neural network (NN) operators is introduced. Density results are established in the space of continuous functions on [−1,1], with respect to the uniform norm. First, the case of operators with two layers is considered in detail; then the definition and the corresponding density results are extended to the general case of multi-layer operators. All of the above definitions allow the approximation results to be proved by a constructive approach, in the sense that, for any given f, all the weights, the thresholds, and the coefficients of the deep NN operators can be explicitly determined. Finally, examples of activation functions are provided, together with graphical examples. The main motivation of this work is to provide a multi-layer counterpart of the well-known (shallow) NN operators, in line with the way deep neural models are built in applications.
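For context, the constructive approach mentioned above extends the well-known shallow NN operators of the approximation-theory literature. As a sketch (the precise deep, integer-weight definition is the one given in the paper itself), for a sigmoidal function $\sigma$ these shallow operators on $[-1,1]$ read

\[
F_n(f)(x) := \frac{\sum_{k=-n}^{n} f\!\left(\tfrac{k}{n}\right)\, \phi_\sigma(nx - k)}{\sum_{k=-n}^{n} \phi_\sigma(nx - k)}, \qquad x \in [-1,1], \qquad \phi_\sigma(t) := \tfrac{1}{2}\left[\sigma(t+1) - \sigma(t-1)\right].
\]

Here the inner weights $n$ and the thresholds $-k$ are integers (the feature referred to in the title), and the coefficients $f(k/n)$ are plain samples of $f$, so every parameter of the network is explicitly determined by $f$. The following Python sketch evaluates this shallow operator numerically with the logistic sigmoid; the names phi_sigma and shallow_nn_operator are illustrative, not taken from the paper.

```python
import numpy as np

def phi_sigma(t, sigma):
    # Density kernel generated by the sigmoidal function sigma.
    return 0.5 * (sigma(t + 1.0) - sigma(t - 1.0))

def shallow_nn_operator(f, n, x, sigma):
    # Shallow NN operator on [-1, 1]: a weighted average of the samples
    # f(k/n), with integer inner weights n and integer thresholds -k.
    k = np.arange(-n, n + 1)
    w = phi_sigma(n * x[:, None] - k[None, :], sigma)  # shape (len(x), 2n + 1)
    return (w @ f(k / n)) / w.sum(axis=1)

if __name__ == "__main__":
    logistic = lambda t: 0.5 * (1.0 + np.tanh(0.5 * t))  # numerically stable logistic sigmoid
    f = lambda t: np.cos(np.pi * t)                      # a continuous test function on [-1, 1]
    x = np.linspace(-1.0, 1.0, 501)
    for n in (10, 50, 250):
        err = np.max(np.abs(shallow_nn_operator(f, n, x, logistic) - f(x)))
        print(f"n = {n:4d}   sup-norm error ~ {err:.5f}")
```

Running the script shows the sup-norm error on [−1,1] decreasing as n grows, which is the empirical counterpart, in the shallow case, of the density results stated above.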

Keywords: deep neural networks, neural network operators, density results, ReLU activation function, RePU activation functions, sigmoidal functions
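
For readers less familiar with the terminology, the activations named in the keywords are standard; the following definitions are the usual conventions from the NN approximation literature, not quoted from the paper:

\[
\mathrm{ReLU}(x) := \max\{0, x\}, \qquad \mathrm{RePU}_l(x) := \big(\max\{0, x\}\big)^{l}, \quad l \in \mathbb{N},
\]

while a function $\sigma : \mathbb{R} \to \mathbb{R}$ is called sigmoidal if $\lim_{x \to -\infty} \sigma(x) = 0$ and $\lim_{x \to +\infty} \sigma(x) = 1$.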

How to Cite
Costarelli, D. (2022). Density results by deep neural network operators with integer weights. Mathematical Modelling and Analysis, 27(4), 547–560. https://doi.org/10.3846/mma.2022.15974
Published in Issue
Nov 10, 2022
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
