Abstract and keywords
Abstract (English):
The study considers the regression estimation problem as it is posed in statistical learning theory, that is, as an empirical risk minimization problem. It is shown that a feedforward neural network implements the empirical risk minimization principle for regression estimation. The practical possibility of establishing a confidence interval for the quality of neural network learning is examined. The article proposes a new method for establishing the upper bound of the confidence interval for neural network regression quality over the whole function. It is shown that the actual value of the neural network learning quality is below the expected upper bound of the confidence interval, which demonstrates that the proposed method works. The described approach may be useful in real-world problems that require a guaranteed estimate of neural network regression quality. Further research should focus on finding more precise estimates of the neural network VC-dimension, since that component introduces the most inaccuracy into the construction of the upper bound.
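To illustrate the structure of such a bound, here is a minimal sketch assuming the classical Vapnik-Chervonenkis generalization bound for a loss function bounded by 1 (the exact form and constants used in the article may differ): with probability at least $1-\eta$, the true risk of the trained network satisfies

\[
R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) + \ln\frac{4}{\eta}}{\ell}},
\]

where $R_{\mathrm{emp}}(\alpha)$ is the empirical risk on the training sample, $\ell$ is the sample size, and $h$ is the VC-dimension of the network. The tightness of the resulting upper bound therefore depends directly on how precisely $h$ can be estimated, which motivates the stated direction of further research.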

Keywords:
statistical learning theory, empirical risk minimization, neural network, multilayer perceptron, VC-dimension, learning quality, confidence interval