Abstract
The estimation of probability density functions (pdfs) from unlabeled data samples is a relevant, still open issue in pattern recognition and machine learning. Statistical parametric and nonparametric approaches suffer from severe drawbacks. Only a few instances of neural networks for pdf estimation are found in the literature, owing to the intrinsic difficulty of unsupervised learning under the constraint that the estimated function integrate to one; these networks, in turn, suffer from serious limitations of their own. This paper introduces a soft-constrained algorithm for training a multilayer perceptron (MLP) to estimate pdfs empirically. A variant of the Metropolis-Hastings algorithm, which exploits the probabilistic nature of the MLP itself, is used to satisfy numerically the constraint on the integral of the function learned by the MLP. Preliminary results of a simulation on data drawn from a mixture of Fisher-Tippett pdfs are reported and compared graphically with the estimates yielded by standard statistical techniques, showing the viability of the approach.
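The soft-constraint idea sketched in the abstract can be illustrated with a toy example. The sketch below is not the paper's algorithm: plain Monte Carlo integration over a fixed bounding interval stands in for the Metropolis-Hastings variant, and every architectural and hyperparameter choice (one hidden layer, softplus output, penalty weight, learning rate, gradient clipping, initial bias) is an illustrative assumption. The MLP is trained on unlabeled samples from a two-component mixture of Fisher-Tippett (Gumbel) pdfs by minimizing the negative log-likelihood plus a soft penalty on the squared deviation of the estimated integral from one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled sample from a two-component mixture of Fisher-Tippett
# (Gumbel) pdfs, loosely mirroring the paper's simulation setup.
x = np.concatenate([rng.gumbel(-2.0, 0.7, 400), rng.gumbel(3.0, 1.0, 400)])

# Tiny one-hidden-layer MLP; softplus output keeps f(s) strictly positive.
H = 16
W1 = rng.normal(0, 0.5, (H, 1)); b1 = np.zeros((H, 1))
W2 = rng.normal(0, 0.5, (1, H)); b2 = -3.0 * np.ones((1, 1))  # start with a small f

def forward(s):
    z = np.tanh(W1 @ s[None, :] + b1)   # (H, N) hidden activations
    a = W2 @ z + b2                     # (1, N) pre-activations
    f = np.logaddexp(0.0, a)            # softplus -> non-negative density estimate
    return f[0], a, z

def backward(s, a, z, g):
    """Parameter gradients given dLoss/df = g for inputs s."""
    dA = g[None, :] / (1.0 + np.exp(-a))        # chain rule through softplus
    dZ = (W2.T @ dA) * (1.0 - z**2)             # chain rule through tanh
    return dA @ z.T, dA.sum(1, keepdims=True), dZ @ s[:, None], dZ.sum(1, keepdims=True)

def clip(g, c=1.0):
    """Global-norm gradient clipping, purely for numerical stability."""
    n = np.linalg.norm(g)
    return g * (c / n) if n > c else g

# The integral of f is estimated by plain Monte Carlo over [lo, hi] --
# a deliberate simplification of the paper's Metropolis-Hastings variant.
lo, hi, M = -8.0, 10.0, 2048
lam, lr = 20.0, 5e-3    # penalty weight and step size (hypothetical values)

for step in range(2000):
    u = rng.uniform(lo, hi, M)
    fu, au, zu = forward(u)
    integral = (hi - lo) * fu.mean()

    fx, ax, zx = forward(x)
    # Soft-constrained loss: NLL of the data + lam * (integral - 1)^2.
    gx = -1.0 / (np.maximum(fx, 1e-12) * x.size)            # d(NLL)/df at data points
    gu = 2.0 * lam * (integral - 1.0) * (hi - lo) / M * np.ones(M)  # d(penalty)/df

    gW2, gb2, gW1, gb1 = backward(x, ax, zx, gx)
    dW2, db2, dW1, db1 = backward(u, au, zu, gu)
    W2 -= lr * clip(gW2 + dW2); b2 -= lr * clip(gb2 + db2)
    W1 -= lr * clip(gW1 + dW1); b1 -= lr * clip(gb1 + db1)

fx, _, _ = forward(x)
I = (hi - lo) * forward(rng.uniform(lo, hi, 8192))[0].mean()
print(f"estimated integral of learned f: {I:.3f}")
```

Under this simplification the penalty term plays the role of the soft constraint: the likelihood term pulls the MLP output up at the observed samples while the penalty drives the Monte Carlo estimate of its integral toward one.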