Abstract: This article proposes a novel density estimation based algorithm for carrying out supervised machine learning. The proposed algorithm features O(n) time complexity for generating a classifier, where n is the number of sampling instances in the training dataset. This property is highly desirable in contemporary applications that involve large and still-growing databases. In contrast with kernel density estimation based approaches, the mathematical foundation of the proposed algorithm does not rest on the assumption that the number of training instances approaches infinity. As a result, a classifier generated with the proposed algorithm may deliver higher prediction accuracy than a kernel density estimation based classifier in some cases.
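For context, a minimal sketch of a conventional kernel density estimation based classifier, i.e., the family the proposed algorithm is compared against, is given below. The class name KDEClassifier, the fixed bandwidth, and the use of scikit-learn's KernelDensity are illustrative assumptions; the abstract does not describe the proposed algorithm's construction, so this sketch is not that algorithm.

```python
# Illustrative sketch only: a standard KDE-based classifier of the kind the
# article compares against; NOT the article's proposed algorithm.
import numpy as np
from sklearn.neighbors import KernelDensity

class KDEClassifier:
    """Fit one Gaussian KDE per class and predict by maximum posterior."""

    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth  # assumed fixed; real designs tune it

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One density estimator per class, plus the empirical class prior.
        self.models_ = [KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                        for c in self.classes_]
        self.log_priors_ = [np.log((y == c).mean()) for c in self.classes_]
        return self

    def predict(self, X):
        # log p(x | c) + log p(c) for each class; choose the largest.
        log_post = np.array([m.score_samples(X) + p
                             for m, p in zip(self.models_, self.log_priors_)]).T
        return self.classes_[np.argmax(log_post, axis=1)]
```

Fitting such a model is essentially a matter of storing the training instances per class, which is consistent with the O(n) or O(n log n) construction costs cited for kernel density estimation based classifiers.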
Abstract: In recent years, kernel density estimation has been exploited by computer scientists to model machine learning problems. Kernel density estimation based approaches are of interest due to their low time complexity of either O(n) or O(n log n) for constructing a classifier, where n is the number of sampling instances. In the design of kernel density estimators, one essential issue is how fast the pointwise mean square error (MSE) and/or the integrated mean square error (IMSE) diminishes as the number of sampling instances increases. In this article, it is shown that with the proposed kernel function it is feasible to make the pointwise MSE of the density estimator converge at rate O(n^(-2/3)) regardless of the dimension of the vector space, provided that the probability density function at the point of interest meets certain conditions.
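As a reference for the convergence claim, the standard definitions of the pointwise MSE and the IMSE of a density estimator, together with the rate stated in the abstract, can be written as follows. The notation is generic and not specific to the proposed kernel function; the conditions on the density f at the point x are those referred to, but not spelled out, in the abstract.

```latex
% Standard definitions: pointwise MSE of a density estimator \hat{f}_n at x,
% and its integrated version over the domain.
\[
  \mathrm{MSE}\bigl(\hat{f}_n(x)\bigr)
    = \mathbb{E}\Bigl[\bigl(\hat{f}_n(x) - f(x)\bigr)^{2}\Bigr]
    = \mathrm{Var}\bigl(\hat{f}_n(x)\bigr)
      + \mathrm{Bias}\bigl(\hat{f}_n(x)\bigr)^{2},
  \qquad
  \mathrm{IMSE}(\hat{f}_n) = \int \mathrm{MSE}\bigl(\hat{f}_n(x)\bigr)\, dx .
\]
% The abstract's claim, under its stated (unspecified) conditions on f at x:
\[
  \mathrm{MSE}\bigl(\hat{f}_n(x)\bigr) = O\!\bigl(n^{-2/3}\bigr)
  \quad \text{independently of the dimension of the vector space.}
\]
```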