A new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers is presented, which delivers prediction accuracy comparable to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter.
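As a rough illustration of this averaging idea (the paper describes AODE; the class structure, Laplace smoothing, and toy data below are assumptions of this sketch, not the paper's exact formulation):

```python
from collections import defaultdict

class AODESketch:
    """Toy Averaged One-Dependence Estimators for categorical data.

    Each attribute in turn acts as the sole 'super-parent' of all the
    others; the resulting one-dependence estimators are averaged, which
    weakens naive Bayes' full independence assumption without any
    model selection at training time."""

    def __init__(self, m=1):
        self.m = m  # minimum frequency for a value to serve as a parent

    def fit(self, X, y):
        self.n = len(X)
        self.classes = sorted(set(y))
        n_attrs = len(X[0])
        self.vals = [sorted({row[i] for row in X}) for i in range(n_attrs)]
        self.freq = defaultdict(int)    # count(x_i = v)
        self.joint = defaultdict(int)   # count(y, x_i = v)
        self.pair = defaultdict(int)    # count(y, x_i = v, x_j = w)
        for row, c in zip(X, y):
            for i, v in enumerate(row):
                self.freq[(i, v)] += 1
                self.joint[(c, i, v)] += 1
                for j, w in enumerate(row):
                    if j != i:
                        self.pair[(c, i, v, j, w)] += 1
        return self

    def predict(self, row):
        best, best_score = None, 0.0
        for c in self.classes:
            score = 0.0
            for i, v in enumerate(row):
                if self.freq[(i, v)] < self.m:
                    continue  # AODE's minimum-frequency constraint on parents
                # Laplace-smoothed P(y, x_i)
                p = (self.joint[(c, i, v)] + 1.0) / (
                    self.n + len(self.classes) * len(self.vals[i]))
                for j, w in enumerate(row):
                    if j != i:
                        # Laplace-smoothed P(x_j | y, x_i)
                        p *= (self.pair[(c, i, v, j, w)] + 1.0) / (
                            self.joint[(c, i, v)] + len(self.vals[j]))
                score += p  # average over parents (constant factor omitted)
            if score > best_score:
                best, best_score = c, score
        return best
```

On a tiny hypothetical dataset where the class tracks the first attribute, e.g. `X = [("a","x"), ("a","y"), ("b","x"), ("b","y")]` with `y = [0, 0, 1, 1]`, the sketch predicts class 0 for `("a","x")` and class 1 for `("b","y")`.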

MultiBoosting is an extension to the highly successful AdaBoost technique for forming decision committees that is able to combine AdaBoost's high bias and variance reduction with wagging's superior variance reduction.

It is shown that various rule learning heuristics used in CSM, EPM and SD algorithms all aim at optimizing a trade-off between rule coverage and precision.

This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR, which can be justified by a variant of Bayes' theorem that supports a weaker conditional attribute independence assumption than is required by naive Bayes.

The style of the entries in the Encyclopedia of Machine Learning is expository and tutorial, making the book a practical resource for machine learning experts, as well as professionals in other fields who need to access this vital information but may not have the time to work their way through an entire text on their topic of interest.

An important step towards finding the "AlexNet" of TSC is taken by presenting InceptionTime, an ensemble of deep Convolutional Neural Network models inspired by the Inception-v4 architecture, which matches HIVE-COTE's accuracy while being substantially more scalable.

This paper proposes techniques to overcome the extreme risk of type-1 error by applying well-established statistical practices, which allow the user to enforce a strict upper limit on the risk of experimentwise error.

Properly managing discretization bias and variance can effectively reduce naive-Bayes classification error by adjusting the number of discretization intervals and the number of training instances contained in each interval.
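A minimal sketch of the kind of adjustment described (the sqrt-n rule below follows the proportional-discretization idea of letting both quantities grow with the data; the exact scheme and function names are assumptions of this sketch):

```python
import math

def proportional_cut_points(values):
    """Equal-frequency discretization where both the interval count and
    the instances per interval grow with sqrt(n): more data yields more
    intervals (lower discretization bias) AND fuller intervals (lower
    discretization variance)."""
    n = len(values)
    k = max(1, round(math.sqrt(n)))   # number of intervals
    size = math.ceil(n / k)           # training instances per interval
    ordered = sorted(values)
    # upper boundary of each interval except the last
    return [ordered[i * size - 1] for i in range(1, k) if i * size - 1 < n]

def interval_index(v, cuts):
    """Map a value to its interval, for use as a discrete naive-Bayes attribute."""
    for idx, c in enumerate(cuts):
        if v <= c:
            return idx
    return len(cuts)
```

For example, `proportional_cut_points(list(range(16)))` yields `[3, 7, 11]` (four intervals of four instances each), and `interval_index(5, [3, 7, 11])` returns `1`.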

This paper proposes strategies for estimating performance of a classifier using as little labeling resource as possible and shows that these strategies can reduce the variance in estimation of classifier accuracy by a significant amount compared to simple random sampling.

The use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated.