AbstainAttributePercentile |
Only predicts if the attribute value lies within the percentile range.
|
AbstainAverage |
Averages the predictions of the base classifiers and abstains if the difference is outside the thresholds.
|
AbstainAverageWithClassifierWeights |
Averages the predictions of the base classifiers using classifier weights and abstains if the difference is outside the thresholds.
|
AbstainingCascade |
The specified classifiers form a cascade: if the first one abstains, the second one is used (and so on); otherwise its prediction is returned.
If all classifiers prior to the last one abstained, the prediction of the last one is returned.
|
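A minimal sketch of the cascade logic, assuming abstention is signalled by a missing prediction value (the actual abstention mechanism of the ADAMS classes is not shown):

```java
import weka.classifiers.Classifier;
import weka.core.Instance;
import weka.core.Utils;

// Illustrative only: returns the first non-abstaining prediction in the
// cascade; the last classifier always decides if all others abstained.
public class CascadePredictionSketch {
  public static double predict(Classifier[] cascade, Instance inst) throws Exception {
    for (int i = 0; i < cascade.length - 1; i++) {
      double pred = cascade[i].classifyInstance(inst);
      if (!Utils.isMissingValue(pred))  // interpreted here as "did not abstain"
        return pred;
    }
    // all classifiers prior to the last one abstained
    return cascade[cascade.length - 1].classifyInstance(inst);
  }
}
```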
AbstainingClassifierWrapper |
Wraps an abstaining classifier and allows turning abstaining on or off.
|
AbstainLeastMedianSq |
Finds the base classifier with the best least median squared error.
|
AbstainMinimumProbability |
Abstains if the probability of the chosen class label is below the specified threshold.
|
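A minimal sketch of the thresholding idea, not the actual implementation; abstention is represented here by returning a missing value:

```java
import weka.classifiers.Classifier;
import weka.core.Instance;
import weka.core.Utils;

// Illustrative only: abstain (return a missing value) when the probability
// of the winning class label is below the minimum.
public class MinProbabilitySketch {
  public static double predictOrAbstain(Classifier base, Instance inst,
                                        double minProbability) throws Exception {
    double[] dist = base.distributionForInstance(inst);
    int chosen = Utils.maxIndex(dist);
    if (dist[chosen] < minProbability)
      return Utils.missingValue();  // abstain
    return chosen;                  // index of the chosen class label
  }
}
```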
AbstainVote |
Finds the base classifier with the best least median squared error.
|
ClassificationViaRegressionD |
Class for doing classification using regression methods.
|
ClassifierCascade |
Generates a classifier cascade, with each deeper level of classifiers being built on the input data and either the class distributions (nominal class) or classification (numeric class) of the classifiers of the previous level in the cascade.
The build process stops when the maximum number of levels is reached, the termination criterion is satisfied, or no further improvement is achieved.
If a level performs worse than the prior one, the build process is terminated immediately and the current level is discarded.
|
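A rough sketch of the level-building loop described above; buildLevel(), evaluate() and augmentWithPredictions() are hypothetical helpers standing in for the real build, evaluation and data-augmentation steps:

```java
import java.util.ArrayList;
import java.util.List;
import weka.classifiers.Classifier;
import weka.core.Instances;

// Illustrative control flow only; the helpers below are assumptions.
public abstract class CascadeLoopSketch {
  protected abstract Classifier[] buildLevel(Instances data) throws Exception;
  protected abstract double evaluate(Classifier[] level, Instances data) throws Exception;
  protected abstract Instances augmentWithPredictions(Instances data, Classifier[] level) throws Exception;

  public List<Classifier[]> buildCascade(Instances data, int maxLevels,
                                         double minImprovement) throws Exception {
    List<Classifier[]> levels = new ArrayList<>();
    double best = Double.MAX_VALUE; // lower = better, e.g. error rate
    Instances current = data;
    for (int i = 0; i < maxLevels; i++) {
      Classifier[] level = buildLevel(current);
      double score = evaluate(level, current);
      if (score > best)
        break; // level performs worse than the prior one: discard and stop
      levels.add(level);
      if (best - score < minImprovement)
        break; // no further improvement achieved
      best = score;
      current = augmentWithPredictions(current, level);
    }
    return levels;
  }
}
```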
Consensus |
Outputs predictions only if the ensemble agrees.
|
ConsensusOrVote |
If the required minimum number of classifiers of the ensemble agree on a label, then this label is predicted.
|
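A minimal sketch of the idea, assuming the fallback is an averaged vote over the class distributions (the exact fallback used by the class may differ):

```java
import weka.classifiers.Classifier;
import weka.core.Instance;
import weka.core.Utils;

// Illustrative only: if the most frequently predicted label is backed by at
// least "support" classifiers, it wins; otherwise the averaged class
// distributions decide (a stand-in for the Vote fallback).
public class ConsensusOrVoteSketch {
  public static double predict(Classifier[] ensemble, Instance inst, int support)
      throws Exception {
    int numClasses = inst.numClasses();
    int[] counts = new int[numClasses];
    double[] avg = new double[numClasses];
    for (Classifier c : ensemble) {
      double[] dist = c.distributionForInstance(inst);
      counts[Utils.maxIndex(dist)]++;
      for (int i = 0; i < numClasses; i++)
        avg[i] += dist[i] / ensemble.length;
    }
    int top = Utils.maxIndex(counts);
    if (counts[top] >= support)
      return top;               // consensus reached
    return Utils.maxIndex(avg); // fall back to the averaged vote
  }
}
```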
Corr |
Assumes no missing values; all attributes must be numeric (or possibly 0/1).
|
Fallback |
If the base classifier fails to make predictions, the fallback one is used.
|
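A minimal sketch of the fallback behaviour at prediction time, assuming failures surface as exceptions:

```java
import weka.classifiers.Classifier;
import weka.core.Instance;

// Illustrative only: if the base classifier throws while predicting,
// the fallback classifier supplies the prediction instead.
public class FallbackSketch {
  public static double predict(Classifier base, Classifier fallback, Instance inst)
      throws Exception {
    try {
      return base.classifyInstance(inst);
    }
    catch (Exception e) {
      return fallback.classifyInstance(inst);
    }
  }
}
```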
FilteredClassifierExt |
Class for running an arbitrary classifier on data that has been passed through an arbitrary filter.
|
HighLowSplit |
Uses the base classifier to obtain an initial guess, then obtains the prediction from either the low or the high classifier.
|
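A minimal sketch of the prediction routing for a numeric class; the split point parameter is an assumption:

```java
import weka.classifiers.Classifier;
import weka.core.Instance;

// Illustrative only: a rough guess from the base classifier decides whether
// the "low" or the "high" specialist produces the final prediction.
public class HighLowSplitSketch {
  public static double predict(Classifier guesser, Classifier low, Classifier high,
                               Instance inst, double splitPoint) throws Exception {
    double guess = guesser.classifyInstance(inst);
    return (guess <= splitPoint)
      ? low.classifyInstance(inst)
      : high.classifyInstance(inst);
  }
}
```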
HighLowSplitSingleClassifier |
Uses the base classifier to obtain an initial guess, then obtains the prediction from either the low or the high classifier.
|
InputSmearing |
Extended version of weka.classifiers.meta.Bagging, which allows input smearing of numeric attributes.
Class for bagging a classifier to reduce variance.
|
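A minimal sketch of the smearing step itself (Gaussian noise scaled by each numeric attribute's standard deviation); the bagging around it is omitted and the stdDevFactor parameter is an assumption:

```java
import java.util.Random;
import weka.core.Instance;
import weka.core.Instances;

// Illustrative only: perturbs every numeric, non-class attribute value with
// Gaussian noise scaled by the attribute's standard deviation.
public class SmearSketch {
  public static Instances smear(Instances data, double stdDevFactor, Random rand) {
    Instances smeared = new Instances(data);
    for (int att = 0; att < smeared.numAttributes(); att++) {
      if (att == smeared.classIndex() || !smeared.attribute(att).isNumeric())
        continue;
      double stdDev = Math.sqrt(smeared.variance(att));
      for (int i = 0; i < smeared.numInstances(); i++) {
        Instance inst = smeared.instance(i);
        if (!inst.isMissing(att))
          inst.setValue(att, inst.value(att) + rand.nextGaussian() * stdDev * stdDevFactor);
      }
    }
    return smeared;
  }
}
```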
LeanMultiScheme |
Class for selecting a classifier from among several using cross-validation on the training data or the performance on the training data.
|
LeastMedianSq |
Finds the base classifier with the best least median squared error.
|
LogClassRegressor |
Takes log of the class attribute in the data.
|
LogTargetRegressor |
Takes logs of all numeric attributes in the data.
|
MinMaxLimits |
Allows influencing how the lower/upper limits of the built classifier are handled when making predictions.
The following types of handling are available: AS_IS, MANUAL, CLASS_RANGE
Details on the types:
- AS_IS: prediction does not get changed
- MANUAL: applies the manually specified limit, i.e., at most this limit is output
- CLASS_RANGE: applies the percentage leeway to the class attribute range of the training set to determine the actual limit value.
|
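A minimal sketch of the three handling types applied to the upper limit; the lower limit would be treated symmetrically, and all parameter names are assumptions:

```java
// Illustrative only: clamps a prediction according to the handling type;
// the lower limit would be handled symmetrically with Math.max().
public class UpperLimitSketch {
  public enum LimitHandling { AS_IS, MANUAL, CLASS_RANGE }

  public static double apply(double prediction, LimitHandling handling,
                             double manualLimit, double trainMin, double trainMax,
                             double leewayPercent) {
    switch (handling) {
      case MANUAL:
        // at most the manually specified limit is output
        return Math.min(prediction, manualLimit);
      case CLASS_RANGE:
        // percentage leeway applied to the class range of the training set
        double limit = trainMax + (trainMax - trainMin) * leewayPercent / 100.0;
        return Math.min(prediction, limit);
      case AS_IS:
      default:
        return prediction; // prediction does not get changed
    }
  }
}
```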
PartitionedStacking |
Builds the base classifiers on subsets of the data defined by ranges, with one range corresponding to each base classifier.
|
PeakTransformed |
Uses the maximum peak in the instances.
|
RangeCheck |
Keeps track of the ranges of numeric attributes.
|
SocketFacade |
Uses sockets to communicate with a process for training and making predictions.
|
SubsetEnsemble |
Generates an ensemble using the following approach:
- for each attribute apart from class attribute do:
* create new dataset with only this feature and the class attribute
* remove all instances that contain a missing value
* if no instances left in subset, don't build a classifier for this feature
* if at least 1 instance is left in subset, build base classifier with it
If no classifier gets built at all, use ZeroR as backup model, built on the full dataset.
In addition to the default feature for a subset, a number of random features can be added to the subset before the classifier is trained.
At prediction time, the Vote meta-classifier (using the pre-built classifiers) is used to determine the class probabilities or regression value.
|
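A minimal sketch of the per-attribute subset building step, using the standard Remove filter; the random extra features, the missing-value removal and the ZeroR backup are omitted:

```java
import java.util.ArrayList;
import java.util.List;
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

// Illustrative only: builds one copy of the base classifier per non-class
// attribute, each on a dataset reduced to that attribute plus the class.
public class SubsetBuildSketch {
  public static List<Classifier> build(Instances data, Classifier template) throws Exception {
    List<Classifier> classifiers = new ArrayList<>();
    for (int att = 0; att < data.numAttributes(); att++) {
      if (att == data.classIndex())
        continue;
      Remove keep = new Remove();
      keep.setAttributeIndicesArray(new int[]{att, data.classIndex()});
      keep.setInvertSelection(true); // keep only this attribute and the class
      keep.setInputFormat(data);
      Instances subset = Filter.useFilter(data, keep);
      if (subset.numInstances() == 0)
        continue; // nothing left to train on for this feature
      Classifier c = AbstractClassifier.makeCopy(template);
      c.buildClassifier(subset);
      classifiers.add(c);
    }
    return classifiers;
  }
}
```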
SumTransformed |
Finds the base classifier with the best least median squared error.
|
SuppressModelOutput |
Meta-classifier that enables the user to suppress the model output.
Useful for ensembles, since their output can be extremely long.
|
ThreadSafeClassifierWrapper |
Wraps an abstaining classifier and allows turning abstaining on or off.
|
ThresholdedBinaryClassification |
Meta classifier for binary classification problems that allows specifying a minimum probability threshold for one of the labels.
|
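A minimal sketch of the thresholding idea; which label index counts as "positive" is an assumption:

```java
import weka.classifiers.Classifier;
import weka.core.Instance;

// Illustrative only: the "positive" label is predicted only when its
// probability reaches the threshold; otherwise the other binary label wins.
public class BinaryThresholdSketch {
  public static double predict(Classifier base, Instance inst,
                               int positiveIndex, double minProbability) throws Exception {
    double[] dist = base.distributionForInstance(inst);
    if (dist[positiveIndex] >= minProbability)
      return positiveIndex;
    return 1 - positiveIndex; // the other label in a two-class problem
  }
}
```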
Veto |
If the specified label is predicted by the required minimum number of classifiers of the ensemble, then this label is predicted.
|
VotedImbalance |
Generates an ensemble using the following approach:
- do x times:
* create new dataset, resampled with specified bias
* build base classifier with it
If no classifier gets built at all, use ZeroR as backup model, built on the full dataset.
At prediction time, the Vote meta-classifier (using the pre-built classifiers) is used to determine the class probabilities or regression value.
Instead of using a fixed number of resampled models, you can also specify thresholds (probabilities that the minority class falls below) with an associated number of resampled models to use.
|
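A minimal sketch of the resampling loop, using the standard supervised Resample filter for the biased resampling; the threshold-based choice of the number of models and the ZeroR backup are omitted:

```java
import java.util.ArrayList;
import java.util.List;
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.supervised.instance.Resample;

// Illustrative only: builds "numModels" copies of the base classifier, each
// on a resample of the training data drawn with the given class bias.
public class VotedImbalanceBuildSketch {
  public static List<Classifier> build(Instances data, Classifier template,
                                       int numModels, double bias) throws Exception {
    List<Classifier> models = new ArrayList<>();
    for (int i = 0; i < numModels; i++) {
      Resample resample = new Resample();
      resample.setRandomSeed(i);
      resample.setBiasToUniformClass(bias); // 0 = original distribution, 1 = uniform
      resample.setInputFormat(data);
      Instances sample = Filter.useFilter(data, resample);
      Classifier c = AbstractClassifier.makeCopy(template);
      c.buildClassifier(sample);
      models.add(c);
    }
    return models;
  }
}
```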
WeightedInstancesHandlerWrapper |
A meta-classifier that implements the weka.core.WeightedInstancesHandler interface in order to enable all classifiers to be used in other meta-classifiers that require the base classifier to implement the WeightedInstancesHandler interface.
|