Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
===== MAIN: learn based on training data =====
=== START program1: ./run learn ../dataset2/train
=== END program1: ./run learn ../dataset2/train --- OK [91s]
===== MAIN: predict/evaluate on train data =====
=== START program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in
=== END program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in --- OK [1s]
=== START program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out
=== END program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out --- OK [7s]
=== START program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out
=== END program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out --- OK [1s]
===== MAIN: predict/evaluate on test data =====
=== START program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in
=== END program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in --- OK [0s]
=== START program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out
=== END program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out --- OK [5s]
=== START program4: ./run evaluate ../dataset2/test ../program0/evalTest.out
=== END program4: ./run evaluate ../dataset2/test ../program0/evalTest.out --- OK [0s]
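The strip/predict/evaluate sequence logged above has a simple shape: remove the labels from a copy of the dataset, run the learned model over the unlabeled examples, and compare its predictions against the original labels. A minimal sketch in Python, assuming a toy line format of 'label feature-tokens...' (the real dataset formats and the predictor itself are not shown here):

```python
def strip_labels(labeled):
    """Drop the label (the first token) from each 'label features...' line."""
    return [line.split(maxsplit=1)[1] for line in labeled]

def evaluate(labeled, predictions):
    """Fraction of predictions that match the true (first-token) labels."""
    truth = [line.split()[0] for line in labeled]
    correct = sum(t == p for t, p in zip(truth, predictions))
    return correct / len(truth)
```

With this, the stripped file is what `predict` consumes, and `evaluate` only ever compares label columns, mirroring the three logged steps.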
supervised-learning: Main entry for supervised learning for training and testing a program on a dataset.
(learner:Program) SMO_weka_nominal: This program is part of the WEKA classifier library. The code used to generate this program comes from the Java class 'weka/classifiers/functions/SMO.java' in WEKA's libraries.
The following description is taken from this class's JavaDoc:
Implements John Platt's sequential minimal optimization algorithm for training a support vector classifier.
This implementation globally replaces all missing values and transforms nominal attributes into binary ones. It also normalizes all attributes by default. (In that case the coefficients in the output are based on the normalized data, not the original data --- this is important for interpreting the classifier.)
Multi-class problems are solved using pairwise classification (1-vs-1, and, if logistic models are built, pairwise coupling according to Hastie and Tibshirani, 1998).
To obtain proper probability estimates, use the option that fits logistic regression models to the outputs of the support vector machine. In the multi-class case the predicted probabilities are coupled using Hastie and Tibshirani's pairwise coupling method.
Note: for improved speed normalization should be turned off when operating on SparseInstances.
For more information on the SMO algorithm, see
J. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. In B. Schoelkopf and C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, 1998.
S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy (2001). Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation. 13(3):637-649.
Trevor Hastie, Robert Tibshirani: Classification by Pairwise Coupling. In: Advances in Neural Information Processing Systems, 1998.
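The pairwise (1-vs-1) scheme described above trains one binary classifier per pair of classes and lets them vote at prediction time. A minimal sketch, using a hypothetical nearest-class-mean rule in place of a real SVM:

```python
from itertools import combinations

def train_pairwise(points, labels):
    """Fit one mean per class; return one decision function per class pair.
    The pairwise rule here (closer class mean wins) is a toy stand-in."""
    classes = sorted(set(labels))
    means = {c: sum(p for p, l in zip(points, labels) if l == c) /
                labels.count(c) for c in classes}
    def make(a, b):
        return lambda x: a if abs(x - means[a]) <= abs(x - means[b]) else b
    return {(a, b): make(a, b) for a, b in combinations(classes, 2)}

def predict_pairwise(models, x):
    """Run every pairwise classifier and return the class with most votes."""
    votes = {}
    for decide in models.values():
        winner = decide(x)
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

With k classes this builds k*(k-1)/2 binary problems; pairwise coupling (Hastie and Tibshirani, 1998) additionally turns the pairwise outputs into class probabilities, which the voting sketch above does not do.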
NOTE: This algorithm performs no parameter tuning; it uses WEKA's default parameters.
NOTE: WEKA's classifiers read data in the .arff format. For multiclass datasets, the SVMlight format is converted to the .arff multiclass format so they can be read by WEKA programs.
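A conversion of the kind the note describes can be sketched as follows; the attribute names (f1, f2, ...) and the dense-row layout are illustrative choices, not WEKA's actual converter:

```python
def svmlight_to_arff(lines, num_features, relation="dataset"):
    """Convert SVMlight-style lines ('label idx:val ...') into ARFF text.
    Feature indices are assumed 1-based; missing features default to 0."""
    labels, rows = [], []
    for line in lines:
        parts = line.split()
        labels.append(parts[0])
        row = [0.0] * num_features
        for tok in parts[1:]:
            idx, val = tok.split(":")
            row[int(idx) - 1] = float(val)
        rows.append(row)
    classes = sorted(set(labels))
    header = ["@relation " + relation]
    header += ["@attribute f%d numeric" % (i + 1) for i in range(num_features)]
    header.append("@attribute class {%s}" % ",".join(classes))
    header.append("@data")
    data = [",".join(str(v) for v in row) + "," + lab
            for row, lab in zip(rows, labels)]
    return "\n".join(header + data)
```

Note the speed caveat above: expanding sparse SVMlight rows into dense ARFF rows, as this sketch does, is exactly where SparseInstances and disabled normalization would pay off in WEKA itself.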
(dataset:Dataset) Los tres mosqueteros: Los tres mosqueteros (Spanish: The Three Musketeers) is a 1942 Mexican film
(stripper:Program[Strip]) multiclass-utils: Validates and inspects a dataset in MulticlassClassification format.
(evaluator:Program[Evaluate]) classification-evaluator: Evaluates predictions of classification datasets (discrete outputs).
Go to the page for the run and look at the log file for signs of the error responsible.
You can also download the run and run it locally on your machine (a README file should
be included in the download which provides more information).
We said that a run was simply a program/dataset pair, but that's not the full story.
A run actually includes other helper programs such as the evaluation program and
various programs for reductions (e.g., one-versus-all, hyperparameter tuning).
More formally, a run is given by a run specification,
which can be found on the page for any run.
A run specification is a tree where each internal node represents a program
and its children represent the arguments to be passed into its constructor.
For example, the one-versus-all program takes your binary classification program
as a constructor argument and behaves like a multiclass classification program.
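A sketch of such a reduction, with a hypothetical `make_binary` factory standing in for the binary classification program passed to the constructor (the `fit`/`score` interface and the toy `MeanScorer` learner are illustrative assumptions):

```python
class OneVersusAll:
    """Wraps a binary learner factory so it behaves like a multiclass one.
    `make_binary` is the constructor argument: calling it must return an
    object with fit(points, bools) and score(x) -> float."""
    def __init__(self, make_binary):
        self.make_binary = make_binary
        self.models = {}

    def learn(self, points, labels):
        # One binary problem per class: this class versus everything else.
        for c in set(labels):
            m = self.make_binary()
            m.fit(points, [l == c for l in labels])
            self.models[c] = m

    def predict(self, x):
        # Pick the class whose binary model is most confident.
        return max(self.models, key=lambda c: self.models[c].score(x))

class MeanScorer:
    """Toy binary learner: confidence is negative distance to the
    mean of the positive examples."""
    def fit(self, points, flags):
        pos = [p for p, f in zip(points, flags) if f]
        self.mean = sum(pos) / len(pos)
    def score(self, x):
        return -abs(x - self.mean)
```

In run-specification terms, `OneVersusAll(MeanScorer)` is the internal node and `MeanScorer` its child argument: the wrapper exposes the multiclass learn/predict interface while delegating each binary subproblem to the supplied program.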