===== MAIN: learn based on training data =====
=== START program1: ./run learn ../dataset2/train
Processing training examples...
Smoothing and normalizing (reverse = true)...
=== END program1: ./run learn ../dataset2/train --- OK [94s]
===== MAIN: predict/evaluate on train data =====
=== START program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in
=== END program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in --- OK [6s]
=== START program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out
Predicting test examples...
=== END program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out --- OK [128s]
=== START program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out
=== END program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out --- OK [5s]
===== MAIN: predict/evaluate on test data =====
=== START program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in
=== END program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in --- OK [3s]
=== START program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out
Predicting test examples...
=== END program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out --- OK [54s]
=== START program4: ./run evaluate ../dataset2/test ../program0/evalTest.out
=== END program4: ./run evaluate ../dataset2/test ../program0/evalTest.out --- OK [2s]
supervised-learning: Main entry point for supervised learning: training and testing a program on a dataset.
(learner:Program) reverse-naive-bayes: Each feature has a smoothed distribution over labels. Just take a weighted vote.
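The reverse-naive-bayes idea above can be sketched as follows. This is a minimal illustration, not the run's actual implementation: the smoothing scheme (add-alpha here) and the function names are assumptions, since the log only shows "Smoothing and normalizing".

```python
import numpy as np

def learn(X, y, num_labels, alpha=1.0):
    """For each feature, estimate a smoothed distribution over labels.

    X: (n, d) nonnegative feature matrix; y: (n,) integer labels.
    alpha is an add-alpha smoothing pseudo-count (an assumption; the
    actual smoothing used by the run is not shown in the log).
    """
    n, d = X.shape
    counts = np.full((d, num_labels), alpha)  # start from smoothing pseudo-counts
    for i in range(n):
        active = X[i] > 0
        counts[active, y[i]] += X[i][active]  # feature value adds to its label's count
    # Normalize each feature's counts into a distribution over labels.
    return counts / counts.sum(axis=1, keepdims=True)

def predict(X, feature_label_dist):
    """Each active feature casts a vote for labels, weighted by its value."""
    votes = X @ feature_label_dist            # (n, num_labels) accumulated votes
    return votes.argmax(axis=1)
```

Each feature contributes its label distribution, weighted by the feature's value, and the label with the largest total vote wins.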
(dataset:Dataset) Synthetic 75% Density, Large, Few Labels: A synthetically generated dataset
=label(i) = argmax_j w(j)'*x(i) for randomly generated weight vectors.
=weight vector elements are independently sampled from a Normal distribution.
=density is the percentage of the weight vector's elements that were not set to 0.
=x(i) is normally distributed according to a multivariate Gaussian with random parameters.
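The generation recipe above can be sketched as follows. This is an illustrative reading of the description, not the actual generator: the parameter names and the way the random covariance is constructed are assumptions.

```python
import numpy as np

def generate(n, d, num_labels, density=0.75, seed=0):
    """Generate a synthetic multiclass dataset as described:
    label(i) = argmax_j w(j)' x(i) for random sparse weight vectors."""
    rng = np.random.default_rng(seed)
    # Weight vectors: i.i.d. Normal entries, then zero out a (1 - density)
    # fraction of the elements.
    W = rng.standard_normal((num_labels, d))
    W *= rng.random((num_labels, d)) < density
    # Inputs: multivariate Gaussian with random mean and covariance
    # (A @ A.T guarantees a positive semi-definite covariance matrix).
    mean = rng.standard_normal(d)
    A = rng.standard_normal((d, d))
    cov = A @ A.T
    X = rng.multivariate_normal(mean, cov, size=n)
    # Label each example by the weight vector with the largest dot product.
    y = (X @ W.T).argmax(axis=1)
    return X, y
```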
(stripper:Program[Strip]) multiclass-utils: Validates and inspects a dataset in MulticlassClassification format.
(evaluator:Program[Evaluate]) classification-evaluator: Evaluates predictions of classification datasets (discrete outputs).
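At its core, evaluating discrete predictions reduces to comparing predicted labels with true labels. A minimal sketch of such an error-rate computation (the actual evaluator may report more statistics than this):

```python
def error_rate(true_labels, predicted_labels):
    """Fraction of examples whose predicted label differs from the true one."""
    assert len(true_labels) == len(predicted_labels), "prediction count mismatch"
    wrong = sum(t != p for t, p in zip(true_labels, predicted_labels))
    return wrong / len(true_labels)
```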
Go to the page for the run and look at the log file for signs of the responsible error.
You can also download the run and run it locally on your machine (a README file should
be included in the download which provides more information).
We said that a run was simply a program/dataset pair, but that's not the full story.
A run actually includes other helper programs such as the evaluation program and
various programs for reductions (e.g., one-versus-all, hyperparameter tuning).
More formally, a run is given by a run specification,
which can be found on the page for any run.
A run specification is a tree where each internal node represents a program
and its children represent the arguments to be passed into its constructor.
For example, the one-versus-all program takes your binary classification program
as a constructor argument and behaves like a multiclass classification program.
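The one-versus-all reduction described above can be sketched as follows. The class and method names are assumptions for illustration; the point is that the binary learner is supplied as a constructor argument, mirroring how a run specification passes one program into another.

```python
import numpy as np

class OneVersusAll:
    """Wrap a binary learner constructor so the result behaves like a
    multiclass learner: one binary problem per label, highest score wins."""

    def __init__(self, make_binary_learner):
        self.make_binary_learner = make_binary_learner  # constructor argument
        self.learners = {}

    def learn(self, X, y):
        # Train one binary learner per label: this label vs. the rest.
        for label in sorted(set(y)):
            learner = self.make_binary_learner()
            learner.learn(X, (y == label).astype(int))
            self.learners[label] = learner

    def predict(self, X):
        # Predict the label whose binary learner scores highest.
        labels = sorted(self.learners)
        scores = np.column_stack([self.learners[l].score(X) for l in labels])
        return np.array(labels)[scores.argmax(axis=1)]

class LeastSquaresBinary:
    """A trivial stand-in binary learner (illustrative only)."""

    def learn(self, X, y):
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def score(self, X):
        return X @ self.w
```

Usage follows the same shape as the run specification: `OneVersusAll(LeastSquaresBinary)` takes the binary program's constructor and yields a multiclass program.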