===== MAIN: learn based on training data =====
=== START program1: ./run learn ../dataset2/train
main() {
Reading examples from ../dataset2/train {
14000 examples, 16 features, 26 labels
}
Iteration 0 {
numMistakes = 6274/14000 = 0.448
}
Iteration 1 {
numMistakes = 4787/14000 = 0.342
}
Iteration 2 {
numMistakes = 4434/14000 = 0.317
}
Iteration 3 {
numMistakes = 4262/14000 = 0.304
}
Iteration 4 {
numMistakes = 4169/14000 = 0.298
}
Iteration 5 {
numMistakes = 4040/14000 = 0.289
}
Iteration 6 {
numMistakes = 4062/14000 = 0.290
}
Iteration 7 {
numMistakes = 3988/14000 = 0.285
}
Iteration 8 {
numMistakes = 3947/14000 = 0.282
}
Iteration 9 {
numMistakes = 3933/14000 = 0.281
}
Writing parameters to params
} [9.5s]
=== END program1: ./run learn ../dataset2/train --- OK [9s]
===== MAIN: predict/evaluate on train data =====
=== START program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in
=== END program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in --- OK [1s]
=== START program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out
main() {
Reading parameters from params
Reading examples from ../program0/evalTrain.in {
14000 examples, 16 features, 26 labels
}
Predicting
} [1.2s]
=== END program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out --- OK [1s]
=== START program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out
=== END program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out --- OK [1s]
===== MAIN: predict/evaluate on test data =====
=== START program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in
=== END program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in --- OK [0s]
=== START program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out
main() {
Reading parameters from params
Reading examples from ../program0/evalTest.in {
6000 examples, 16 features, 26 labels
}
Predicting
}
=== END program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out --- OK [2s]
=== START program4: ./run evaluate ../dataset2/test ../program0/evalTest.out
=== END program4: ./run evaluate ../dataset2/test ../program0/evalTest.out --- OK [0s]
real 0m16.998s
user 0m15.173s
sys 0m1.564s
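The transcript above chains together four helper programs: learn, stripLabels, predict, and evaluate. To make the stripLabels and evaluate steps concrete, here is a minimal Python sketch of what such helpers might do. It is only an illustration and makes assumptions not stated in the log: a plain text format with one example per line, the label as the first whitespace-separated field, and one predicted label per line in the predictions file.

    import sys

    def strip_labels(labeled_path, unlabeled_path):
        # Write a copy of the dataset with the label (assumed to be the first
        # field on each line) removed, analogous to the stripLabels step above.
        with open(labeled_path) as fin, open(unlabeled_path, "w") as fout:
            for line in fin:
                fields = line.split()
                fout.write(" ".join(fields[1:]) + "\n")

    def evaluate(labeled_path, predictions_path):
        # Compare the true labels against one predicted label per line and
        # report the fraction of mistakes, analogous to the evaluate step above.
        mistakes = total = 0
        with open(labeled_path) as ftrue, open(predictions_path) as fpred:
            for true_line, pred_line in zip(ftrue, fpred):
                true_label = true_line.split()[0]
                mistakes += (true_label != pred_line.strip())
                total += 1
        print("numMistakes = %d/%d = %.3f" % (mistakes, total, mistakes / float(total)))

    if __name__ == "__main__":
        if sys.argv[1] == "stripLabels":
            strip_labels(sys.argv[2], sys.argv[3])
        elif sys.argv[1] == "evaluate":
            evaluate(sys.argv[2], sys.argv[3])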
Run specification
supervised-learning: Main entry for supervised learning for training and testing a program on a dataset.
(learner:Program) sgd-logistic-stepsize0.5-iter10: Stochastic gradient descent (loss=logistic, stepSize = 1/numUpdates^0.5, take 10 passes over training data)
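The learner line fully determines the update rule. Below is a hypothetical Python sketch of that schedule for binary labels y in {+1, -1}: stochastic gradient descent on the logistic loss, step size 1/sqrt(numUpdates), 10 passes over the training data, printing the number of mistakes per pass in the same style as the log above. The function and variable names are illustrative, not the actual implementation; handling of the 26-label multiclass problem (e.g., via a one-versus-all reduction, described further down) is left out.

    import math
    import random

    def sgd_logistic(examples, num_features, num_iters=10):
        # examples: list of (x, y) pairs, x a list of num_features floats, y in {+1, -1}.
        w = [0.0] * num_features
        num_updates = 0
        for it in range(num_iters):
            mistakes = 0
            random.shuffle(examples)  # one pass over the data in random order
            for x, y in examples:
                score = sum(wi * xi for wi, xi in zip(w, x))
                if y * score <= 0:
                    mistakes += 1  # current weights misclassify this example
                num_updates += 1
                step = 1.0 / math.sqrt(num_updates)  # stepSize = 1/numUpdates^0.5
                # Gradient of log(1 + exp(-y * w.x)); clamp the margin to avoid overflow.
                margin = min(y * score, 30.0)
                scale = -y / (1.0 + math.exp(margin))
                for j in range(num_features):
                    w[j] -= step * scale * x[j]
            print("Iteration %d: numMistakes = %d/%d = %.3f"
                  % (it, mistakes, len(examples), mistakes / float(len(examples))))
        return w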
When you generate a run, you can set a time limit for it (no more than 24 hours); after that point, we will terminate the program.
Your program can use 1.5GB of memory. More information here.
Go to the page for the run and look at the log file for signs of the error responsible.
You can also download the run and run it locally on your machine (the download should
include a README file with more information).
We said that a run was simply a program/dataset pair, but that's not the full story.
A run actually includes other helper programs such as the evaluation program and
various programs for reductions (e.g., one-versus-all, hyperparameter tuning).
More formally, a run is given by a run specification,
which can be found on the page for any run.
A run specification is a tree where each internal node represents a program
and its children represent the arguments to be passed into its constructor.
For example, the one-versus-all program takes your binary classification program
as a constructor argument and behaves like a multiclass classification program.
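As an illustration, here is a hypothetical Python sketch of such a one-versus-all reduction. The learn/score/predict interface is assumed for the example and is not the platform's actual constructor signature; the point is only that one program (the binary learner) is passed as an argument to another (the reduction), mirroring the tree structure of a run specification.

    class OneVersusAll:
        # Reduction from multiclass to binary classification. The constructor
        # takes a factory for binary learners (the child in the specification
        # tree) plus the label set, and the result behaves like a multiclass
        # classifier.

        def __init__(self, make_binary_learner, labels):
            self.labels = list(labels)
            self.learners = {y: make_binary_learner() for y in self.labels}

        def learn(self, examples):
            # Train one binary classifier per label: label y versus all the rest.
            for y, learner in self.learners.items():
                binary = [(x, +1 if label == y else -1) for x, label in examples]
                learner.learn(binary)

        def predict(self, x):
            # Predict the label whose binary classifier gives the highest score.
            return max(self.labels, key=lambda y: self.learners[y].score(x))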