===== MAIN: learn based on training data =====
=== START program1: ./run learn ../dataset2/train
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 1.084D-19
N = 134 M = 10
This problem is unconstrained.
At X0 0 variables are exactly at the bounds
At iterate 0 f= 6.93844D-01 |proj g|= 1.07220D+00
At iterate 10 f= 7.13348D-03 |proj g|= 4.02864D-03
At iterate 20 f= 6.05164D-03 |proj g|= 1.29654D-04
At iterate 30 f= 5.99722D-03 |proj g|= 1.63741D-04
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
134 38 45 1 0 0 9.416D-06 5.993D-03
F = 5.99293065986470502E-003
CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL
Cauchy time 0.000E+00 seconds.
Subspace minimization time 0.000E+00 seconds.
Line search time 4.000E-03 seconds.
Total User time 1.600E-02 seconds.
=== END program1: ./run learn ../dataset2/train --- OK [0s]
===== MAIN: predict/evaluate on train data =====
=== START program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in
=== END program3: ./run stripLabels ../dataset2/train ../program0/evalTrain.in --- OK [1s]
=== START program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out
=== END program1: ./run predict ../program0/evalTrain.in ../program0/evalTrain.out --- OK [0s]
=== START program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out
=== END program4: ./run evaluate ../dataset2/train ../program0/evalTrain.out --- OK [0s]
===== MAIN: predict/evaluate on test data =====
=== START program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in
=== END program3: ./run stripLabels ../dataset2/test ../program0/evalTest.in --- OK [0s]
=== START program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out
=== END program1: ./run predict ../program0/evalTest.in ../program0/evalTest.out --- OK [1s]
=== START program4: ./run evaluate ../dataset2/test ../program0/evalTest.out
=== END program4: ./run evaluate ../dataset2/test ../program0/evalTest.out --- OK [0s]
supervised-learning: Main entry for supervised learning for training and testing a program on a dataset.
(learner:Program) logreg-python: Very simple implementation of logistic regression in Python
(dataset:Dataset) trains: 10 examples, 66 features
(stripper:Program[Strip]) multiclass-utils: Validates and inspects a dataset in MulticlassClassification format.
(evaluator:Program[Evaluate]) classification-evaluator: Evaluates predictions of classification datasets (discrete outputs).
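The logreg-python learner listed above trains with L-BFGS-B, as the optimizer trace at the top of the log shows. A minimal sketch of that kind of learner, assuming SciPy is available (the loss function, regularization constant, and synthetic data here are illustrative, not the actual logreg-python code):

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y, l2=1e-3):
    # Mean logistic loss with L2 regularization, and its gradient.
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    loss = (-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
            + 0.5 * l2 * (w @ w))
    grad = X.T @ (p - y) / len(y) + l2 * w
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 66))        # 10 examples, 66 features, as in the log
y = (X[:, 0] > 0).astype(float)      # synthetic labels for illustration
w0 = np.zeros(X.shape[1])

# jac=True tells SciPy the objective returns (loss, gradient) together.
res = minimize(loss_and_grad, w0, args=(X, y), jac=True, method="L-BFGS-B")
print(res.fun)  # final objective, analogous to the "F =" line in the log
```

The solver prints a trace like the one above when invoked with a positive `iprint` option; here it runs silently and reports the final objective.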
Go to the page for the run and look at the log file for clues about the error.
You can also download the run and run it locally on your machine (the download
includes a README file with more information).
We said that a run was simply a program/dataset pair, but that's not the full story.
A run actually includes other helper programs such as the evaluation program and
various programs for reductions (e.g., one-versus-all, hyperparameter tuning).
More formally, a run is given by a run specification,
which can be found on the page for any run.
A run specification is a tree where each internal node represents a program
and its children represent the arguments to be passed into its constructor.
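This tree structure can be modeled with a small recursive type. A sketch with invented names (`RunSpec`, `render`) rather than CodaLab's actual representation; the node names mirror the programs listed in the log above:

```python
from dataclasses import dataclass, field

@dataclass
class RunSpec:
    # Each node names a program; children are the specs of its
    # constructor arguments.
    program: str
    children: list = field(default_factory=list)

    def render(self, depth=0):
        # Indent each child under its parent, two spaces per level.
        lines = ["  " * depth + self.program]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

spec = RunSpec("supervised-learning", [
    RunSpec("one-versus-all", [RunSpec("logreg-python")]),
    RunSpec("multiclass-utils"),
    RunSpec("classification-evaluator"),
])
print("\n".join(spec.render()))
```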
For example, the one-versus-all program takes your binary classification program
as a constructor argument and behaves like a multiclass classification program.
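The reduction itself can be sketched generically (this is an illustration of the idea, not the actual one-versus-all program, and `CentroidScorer` is an invented toy learner): train one binary scorer per class against the rest, then predict the highest-scoring class.

```python
import numpy as np

class OneVersusAll:
    """Wraps a binary-learner factory so it behaves like a multiclass learner."""

    def __init__(self, make_binary_learner):
        self.make_binary_learner = make_binary_learner
        self.models = {}

    def learn(self, X, y):
        for label in np.unique(y):
            # One binary problem per class: this class versus all the rest.
            model = self.make_binary_learner()
            model.learn(X, (y == label).astype(float))
            self.models[label] = model

    def predict(self, X):
        labels = sorted(self.models)
        # Score every example under every per-class model; take the argmax.
        scores = np.column_stack([self.models[l].score(X) for l in labels])
        return np.array(labels)[scores.argmax(axis=1)]

class CentroidScorer:
    # Toy binary learner: score = negative distance to the positive centroid.
    def learn(self, X, y):
        self.centroid = X[y == 1].mean(axis=0)

    def score(self, X):
        return -np.linalg.norm(X - self.centroid, axis=1)

X = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [11, 0]])
y = np.array([0, 0, 1, 1, 2, 2])
ova = OneVersusAll(CentroidScorer)
ova.learn(X, y)
print(ova.predict(X))  # recovers the three classes on this toy data
```

The constructor takes the binary learner as an argument, which is exactly the shape of the run-specification tree: the multiclass node's child is the binary program.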