1. Test Modules
  2. Training Characteristics
    1. Input Learning
      1. Gradient Descent
      2. Conjugate Gradient Descent
      3. Limited-Memory BFGS
    2. Results
  3. Results

Subreport: Logs for com.simiacryptus.ref.lang.ReferenceCountingBase

Test Modules

Using Seed 2063770261288165376

Training Characteristics

Input Learning

In this test, we use a network to learn this target input, given its pre-evaluated output:

TrainingTester.java:332 executed in 0.01 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(input_target)).flatMap(RefArrays::stream).map(x -> {
      try {
        return x.prettyPrint();
      } finally {
        x.freeRef();
      }
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ 1.208 ], [ 0.7 ], [ 1.764 ], [ -0.384 ] ],
    	[ [ -0.804 ], [ 0.048 ], [ -1.72 ], [ -0.128 ] ],
    	[ [ -1.688 ], [ -0.608 ], [ 1.912 ], [ 0.08 ] ],
    	[ [ -0.852 ], [ 1.524 ], [ 0.496 ], [ -1.028 ] ]
    ]
    [
    	[ [ 1.524 ], [ 0.048 ], [ -0.804 ], [ 1.912 ] ],
    	[ [ 0.496 ], [ -1.72 ], [ 1.764 ], [ -1.688 ] ],
    	[ [ 1.208 ], [ -0.128 ], [ 0.7 ], [ -0.852 ] ],
    	[ [ -0.384 ], [ -1.028 ], [ -0.608 ], [ 0.08 ] ]
    ]
    [
    	[ [ -0.804 ], [ -0.852 ], [ 0.08 ], [ 1.764 ] ],
    	[ [ -1.72 ], [ 1.524 ], [ 0.496 ], [ 0.7 ] ],
    	[ [ 1.912 ], [ -0.128 ], [ -0.608 ], [ -1.028 ] ],
    	[ [ 0.048 ], [ -0.384 ], [ -1.688 ], [ 1.208 ] ]
    ]
    [
    	[ [ -1.72 ], [ -0.384 ], [ -0.804 ], [ -0.128 ] ],
    	[ [ 0.048 ], [ 0.08 ], [ 1.764 ], [ 1.524 ] ],
    	[ [ -1.028 ], [ 0.496 ], [ 0.7 ], [ -0.608 ] ],
    	[ [ 1.912 ], [ 1.208 ], [ -0.852 ], [ -1.688 ] ]
    ]
    [
    	[ [ -0.384 ], [ 1.524 ], [ 0.08 ], [ -1.72 ] ],
    	[ [ 0.496 ], [ 1.912 ], [ -1.688 ], [ -0.852 ] ],
    	[ [ -1.028 ], [ -0.128 ], [ 0.048 ], [ 1.208 ] ],
    	[ [ -0.804 ], [ 0.7 ], [ 1.764 ], [ -0.608 ] ]
    ]
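
The input-learning idea can be illustrated without the framework: the layer's parameters stay frozen and only the input tensor is adjusted by gradient descent until the layer reproduces a pre-evaluated target such as the values above. The snippet below is a minimal, framework-free sketch of that loop, assuming an identity map as the stand-in for the frozen layer and mean squared error as the fitness; none of the names are MindsEye API.

    // Framework-free sketch of "input learning": f stands in for the frozen layer,
    // and only the input x is optimized so that f(x) reproduces the target.
    public class InputLearningSketch {

      // Stand-in for the frozen layer; an ImgCropLayer that crops nothing
      // behaves like an identity map on its input.
      static double[] f(double[] x) {
        return x.clone();
      }

      public static void main(String[] args) {
        double[] target = {1.208, 0.7, 1.764, -0.384}; // one pre-evaluated output row
        double[] x = new double[target.length];        // trainable input, starts at zero
        double lr = 0.5;                                // fixed step size (no line search)
        for (int iter = 0; iter < 100; iter++) {
          double[] y = f(x);
          double loss = 0;
          double[] grad = new double[x.length];
          for (int i = 0; i < x.length; i++) {
            double r = y[i] - target[i];
            loss += r * r / x.length;                   // mean squared error
            grad[i] = 2 * r / x.length;                 // dLoss/dx for an identity f
          }
          for (int i = 0; i < x.length; i++) {
            x[i] -= lr * grad[i];                       // gradient step on the input
          }
          if (loss < 1e-30) break;                      // converged to the target
        }
        System.out.println(java.util.Arrays.toString(x));
      }
    }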

Gradient Descent

First, we train using a basic gradient descent method with weak line search conditions.
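
The "weak line search conditions" here are the Armijo (sufficient decrease) and Wolfe (curvature) tests that the ArmijoWolfeSearch class name refers to. The snippet below is only an illustrative sketch of those two tests, not the actual implementation; c1 and c2 are assumed constant names with typical textbook values. In the log below, th is the trial step size and dx the directional derivative at that step; the WOLF (strong) lines appear at trial steps whose directional derivative has become non-negative.

    // Illustrative sketch (assumed form, not the ArmijoWolfeSearch implementation)
    // of the two line search tests. f0 and d0 are the fitness and directional
    // derivative at th = 0; fTh and dTh are the same quantities at the trial step.
    // Typical constants are c1 = 1e-4 and c2 = 0.9.
    static boolean sufficientDecrease(double th, double f0, double d0, double fTh, double c1) {
      return fTh <= f0 + c1 * th * d0;            // Armijo (sufficient decrease) condition
    }

    static boolean strongWolfeCurvature(double d0, double dTh, double c2) {
      return Math.abs(dTh) <= c2 * Math.abs(d0);  // strong Wolfe curvature condition
    }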

TrainingTester.java:480 executed in 0.49 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2418240667501
Reset training subject: 2418296777117
Constructing line search parameters: GD
th(0)=3.3318736;dx=-0.6663747200000001
New Minimum: 3.3318736 > 2.0508646596394087
END: th(2.154434690031884)=2.0508646596394087; dx=-0.5228086386671718 evalInputDelta=1.2810089403605915
Fitness changed from 3.3318736 to 2.0508646596394087
Iteration 1 complete. Error: 2.0508646596394087 Total: 0.1531; Orientation: 0.0061; Line Search: 0.0552
th(0)=2.0508646596394087;dx=-0.41017293192788173
New Minimum: 2.0508646596394087 > 0.5888559556814956
END: th(4.641588833612779)=0.5888559556814956; dx=-0.21978752185921469 evalInputDelta=1.462008703957913
Fitness changed from 2.0508646596394087 to 0.5888559556814956
Iteration 2 complete. Error: 0.5888559556814956 Total: 0.0666; Orientation: 0.0018; Line Search: 0.0438
th(0)=0.5888559556814956;dx=-0.11777119113629912
New Minimum: 0.5888559556814956 > 2.3833383061792044E-32
WOLF (strong): th(10.000000000000002)=2.3833383061792044E-32; dx=2.2161284925166885E-17 evalInputDelta=0.5888559556814956
END: th(5.000000000000001)=0.14721398892037385; dx=-0.05888559556814955 evalInputDelta=0.4416419667611218
Fitness changed from 0.5888559556814956 to 2.3833383061792044E-32
Iteration 3 complete. Error: 2.3833383061792044E-32 Total: 0.0809; Orientation: 0.0075; Line Search: 0.0570
Zero gradient: 6.904112261803403E-17
th(0)=2.3833383061792044E-32;dx=-4.76667661235841E-33
New Minimum: 2.3833383061792044E-32 > 4.092601131822876E-35
WOLF (strong): th(10.772173450159421)=4.092601131822876E-35; dx=6.644458308135965E-35 evalInputDelta=2.3792457050473814E-32
END: th(5.386086725079711)=2.1618563625746722E-33; dx=-8.685944049186435E-34 evalInputDelta=2.1671526699217373E-32
Fitness changed from 2.3833383061792044E-32 to 4.092601131822876E-35
Iteration 4 complete. Error: 4.092601131822876E-35 Total: 0.0782; Orientation: 0.0023; Line Search: 0.0603
Zero gradient: 2.860979249076399E-18
th(0)=4.092601131822876E-35;dx=-8.185202263645754E-36
New Minimum: 4.092601131822876E-35 > 0.0
END: th(11.60397208403195)=0.0; dx=0.0 evalInputDelta=4.092601131822876E-35
Fitness changed from 4.092601131822876E-35 to 0.0
Iteration 5 complete. Error: 0.0 Total: 0.0458; Orientation: 0.0016; Line Search: 0.0325
Zero gradient: 0.0
th(0)=0.0;dx=0.0 (ERROR: Starting derivative negative)
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0575; Orientation: 0.0016; Line Search: 0.0417
Iteration 6 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 6
Final threshold in iteration 6: 0.0 (> 0.0) after 0.483s (< 30.000s)

Returns

    0.0

Training Converged

Conjugate Gradient Descent

Next, we train using a conjugate gradient descent method, which converges fastest on purely quadratic objectives (those with linear gradients).
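
The QuadraticSearch used here brackets the minimum along the search direction between a step with a negative directional derivative and one with a positive derivative, then interpolates. The sketch below shows one plausible interpolation rule, a secant step on the derivative, which is the minimizer of the quadratic fit of the fitness; this is an assumption about the strategy, not the actual QuadraticSearch code. Applied to the final bracket in the log below, F(2.809) with derivative -0.479 and F(19.664) with derivative 0.644, it gives roughly th = 10.0, consistent with the next trial point F(10.0) recorded there.

    // Sketch of a secant step on the directional derivative: given a bracket
    // [thLo, thHi] with dLo < 0 <= dHi, the zero of the linear interpolant of the
    // derivative is the minimizer of the quadratic interpolant of the fitness.
    static double secantStep(double thLo, double dLo, double thHi, double dHi) {
      return thLo - dLo * (thHi - thLo) / (dHi - dLo);
    }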

TrainingTester.java:452 executed in 0.44 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new QuadraticSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2418729594975
Reset training subject: 2418741837380
Constructing line search parameters: GD
F(0.0) = LineSearchPoint{point=PointSample{avg=3.3318736}, derivative=-0.6663747200000001}
New Minimum: 3.3318736 > 3.3318735999333624
F(1.0E-10) = LineSearchPoint{point=PointSample{avg=3.3318735999333624}, derivative=-0.6663747199933363}, evalInputDelta = -6.663780638405115E-11
New Minimum: 3.3318735999333624 > 3.3318735995335373
F(7.000000000000001E-10) = LineSearchPoint{point=PointSample{avg=3.3318735995335373}, derivative=-0.6663747199533538}, evalInputDelta = -4.664628683315186E-10
New Minimum: 3.3318735995335373 > 3.331873596734764
F(4.900000000000001E-9) = LineSearchPoint{point=PointSample{avg=3.331873596734764}, derivative=-0.6663747196734764}, evalInputDelta = -3.2652360815177417E-9
New Minimum: 3.331873596734764 > 3.331873577143347
F(3.430000000000001E-8) = LineSearchPoint{point=PointSample{avg=3.331873577143347}, derivative=-0.6663747177143348}, evalInputDelta = -2.2856653014713402E-8
New Minimum: 3.331873577143347 > 3.3318734400034318
F(2.4010000000000004E-7) = LineSearchPoint{point=PointSample{avg=3.3318734400034318}, derivative=-0.666374704000343}, evalInputDelta = -1.5999656843845855E-7
New Minimum: 3.3318734400034318 > 3.331872480024102
F(1.6807000000000003E-6) = LineSearchPoint{point=PointSample{avg=3.331872480024102}, derivative=-0.6663746080024009}, evalInputDelta = -1.1199758982449737E-6
New Minimum: 3.331872480024102 > 3.3318657601726684
F(1.1764900000000001E-5) = LineSearchPoint{point=PointSample{avg=3.3318657601726684}, derivative=-0.6663739360168057}, evalInputDelta = -7.839827331768134E-6
New Minimum: 3.3318657601726684 > 3.3318187214023722
F(8.235430000000001E-5) = LineSearchPoint{point=PointSample{avg=3.3318187214023722}, derivative=-0.6663692321176398}, evalInputDelta = -5.487859762798308E-5
New Minimum: 3.3318187214023722 > 3.3314894593075692
F(5.764801000000001E-4) = LineSearchPoint{point=PointSample{avg=3.3314894593075692}, derivative=-0.6663363048234778}, evalInputDelta = -3.841406924309787E-4
New Minimum: 3.3314894593075692 > 3.329185080210266
F(0.004035360700000001) = LineSearchPoint{point=PointSample{avg=3.329185080210266}, derivative=-0.666105813764344}, evalInputDelta = -0.0026885197897343893
New Minimum: 3.329185080210266 > 3.3130767492786255
F(0.028247524900000005) = LineSearchPoint{point=PointSample{avg=3.3130767492786255}, derivative=-0.664492376350407}, evalInputDelta = -0.018796850721374714
New Minimum: 3.3130767492786255 > 3.2014122474817364
F(0.19773267430000002) = LineSearchPoint{point=PointSample{avg=3.2014122474817364}, derivative=-0.6531983144528487}, evalInputDelta = -0.13046135251826385
New Minimum: 3.2014122474817364 > 2.473357656408644
F(1.3841287201) = LineSearchPoint{point=PointSample{avg=2.473357656408644}, derivative=-0.5741398811699405}, evalInputDelta = -0.8585159435913563
New Minimum: 2.473357656408644 > 0.0032246726485927303
F(9.688901040700001) = LineSearchPoint{point=PointSample{avg=0.0032246726485927303}, derivative=-0.020730848189582787}, evalInputDelta = -3.3286489273514075
F(67.8223072849) = LineSearchPoint{point=PointSample{avg=111.39850232015628}, derivative=3.853132382672921}, evalInputDelta = 108.06662872015629
F(5.217100560376924) = LineSearchPoint{point=PointSample{avg=0.7622036378663065}, derivative=-0.31872032748669843}, evalInputDelta = -2.569669962133694
F(36.51970392263847) = LineSearchPoint{point=PointSample{avg=23.432890311035692}, derivative=1.7672060275931116}, evalInputDelta = 20.10101671103569
F(2.809207994049113) = LineSearchPoint{point=PointSample{avg=1.7228281976323216}, derivative=-0.4791762009543761}, evalInputDelta = -1.6090454023676786
F(19.66445595834379) = LineSearchPoint{point=PointSample{avg=3.1120268831458113}, derivative=0.6440149133193677}, evalInputDelta = -0.21984671685418888
3.1120268831458113 <= 3.3318736
New Minimum: 0.0032246726485927303 > 1.874892800860974E-32
F(10.0) = LineSearchPoint{point=PointSample{avg=1.874892800860974E-32}, derivative=8.272826867994357E-18}, evalInputDelta = -3.3318736
Right bracket at 10.0
Converged to right
Fitness changed from 3.3318736 to 1.874892800860974E-32
Iteration 1 complete. Error: 1.874892800860974E-32 Total: 0.3760; Orientation: 0.0015; Line Search: 0.3329
Zero gradient: 6.123549298994782E-17
F(0.0) = LineSearchPoint{point=PointSample{avg=1.874892800860974E-32}, derivative=-3.749785601721949E-33}
New Minimum: 1.874892800860974E-32 > 0.0
F(10.0) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=0.0}, evalInputDelta = -1.874892800860974E-32
0.0 <= 1.874892800860974E-32
Converged to right
Fitness changed from 1.874892800860974E-32 to 0.0
Iteration 2 complete. Error: 0.0 Total: 0.0303; Orientation: 0.0013; Line Search: 0.0206
Zero gradient: 0.0
F(0.0) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=0.0}
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0276; Orientation: 0.0009; Line Search: 0.0138
Iteration 3 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 3
Final threshold in iteration 3: 0.0 (> 0.0) after 0.435s (< 30.000s)

Returns

    0.0

Training Converged

Limited-Memory BFGS

Next, we run the same optimization using L-BFGS, which is nearly ideal for purely quadratic (second-order) functions.
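
L-BFGS builds its search direction from a short history of curvature pairs, here capped at the three most recent points, as the "LBFGS Accumulation History: 3 points" lines below indicate. The snippet below is a generic sketch of the standard two-loop recursion with hypothetical vector helpers dot, axpy and scale; it is not the LBFGS class used by the trainer. With an empty history the recursion reduces to plain gradient descent, which is consistent with iteration 1 of the log below matching the first gradient descent iteration above.

    // Generic sketch of the standard L-BFGS two-loop recursion. s holds recent
    // parameter deltas, y the corresponding gradient deltas (newest last); grad is
    // the current gradient. Helper methods dot/axpy/scale are defined below.
    static double[] lbfgsDirection(double[] grad, java.util.List<double[]> s, java.util.List<double[]> y) {
      int m = s.size();
      double[] q = grad.clone();
      double[] alpha = new double[m];
      for (int i = m - 1; i >= 0; i--) {                 // first loop: newest to oldest
        double rho = 1.0 / dot(y.get(i), s.get(i));
        alpha[i] = rho * dot(s.get(i), q);
        axpy(-alpha[i], y.get(i), q);                    // q -= alpha[i] * y[i]
      }
      if (m > 0) {                                       // scale by an initial Hessian estimate
        double gamma = dot(s.get(m - 1), y.get(m - 1)) / dot(y.get(m - 1), y.get(m - 1));
        scale(gamma, q);
      }
      for (int i = 0; i < m; i++) {                      // second loop: oldest to newest
        double rho = 1.0 / dot(y.get(i), s.get(i));
        double beta = rho * dot(y.get(i), q);
        axpy(alpha[i] - beta, s.get(i), q);              // q += (alpha[i] - beta) * s[i]
      }
      scale(-1.0, q);                                    // negate to obtain a descent direction
      return q;
    }

    static double dot(double[] a, double[] b) {
      double sum = 0;
      for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
      return sum;
    }

    static void axpy(double a, double[] x, double[] acc) {
      for (int i = 0; i < x.length; i++) acc[i] += a * x[i];
    }

    static void scale(double a, double[] x) {
      for (int i = 0; i < x.length; i++) x[i] *= a;
    }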

TrainingTester.java:509 executed in 0.52 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new LBFGS());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setIterationsPerSample(100);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2419169216330
Reset training subject: 2419177438421
Adding measurement 4da4d08a to history. Total: 0
LBFGS Accumulation History: 1 points
Constructing line search parameters: GD
Non-optimal measurement 3.3318736 < 3.3318736. Total: 1
th(0)=3.3318736;dx=-0.6663747200000001
Adding measurement 79a9f3d6 to history. Total: 1
New Minimum: 3.3318736 > 2.0508646596394087
END: th(2.154434690031884)=2.0508646596394087; dx=-0.5228086386671718 evalInputDelta=1.2810089403605915
Fitness changed from 3.3318736 to 2.0508646596394087
Iteration 1 complete. Error: 2.0508646596394087 Total: 0.0589; Orientation: 0.0054; Line Search: 0.0260
Non-optimal measurement 2.0508646596394087 < 2.0508646596394087. Total: 2
LBFGS Accumulation History: 2 points
Non-optimal measurement 2.0508646596394087 < 2.0508646596394087. Total: 2
th(0)=2.0508646596394087;dx=-0.41017293192788173
Adding measurement 518e337d to history. Total: 2
New Minimum: 2.0508646596394087 > 0.5888559556814956
END: th(4.641588833612779)=0.5888559556814956; dx=-0.21978752185921469 evalInputDelta=1.462008703957913
Fitness changed from 2.0508646596394087 to 0.5888559556814956
Iteration 2 complete. Error: 0.5888559556814956 Total: 0.0384; Orientation: 0.0026; Line Search: 0.0257
Non-optimal measurement 0.5888559556814956 < 0.5888559556814956. Total: 3
LBFGS Accumulation History: 3 points
Non-optimal measurement 0.5888559556814956 < 0.5888559556814956. Total: 3
th(0)=0.5888559556814956;dx=-0.11777119113629912
Adding measurement 4f3ee3f2 to history. Total: 3
New Minimum: 0.5888559556814956 > 2.3833383061792044E-32
WOLF (strong): th(10.000000000000002)=2.3833383061792044E-32; dx=2.2161284925166885E-17 evalInputDelta=0.5888559556814956
Non-optimal measurement 0.14721398892037385 < 2.3833383061792044E-32. Total: 4
END: th(5.000000000000001)=0.14721398892037385; dx=-0.05888559556814955 evalInputDelta=0.4416419667611218
Fitness changed from 0.5888559556814956 to 2.3833383061792044E-32
Iteration 3 complete. Error: 2.3833383061792044E-32 Total: 0.0411; Orientation: 0.0017; Line Search: 0.0320
Non-optimal measurement 2.3833383061792044E-32 < 2.3833383061792044E-32. Total: 4
Rejected: LBFGS Orientation magnitude: 6.904e-16, gradient 6.904e-17, dot -1.000; [d65ccb24-b2eb-4444-a1b3-b7696907dfde = 1.000/1.000e+00, 0e95512d-1278-4102-9561-4bf2eabdf176 = 1.000/1.000e+00, 4414c67a-bcd0-435c-810a-92d9e7a3abbf = 1.000/1.000e+00, e049b0dc-6ee1-4359-b486-c4e80b57e935 = 1.000/1.000e+00, 96edb4cf-8591-4582-b347-d2da76c7099d = 1.000/1.000e+00]
Orientation rejected. Popping history element from 2.3833383061792044E-32, 0.5888559556814956, 2.0508646596394087, 3.3318736
LBFGS Accumulation History: 3 points
Removed measurement 4f3ee3f2 to history. Total: 3
Adding measurement 9d52408 to history. Total: 3
th(0)=2.3833383061792044E-32;dx=-4.76667661235841E-33
Adding measurement 4be74e63 to history. Total: 4
New Minimum: 2.3833383061792044E-32 > 4.092601131822876E-35
WOLF (strong): th(10.772173450159421)=4.092601131822876E-35; dx=6.644458308135965E-35 evalInputDelta=2.3792457050473814E-32
Non-optimal measurement 2.1618563625746722E-33 < 4.092601131822876E-35. Total: 5
END: th(5.386086725079711)=2.1618563625746722E-33; dx=-8.685944049186435E-34 evalInputDelta=2.1671526699217373E-32
Fitness changed from 2.3833383061792044E-32 to 4.092601131822876E-35
Iteration 4 complete. Error: 4.092601131822876E-35 Total: 0.1040; Orientation: 0.0664; Line Search: 0.0304
Non-optimal measurement 4.092601131822876E-35 < 4.092601131822876E-35. Total: 5
Rejected: LBFGS Orientation magnitude: 2.861e-17, gradient 2.861e-18, dot -1.000; [e049b0dc-6ee1-4359-b486-c4e80b57e935 = 0.000e+00, 4414c67a-bcd0-435c-810a-92d9e7a3abbf = 0.000e+00, d65ccb24-b2eb-4444-a1b3-b7696907dfde = 0.000e+00, 96edb4cf-8591-4582-b347-d2da76c7099d = 1.000/1.000e+00, 0e95512d-1278-4102-9561-4bf2eabdf176 = 1.000/1.000e+00]
Orientation rejected. Popping history element from 4.092601131822876E-35, 2.3833383061792044E-32, 0.5888559556814956, 2.0508646596394087, 3.3318736
Rejected: LBFGS Orientation magnitude: 2.861e-17, gradient 2.861e-18, dot -1.000; [4414c67a-bcd0-435c-810a-92d9e7a3abbf = 0.000e+00, d65ccb24-b2eb-4444-a1b3-b7696907dfde = 0.000e+00, e049b0dc-6ee1-4359-b486-c4e80b57e935 = 0.000e+00, 96edb4cf-8591-4582-b347-d2da76c7099d = 1.000/1.000e+00, 0e95512d-1278-4102-9561-4bf2eabdf176 = 1.000/1.000e+00]
Orientation rejected. Popping history element from 4.092601131822876E-35, 2.3833383061792044E-32, 0.5888559556814956, 2.0508646596394087
LBFGS Accumulation History: 3 points
Removed measurement 4be74e63 to history. Total: 4
Removed measurement 9d52408 to history. Total: 3
Adding measurement 1f9bd133 to history. Total: 3
th(0)=4.092601131822876E-35;dx=-8.185202263645754E-36
Adding measurement 638bef3f to history. Total: 4
New Minimum: 4.092601131822876E-35 > 0.0
END: th(11.60397208403195)=0.0; dx=0.0 evalInputDelta=4.092601131822876E-35
Fitness changed from 4.092601131822876E-35 to 0.0
Iteration 5 complete. Error: 0.0 Total: 0.1499; Orientation: 0.1176; Line Search: 0.0249
Non-optimal measurement 0.0 < 0.0. Total: 5
Rejected: LBFGS Orientation magnitude: 0.000e+00, gradient 0.000e+00, dot NaN; [96edb4cf-8591-4582-b347-d2da76c7099d = 0.000e+00, d65ccb24-b2eb-4444-a1b3-b7696907dfde = 0.000e+00, 0e95512d-1278-4102-9561-4bf2eabdf176 = 0.000e+00, 4414c67a-bcd0-435c-810a-92d9e7a3abbf = 0.000e+00, e049b0dc-6ee1-4359-b486-c4e80b57e935 = 0.000e+00]
Orientation rejected. Popping history element from 0.0, 4.092601131822876E-35, 0.5888559556814956, 2.0508646596394087, 3.3318736
Rejected: LBFGS Orientation magnitude: 0.000e+00, gradient 0.000e+00, dot NaN; [0e95512d-1278-4102-9561-4bf2eabdf176 = 0.000e+00, d65ccb24-b2eb-4444-a1b3-b7696907dfde = 0.000e+00, 4414c67a-bcd0-435c-810a-92d9e7a3abbf = 0.000e+00, 96edb4cf-8591-4582-b347-d2da76c7099d = 0.000e+00, e049b0dc-6ee1-4359-b486-c4e80b57e935 = 0.000e+00]
Orientation rejected. Popping history element from 0.0, 4.092601131822876E-35, 0.5888559556814956, 2.0508646596394087
LBFGS Accumulation History: 3 points
Removed measurement 638bef3f to history. Total: 4
Removed measurement 1f9bd133 to history. Total: 3
Adding measurement 6c6e8c77 to history. Total: 3
th(0)=0.0;dx=0.0 (ERROR: Starting derivative negative)
Non-optimal measurement 0.0 < 0.0. Total: 4
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.1302; Orientation: 0.1064; Line Search: 0.0164
Iteration 6 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 6
Final threshold in iteration 6: 0.0 (> 0.0) after 0.523s (< 30.000s)

Returns

    0.0

Training Converged

TrainingTester.java:432 executed in 0.15 seconds (0.000 gc):

    return TestUtil.compare(title + " vs Iteration", runs);
Logging
Plotting range=[1.0, -34.38800057997956], [5.0, 0.31193700141606917]; valueStats=DoubleSummaryStatistics{count=9, sum=5.279441, min=0.000000, average=0.586605, max=2.050865}
Plotting 5 points for GD
Plotting 2 points for CjGD
Plotting 5 points for LBFGS

Returns

Result (plot image omitted: error vs Iteration for GD, CjGD, and LBFGS)

TrainingTester.java:435 executed in 0.01 seconds (0.000 gc):

    return TestUtil.compareTime(title + " vs Time", runs);
Logging
Plotting range=[0.0, -34.38800057997956], [0.333, 0.31193700141606917]; valueStats=DoubleSummaryStatistics{count=9, sum=5.279441, min=0.000000, average=0.586605, max=2.050865}
Plotting 5 points for GD
Plotting 2 points for CjGD
Plotting 5 points for LBFGS

Returns

Result (plot image omitted: error vs Time for GD, CjGD, and LBFGS)

Results

TrainingTester.java:255 executed in 0.00 seconds (0.000 gc):

    return grid(inputLearning, modelLearning, completeLearning);

Returns

Result (results grid image omitted)

TrainingTester.java:258 executed in 0.00 seconds (0.000 gc):

    return new ComponentResult(null == inputLearning ? null : inputLearning.value,
        null == modelLearning ? null : modelLearning.value, null == completeLearning ? null : completeLearning.value);

Returns

    {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}

LayerTests.java:425 executed in 0.00 seconds (0.000 gc):

    throwException(exceptions.addRef());

Results

details: {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}
result: OK
  {
    "result": "OK",
    "performance": {
      "execution_time": "2.432",
      "gc_time": "0.261"
    },
    "created_on": 1586737049852,
    "file_name": "trainingTest",
    "report": {
      "simpleName": "Top",
      "canonicalName": "com.simiacryptus.mindseye.layers.cudnn.ImgCropLayerTest.Top",
      "link": "https://github.com/SimiaCryptus/mindseye-cudnn/tree/59d5b3318556370acb2d83ee6ec123ce0fc6974f/src/test/java/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayerTest.java",
      "javaDoc": ""
    },
    "training_analysis": {
      "input": {
        "LBFGS": {
          "type": "Converged",
          "value": 0.0
        },
        "CjGD": {
          "type": "Converged",
          "value": 0.0
        },
        "GD": {
          "type": "Converged",
          "value": 0.0
        }
      }
    },
    "archive": "s3://code.simiacrypt.us/tests/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayer/Top/trainingTest/202004131729",
    "id": "6fccbac6-0db9-402f-a471-1c981e573ab1",
    "report_type": "Components",
    "display_name": "Comparative Training",
    "target": {
      "simpleName": "ImgCropLayer",
      "canonicalName": "com.simiacryptus.mindseye.layers.cudnn.ImgCropLayer",
      "link": "https://github.com/SimiaCryptus/mindseye-cudnn/tree/59d5b3318556370acb2d83ee6ec123ce0fc6974f/src/main/java/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayer.java",
      "javaDoc": ""
    }
  }