1. Test Modules
  2. Training Characteristics
    1. Input Learning
      1. Gradient Descent
      2. Conjugate Gradient Descent
      3. Limited-Memory BFGS
    2. Results
  3. Results

Subreport: Logs for com.simiacryptus.ref.lang.ReferenceCountingBase

Test Modules

Using Seed 3366276633648159744

Training Characteristics

Input Learning

In this test, we use a network to learn a target input, given its pre-evaluated output:

TrainingTester.java:332 executed in 0.01 seconds (0.000 gc):

    return RefArrays.stream(RefUtil.addRef(input_target)).flatMap(RefArrays::stream).map(x -> {
      try {
        return x.prettyPrint();
      } finally {
        x.freeRef();
      }
    }).reduce((a, b) -> a + "\n" + b).orElse("");

Returns

    [
    	[ [ -0.384 ], [ -1.688 ], [ 0.08 ], [ 0.048 ] ],
    	[ [ -1.028 ], [ -0.608 ], [ 1.764 ], [ -1.72 ] ],
    	[ [ -0.852 ], [ 1.524 ], [ 1.912 ], [ -0.804 ] ],
    	[ [ -0.128 ], [ 0.7 ], [ 0.496 ], [ 1.208 ] ]
    ]
    [
    	[ [ 0.7 ], [ -0.384 ], [ -1.688 ], [ 0.496 ] ],
    	[ [ -1.72 ], [ 1.524 ], [ -0.608 ], [ -0.852 ] ],
    	[ [ 0.08 ], [ 1.912 ], [ 0.048 ], [ -0.128 ] ],
    	[ [ -0.804 ], [ 1.764 ], [ -1.028 ], [ 1.208 ] ]
    ]
    [
    	[ [ 0.496 ], [ -1.688 ], [ 1.912 ], [ -1.72 ] ],
    	[ [ -1.028 ], [ -0.384 ], [ -0.804 ], [ 1.524 ] ],
    	[ [ 0.048 ], [ 0.7 ], [ -0.608 ], [ -0.852 ] ],
    	[ [ 1.764 ], [ -0.128 ], [ 0.08 ], [ 1.208 ] ]
    ]
    [
    	[ [ 0.048 ], [ -1.72 ], [ 1.208 ], [ -1.688 ] ],
    	[ [ -0.608 ], [ -0.384 ], [ -0.804 ], [ 0.496 ] ],
    	[ [ -0.852 ], [ 0.08 ], [ 1.524 ], [ 1.764 ] ],
    	[ [ 0.7 ], [ -0.128 ], [ 1.912 ], [ -1.028 ] ]
    ]
    [
    	[ [ 1.764 ], [ -0.608 ], [ 0.08 ], [ -1.028 ] ],
    	[ [ 0.496 ], [ -0.384 ], [ 1.208 ], [ -1.688 ] ],
    	[ [ 1.912 ], [ -0.128 ], [ -0.804 ], [ 0.7 ] ],
    	[ [ -1.72 ], [ -0.852 ], [ 0.048 ], [ 1.524 ] ]
    ]
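The "input learning" setup above can be sketched in plain Java: hold a network's weights fixed, evaluate it once on a hidden target input, then recover that input by gradient descent on the reconstruction error. The `InputLearningSketch` class, its 2x2 linear map `W`, and all method names below are hypothetical stand-ins for illustration only, not the ImgCropLayer or TrainingTester under test.

```java
// Sketch of "input learning": recover an unknown input x from the
// pre-evaluated output y = f(target) of a fixed network f. Here f is a
// hypothetical 2x2 linear map, not the layer exercised by this report.
public class InputLearningSketch {
  static final double[][] W = {{2, 0}, {0, 3}};   // fixed "network" weights

  static double[] forward(double[] x) {
    return new double[]{W[0][0] * x[0] + W[0][1] * x[1],
                        W[1][0] * x[0] + W[1][1] * x[1]};
  }

  // Gradient of 0.5*||f(x) - y||^2 with respect to the INPUT (not the weights).
  static double[] grad(double[] x, double[] y) {
    double[] out = forward(x);
    double[] r = {out[0] - y[0], out[1] - y[1]};   // residual f(x) - y
    return new double[]{W[0][0] * r[0] + W[1][0] * r[1],
                        W[0][1] * r[0] + W[1][1] * r[1]};
  }

  static double[] learnInput(double[] y, double lr, int iters) {
    double[] x = {0, 0};                           // start from zeros
    for (int i = 0; i < iters; i++) {
      double[] g = grad(x, y);
      x[0] -= lr * g[0];
      x[1] -= lr * g[1];
    }
    return x;
  }

  public static void main(String[] args) {
    double[] target = {1.764, -0.608};             // hidden "true" input
    double[] y = forward(target);                  // its pre-evaluated output
    double[] x = learnInput(y, 0.05, 500);
    System.out.printf("recovered x = [%.4f, %.4f]%n", x[0], x[1]);
  }
}
```

Because the map is linear, the loss is quadratic in the input and plain gradient descent contracts to the target, which is why the optimizers below all drive the error to machine zero.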

Gradient Descent

First, we train using a basic gradient descent method with weak line search conditions.
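The "weak line search conditions" here are the Armijo (sufficient decrease) and weak Wolfe (curvature) conditions that the ArmijoWolfeSearch above tests at each trial step. A minimal one-dimensional sketch, assuming a hypothetical `Fn` interface rather than MindsEye's actual line search API:

```java
// Minimal 1-D gradient descent with an Armijo-Wolfe line search sketch.
// A trial step t along descent direction d is accepted when
//   f(x + t*d) <= f(x) + c1*t*f'(x)*d      (Armijo: sufficient decrease)
//   f'(x + t*d)*d >= c2*f'(x)*d            (weak Wolfe: curvature)
public class ArmijoWolfeSketch {
  interface Fn { double f(double x); double df(double x); }

  static double lineSearch(Fn fn, double x, double d, double c1, double c2) {
    double fx = fn.f(x), gx = fn.df(x) * d;        // directional derivative < 0
    double lo = 0, hi = Double.POSITIVE_INFINITY, t = 1.0;
    for (int i = 0; i < 50; i++) {
      if (fn.f(x + t * d) > fx + c1 * t * gx) hi = t;   // too long: shrink
      else if (fn.df(x + t * d) * d < c2 * gx) lo = t;  // too short: grow
      else return t;                                    // both conditions hold
      t = Double.isInfinite(hi) ? 2 * lo : (lo + hi) / 2;
    }
    return t;
  }

  public static void main(String[] args) {
    Fn quad = new Fn() {                           // f(x) = (x - 3)^2
      public double f(double x)  { return (x - 3) * (x - 3); }
      public double df(double x) { return 2 * (x - 3); }
    };
    double x = 0;
    for (int iter = 0; iter < 20; iter++) {
      double d = -quad.df(x);                      // steepest descent direction
      if (d == 0) break;
      x += lineSearch(quad, x, d, 1e-4, 0.9) * d;
    }
    System.out.println("minimizer ~ " + x);
  }
}
```

The `th(...)` values in the log below are exactly such trial step sizes, with `WOLF`/`END` marking which condition terminated the search.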

TrainingTester.java:480 executed in 0.48 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2384937863349
Reset training subject: 2384991102534
Constructing line search parameters: GD
th(0)=3.1623911999999996;dx=-0.63247824
New Minimum: 3.1623911999999996 > 1.9465433358680415
END: th(2.154434690031884)=1.9465433358680415; dx=-0.49621493390536886 evalInputDelta=1.2158478641319581
Fitness changed from 3.1623911999999996 to 1.9465433358680415
Iteration 1 complete. Error: 1.9465433358680415 Total: 0.1425; Orientation: 0.0044; Line Search: 0.0539
th(0)=1.9465433358680415;dx=-0.38930866717360835
New Minimum: 1.9465433358680415 > 0.5589026223307967
END: th(4.641588833612779)=0.5589026223307967; dx=-0.20860759093543893 evalInputDelta=1.3876407135372448
Fitness changed from 1.9465433358680415 to 0.5589026223307967
Iteration 2 complete. Error: 0.5589026223307967 Total: 0.0706; Orientation: 0.0028; Line Search: 0.0450
th(0)=0.5589026223307967;dx=-0.11178052446615937
New Minimum: 0.5589026223307967 > 2.7145982566138087E-32
WOLF (strong): th(10.000000000000002)=2.7145982566138087E-32; dx=2.387046903678849E-17 evalInputDelta=0.5589026223307967
END: th(5.000000000000001)=0.1397256555826991; dx=-0.05589026223307966 evalInputDelta=0.41917696674809757
Fitness changed from 0.5589026223307967 to 2.7145982566138087E-32
Iteration 3 complete. Error: 2.7145982566138087E-32 Total: 0.0852; Orientation: 0.0022; Line Search: 0.0639
Zero gradient: 7.368308159426842E-17
th(0)=2.7145982566138087E-32;dx=-5.429196513227619E-33
New Minimum: 2.7145982566138087E-32 > 2.1666711874356404E-35
WOLF (strong): th(10.772173450159421)=2.1666711874356404E-35; dx=4.3333423748712813E-35 evalInputDelta=2.712431585426373E-32
END: th(5.386086725079711)=3.305377267054594E-33; dx=-1.3394842763213229E-33 evalInputDelta=2.3840605299083495E-32
Fitness changed from 2.7145982566138087E-32 to 2.1666711874356404E-35
Iteration 4 complete. Error: 2.1666711874356404E-35 Total: 0.0808; Orientation: 0.0019; Line Search: 0.0645
Zero gradient: 2.081668171172169E-18
th(0)=2.1666711874356404E-35;dx=-4.333342374871282E-36
New Minimum: 2.1666711874356404E-35 > 0.0
END: th(11.60397208403195)=0.0; dx=0.0 evalInputDelta=2.1666711874356404E-35
Fitness changed from 2.1666711874356404E-35 to 0.0
Iteration 5 complete. Error: 0.0 Total: 0.0451; Orientation: 0.0012; Line Search: 0.0301
Zero gradient: 0.0
th(0)=0.0;dx=0.0 (ERROR: Starting derivative negative)
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0432; Orientation: 0.0013; Line Search: 0.0310
Iteration 6 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 6
Final threshold in iteration 6: 0.0 (> 0.0) after 0.469s (< 30.000s)

Returns

    0.0

Training Converged

Conjugate Gradient Descent

Next, we use a conjugate gradient descent method, which converges fastest on purely linear problems (i.e., quadratic loss surfaces).
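For reference, the textbook linear conjugate gradient method minimizes f(x) = ½xᵀAx − bᵀx for symmetric positive-definite A (equivalently, solves Ax = b) and terminates in at most n steps in exact arithmetic. The `ConjugateGradientSketch` class below is a hypothetical standalone illustration; the report's own run pairs a QuadraticSearch with the IterativeTrainer instead.

```java
// Sketch of linear conjugate gradient: solves Ax = b for SPD A, which is
// the same as minimizing the quadratic f(x) = 0.5*x'Ax - b'x.
public class ConjugateGradientSketch {
  static double[] solve(double[][] A, double[] b, int maxIter, double tol) {
    int n = b.length;
    double[] x = new double[n];
    double[] r = b.clone();                  // residual r = b - Ax for x = 0
    double[] p = r.clone();                  // initial search direction
    double rs = dot(r, r);
    for (int i = 0; i < maxIter && Math.sqrt(rs) > tol; i++) {
      double[] Ap = mul(A, p);
      double alpha = rs / dot(p, Ap);        // exact line search along p
      for (int j = 0; j < n; j++) { x[j] += alpha * p[j]; r[j] -= alpha * Ap[j]; }
      double rsNew = dot(r, r);
      double beta = rsNew / rs;              // conjugacy-preserving update
      for (int j = 0; j < n; j++) p[j] = r[j] + beta * p[j];
      rs = rsNew;
    }
    return x;
  }

  static double dot(double[] a, double[] b) {
    double s = 0; for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s;
  }

  static double[] mul(double[][] A, double[] v) {
    double[] r = new double[v.length];
    for (int i = 0; i < A.length; i++) r[i] = dot(A[i], v);
    return r;
  }

  public static void main(String[] args) {
    double[][] A = {{4, 1}, {1, 3}};         // symmetric positive-definite
    double[] b = {1, 2};
    double[] x = solve(A, b, 10, 1e-12);     // converges in 2 steps for n = 2
    System.out.printf("x = [%.6f, %.6f]%n", x[0], x[1]);
  }
}
```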

TrainingTester.java:452 executed in 0.42 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new QuadraticSearch());
      iterativeTrainer.setOrientation(new GradientDescent());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2385413100318
Reset training subject: 2385423764146
Constructing line search parameters: GD
F(0.0) = LineSearchPoint{point=PointSample{avg=3.1623911999999996}, derivative=-0.63247824}
New Minimum: 3.1623911999999996 > 3.162391199936752
F(1.0E-10) = LineSearchPoint{point=PointSample{avg=3.162391199936752}, derivative=-0.6324782399936753}, evalInputDelta = -6.324762935605577E-11
New Minimum: 3.162391199936752 > 3.162391199557265
F(7.000000000000001E-10) = LineSearchPoint{point=PointSample{avg=3.162391199557265}, derivative=-0.6324782399557266}, evalInputDelta = -4.427347377600199E-10
New Minimum: 3.162391199557265 > 3.1623911969008565
F(4.900000000000001E-9) = LineSearchPoint{point=PointSample{avg=3.1623911969008565}, derivative=-0.6324782396900857}, evalInputDelta = -3.0991431643201395E-9
New Minimum: 3.1623911969008565 > 3.1623911783059966
F(3.430000000000001E-8) = LineSearchPoint{point=PointSample{avg=3.1623911783059966}, derivative=-0.6324782378305998}, evalInputDelta = -2.1694003038419396E-8
New Minimum: 3.1623911783059966 > 3.162391048141976
F(2.4010000000000004E-7) = LineSearchPoint{point=PointSample{avg=3.162391048141976}, derivative=-0.6324782248141975}, evalInputDelta = -1.5185802348938182E-7
New Minimum: 3.162391048141976 > 3.162390136993911
F(1.6807000000000003E-6) = LineSearchPoint{point=PointSample{avg=3.162390136993911}, derivative=-0.6324781336993823}, evalInputDelta = -1.0630060884864179E-6
New Minimum: 3.162390136993911 > 3.1623837589611314
F(1.1764900000000001E-5) = LineSearchPoint{point=PointSample{avg=3.1623837589611314}, derivative=-0.6324774958956754}, evalInputDelta = -7.4410388681833695E-6
New Minimum: 3.1623837589611314 > 3.16233911291176
F(8.235430000000001E-5) = LineSearchPoint{point=PointSample{avg=3.16233911291176}, derivative=-0.632473031269728}, evalInputDelta = -5.208708823944974E-5
New Minimum: 3.16233911291176 > 3.1620265993905092
F(5.764801000000001E-4) = LineSearchPoint{point=PointSample{avg=3.1620265993905092}, derivative=-0.6324417788880957}, evalInputDelta = -3.646006094903953E-4
New Minimum: 3.1620265993905092 > 3.1598394371347824
F(0.004035360700000001) = LineSearchPoint{point=PointSample{avg=3.1598394371347824}, derivative=-0.6322230122166699}, evalInputDelta = -0.0025517628652171886
New Minimum: 3.1598394371347824 > 3.1445504886029685
F(0.028247524900000005) = LineSearchPoint{point=PointSample{avg=3.1445504886029685}, derivative=-0.6306916455166892}, evalInputDelta = -0.01784071139703114
New Minimum: 3.1445504886029685 > 3.038566024536004
F(0.19773267430000002) = LineSearchPoint{point=PointSample{avg=3.038566024536004}, derivative=-0.6199720786168244}, evalInputDelta = -0.12382517546399541
New Minimum: 3.038566024536004 > 2.347545383198006
F(1.3841287201) = LineSearchPoint{point=PointSample{avg=2.347545383198006}, derivative=-0.54493511031777}, evalInputDelta = -0.8148458168019936
New Minimum: 2.347545383198006 > 0.0030606432389242958
F(9.688901040700001) = LineSearchPoint{point=PointSample{avg=0.0030606432389242958}, derivative=-0.019676332224389474}, evalInputDelta = -3.159330556761075
F(67.8223072849) = LineSearchPoint{point=PointSample{avg=105.73199518446373}, derivative=3.657135114429274}, evalInputDelta = 102.56960398446373
F(5.217100560376924) = LineSearchPoint{point=PointSample{avg=0.7234326287156855}, derivative=-0.302507981966979}, evalInputDelta = -2.438958571284314
F(36.51970392263847) = LineSearchPoint{point=PointSample{avg=22.24092958093744}, derivative=1.6773135662311476}, evalInputDelta = 19.07853838093744
F(2.809207994049113) = LineSearchPoint{point=PointSample{avg=1.635193103155028}, derivative=-0.4548019472129886}, evalInputDelta = -1.5271980968449717
F(19.66445595834379) = LineSearchPoint{point=PointSample{avg=2.95372742514114}, derivative=0.6112558095090794}, evalInputDelta = -0.20866377485885979
2.95372742514114 <= 3.1623911999999996
New Minimum: 0.0030606432389242958 > 4.4522685489371925E-31
F(9.999999999999996) = LineSearchPoint{point=PointSample{avg=4.4522685489371925E-31}, derivative=-2.332739557076025E-16}, evalInputDelta = -3.1623911999999996
Left bracket at 9.999999999999996
Converged to left
Fitness changed from 3.1623911999999996 to 4.4522685489371925E-31
Iteration 1 complete. Error: 4.4522685489371925E-31 Total: 0.3660; Orientation: 0.0018; Line Search: 0.3312
Zero gradient: 2.9840471004785405E-16
F(0.0) = LineSearchPoint{point=PointSample{avg=4.4522685489371925E-31}, derivative=-8.904537097874385E-32}
New Minimum: 4.4522685489371925E-31 > 0.0
F(9.999999999999996) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=0.0}, evalInputDelta = -4.4522685489371925E-31
0.0 <= 4.4522685489371925E-31
Converged to right
Fitness changed from 4.4522685489371925E-31 to 0.0
Iteration 2 complete. Error: 0.0 Total: 0.0323; Orientation: 0.0013; Line Search: 0.0219
Zero gradient: 0.0
F(0.0) = LineSearchPoint{point=PointSample{avg=0.0}, derivative=0.0}
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.0249; Orientation: 0.0009; Line Search: 0.0144
Iteration 3 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 3
Final threshold in iteration 3: 0.0 (> 0.0) after 0.423s (< 30.000s)

Returns

    0.0

Training Converged

Limited-Memory BFGS

Next, we apply the same optimization using L-BFGS, which is nearly ideal for purely quadratic (second-order) objectives.
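The "Accumulation History" and "Orientation rejected ... Popping history element" messages below refer to L-BFGS's bounded store of (s, y) pairs (parameter step, gradient change), which the two-loop recursion turns into a quasi-Newton direction. A hypothetical sketch of that recursion, assuming made-up names like `LbfgsSketch` and not MindsEye's actual LBFGS class:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the L-BFGS two-loop recursion: applies an implicit inverse-Hessian
// estimate, built from the last few (s, y) pairs, to the current gradient.
public class LbfgsSketch {

  // history entries are {s, y}: s = x step taken, y = gradient change observed
  static double[] direction(double[] g, List<double[][]> history) {
    int m = history.size(), n = g.length;
    double[] q = g.clone();
    double[] alpha = new double[m];
    for (int i = m - 1; i >= 0; i--) {            // first loop: newest -> oldest
      double[] s = history.get(i)[0], y = history.get(i)[1];
      alpha[i] = dot(s, q) / dot(y, s);
      for (int j = 0; j < n; j++) q[j] -= alpha[i] * y[j];
    }
    if (m > 0) {                                  // initial scaling H0 = gamma*I
      double[] s = history.get(m - 1)[0], y = history.get(m - 1)[1];
      double gamma = dot(s, y) / dot(y, y);
      for (int j = 0; j < n; j++) q[j] *= gamma;
    }
    for (int i = 0; i < m; i++) {                 // second loop: oldest -> newest
      double[] s = history.get(i)[0], y = history.get(i)[1];
      double beta = dot(y, q) / dot(y, s);
      for (int j = 0; j < n; j++) q[j] += (alpha[i] - beta) * s[j];
    }
    for (int j = 0; j < n; j++) q[j] = -q[j];     // negate: descent direction
    return q;
  }

  static double dot(double[] a, double[] b) {
    double s = 0; for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s;
  }

  public static void main(String[] args) {
    // Minimize f(x) = 0.5*(4*x0^2 + x1^2); its gradient is (4*x0, x1).
    List<double[][]> hist = new ArrayList<>();
    double[] x = {1, 1};
    double[] g = {4 * x[0], x[1]};
    for (int k = 0; k < 20; k++) {
      double[] d = direction(g, hist);
      double[] Ad = {4 * d[0], d[1]};
      double t = -dot(g, d) / dot(d, Ad);         // exact line search (quadratic)
      double[] s = {t * d[0], t * d[1]};
      double[] xNew = {x[0] + s[0], x[1] + s[1]};
      double[] gNew = {4 * xNew[0], xNew[1]};
      double[] y = {gNew[0] - g[0], gNew[1] - g[1]};
      if (dot(y, s) > 1e-12) hist.add(new double[][]{s, y});  // curvature check
      if (hist.size() > 3) hist.remove(0);        // limited memory: keep 3 pairs
      x = xNew; g = gNew;
    }
    System.out.printf("x = [%.6f, %.6f]%n", x[0], x[1]);
  }
}
```

With an empty history the recursion reduces to plain steepest descent, which is why the first iterations of the GD and L-BFGS runs in this report produce identical errors; the rejections logged below occur when a candidate pair fails the curvature (y·s > 0) check or degenerates at zero gradient.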

TrainingTester.java:509 executed in 0.55 seconds (0.000 gc):

    IterativeTrainer iterativeTrainer = new IterativeTrainer(trainable.addRef());
    try {
      iterativeTrainer.setLineSearchFactory(label -> new ArmijoWolfeSearch());
      iterativeTrainer.setOrientation(new LBFGS());
      iterativeTrainer.setMonitor(TrainingTester.getMonitor(history));
      iterativeTrainer.setTimeout(30, TimeUnit.SECONDS);
      iterativeTrainer.setIterationsPerSample(100);
      iterativeTrainer.setMaxIterations(250);
      iterativeTrainer.setTerminateThreshold(0);
      return iterativeTrainer.run();
    } finally {
      iterativeTrainer.freeRef();
    }
Logging
Reset training subject: 2385843160852
Reset training subject: 2385854260495
Adding measurement 45771cad to history. Total: 0
LBFGS Accumulation History: 1 points
Constructing line search parameters: GD
Non-optimal measurement 3.1623911999999996 < 3.1623911999999996. Total: 1
th(0)=3.1623911999999996;dx=-0.63247824
Adding measurement 23000db2 to history. Total: 1
New Minimum: 3.1623911999999996 > 1.9465433358680415
END: th(2.154434690031884)=1.9465433358680415; dx=-0.49621493390536886 evalInputDelta=1.2158478641319581
Fitness changed from 3.1623911999999996 to 1.9465433358680415
Iteration 1 complete. Error: 1.9465433358680415 Total: 0.0602; Orientation: 0.0045; Line Search: 0.0251
Non-optimal measurement 1.9465433358680415 < 1.9465433358680415. Total: 2
LBFGS Accumulation History: 2 points
Non-optimal measurement 1.9465433358680415 < 1.9465433358680415. Total: 2
th(0)=1.9465433358680415;dx=-0.38930866717360835
Adding measurement 150f7b67 to history. Total: 2
New Minimum: 1.9465433358680415 > 0.5589026223307967
END: th(4.641588833612779)=0.5589026223307967; dx=-0.20860759093543893 evalInputDelta=1.3876407135372448
Fitness changed from 1.9465433358680415 to 0.5589026223307967
Iteration 2 complete. Error: 0.5589026223307967 Total: 0.0377; Orientation: 0.0031; Line Search: 0.0247
Non-optimal measurement 0.5589026223307967 < 0.5589026223307967. Total: 3
LBFGS Accumulation History: 3 points
Non-optimal measurement 0.5589026223307967 < 0.5589026223307967. Total: 3
th(0)=0.5589026223307967;dx=-0.11178052446615935
Adding measurement 4f3ee3f2 to history. Total: 3
New Minimum: 0.5589026223307967 > 2.7145982566138087E-32
WOLF (strong): th(10.000000000000002)=2.7145982566138087E-32; dx=2.3870469036788492E-17 evalInputDelta=0.5589026223307967
Non-optimal measurement 0.1397256555826991 < 2.7145982566138087E-32. Total: 4
END: th(5.000000000000001)=0.1397256555826991; dx=-0.05589026223307966 evalInputDelta=0.41917696674809757
Fitness changed from 0.5589026223307967 to 2.7145982566138087E-32
Iteration 3 complete. Error: 2.7145982566138087E-32 Total: 0.0399; Orientation: 0.0017; Line Search: 0.0310
Non-optimal measurement 2.7145982566138087E-32 < 2.7145982566138087E-32. Total: 4
Rejected: LBFGS Orientation magnitude: 7.368e-16, gradient 7.368e-17, dot -1.000; [f7098ce8-42f8-4b8b-95c2-f8b0da3c9824 = 1.000/1.000e+00, 4904f846-a9f6-4b10-a800-6d242bb4d03a = 1.000/1.000e+00, 1218a2a2-2021-41e2-a1a2-f32d8973be74 = 1.000/1.000e+00, cbe87c0f-a3d8-4eb5-b1fa-23b0b639c7f8 = 1.000/1.000e+00, 17ffd1a4-4f58-468c-8f75-549ca8a986a4 = 1.000/1.000e+00]
Orientation rejected. Popping history element from 2.7145982566138087E-32, 0.5589026223307967, 1.9465433358680415, 3.1623911999999996
LBFGS Accumulation History: 3 points
Removed measurement 4f3ee3f2 to history. Total: 3
Adding measurement 9d52408 to history. Total: 3
th(0)=2.7145982566138087E-32;dx=-5.429196513227619E-33
Adding measurement 209356bd to history. Total: 4
New Minimum: 2.7145982566138087E-32 > 2.1666711874356404E-35
WOLF (strong): th(10.772173450159421)=2.1666711874356404E-35; dx=4.3333423748712813E-35 evalInputDelta=2.712431585426373E-32
Non-optimal measurement 3.305377267054594E-33 < 2.1666711874356404E-35. Total: 5
END: th(5.386086725079711)=3.305377267054594E-33; dx=-1.3394842763213229E-33 evalInputDelta=2.3840605299083495E-32
Fitness changed from 2.7145982566138087E-32 to 2.1666711874356404E-35
Iteration 4 complete. Error: 2.1666711874356404E-35 Total: 0.1072; Orientation: 0.0706; Line Search: 0.0289
Non-optimal measurement 2.1666711874356404E-35 < 2.1666711874356404E-35. Total: 5
Rejected: LBFGS Orientation magnitude: 2.082e-17, gradient 2.082e-18, dot -1.000; [4904f846-a9f6-4b10-a800-6d242bb4d03a = 0.000e+00, 17ffd1a4-4f58-468c-8f75-549ca8a986a4 = 1.000/1.000e+00, cbe87c0f-a3d8-4eb5-b1fa-23b0b639c7f8 = 1.000/1.000e+00, 1218a2a2-2021-41e2-a1a2-f32d8973be74 = 0.000e+00, f7098ce8-42f8-4b8b-95c2-f8b0da3c9824 = 1.000/1.000e+00]
Orientation rejected. Popping history element from 2.1666711874356404E-35, 2.7145982566138087E-32, 0.5589026223307967, 1.9465433358680415, 3.1623911999999996
Rejected: LBFGS Orientation magnitude: 2.082e-17, gradient 2.082e-18, dot -1.000; [17ffd1a4-4f58-468c-8f75-549ca8a986a4 = 1.000/1.000e+00, cbe87c0f-a3d8-4eb5-b1fa-23b0b639c7f8 = 1.000/1.000e+00, 4904f846-a9f6-4b10-a800-6d242bb4d03a = 0.000e+00, 1218a2a2-2021-41e2-a1a2-f32d8973be74 = 0.000e+00, f7098ce8-42f8-4b8b-95c2-f8b0da3c9824 = 1.000/1.000e+00]
Orientation rejected. Popping history element from 2.1666711874356404E-35, 2.7145982566138087E-32, 0.5589026223307967, 1.9465433358680415
LBFGS Accumulation History: 3 points
Removed measurement 209356bd to history. Total: 4
Removed measurement 9d52408 to history. Total: 3
Adding measurement 6b6017ef to history. Total: 3
th(0)=2.1666711874356404E-35;dx=-4.333342374871282E-36
Adding measurement fa8653 to history. Total: 4
New Minimum: 2.1666711874356404E-35 > 0.0
END: th(11.60397208403195)=0.0; dx=0.0 evalInputDelta=2.1666711874356404E-35
Fitness changed from 2.1666711874356404E-35 to 0.0
Iteration 5 complete. Error: 0.0 Total: 0.1425; Orientation: 0.1138; Line Search: 0.0218
Non-optimal measurement 0.0 < 0.0. Total: 5
Rejected: LBFGS Orientation magnitude: 0.000e+00, gradient 0.000e+00, dot NaN; [f7098ce8-42f8-4b8b-95c2-f8b0da3c9824 = 0.000e+00, 1218a2a2-2021-41e2-a1a2-f32d8973be74 = 0.000e+00, 17ffd1a4-4f58-468c-8f75-549ca8a986a4 = 0.000e+00, cbe87c0f-a3d8-4eb5-b1fa-23b0b639c7f8 = 0.000e+00, 4904f846-a9f6-4b10-a800-6d242bb4d03a = 0.000e+00]
Orientation rejected. Popping history element from 0.0, 2.1666711874356404E-35, 0.5589026223307967, 1.9465433358680415, 3.1623911999999996
Rejected: LBFGS Orientation magnitude: 0.000e+00, gradient 0.000e+00, dot NaN; [1218a2a2-2021-41e2-a1a2-f32d8973be74 = 0.000e+00, f7098ce8-42f8-4b8b-95c2-f8b0da3c9824 = 0.000e+00, 4904f846-a9f6-4b10-a800-6d242bb4d03a = 0.000e+00, cbe87c0f-a3d8-4eb5-b1fa-23b0b639c7f8 = 0.000e+00, 17ffd1a4-4f58-468c-8f75-549ca8a986a4 = 0.000e+00]
Orientation rejected. Popping history element from 0.0, 2.1666711874356404E-35, 0.5589026223307967, 1.9465433358680415
LBFGS Accumulation History: 3 points
Removed measurement fa8653 to history. Total: 4
Removed measurement 6b6017ef to history. Total: 3
Adding measurement 39d97b6c to history. Total: 3
th(0)=0.0;dx=0.0 (ERROR: Starting derivative negative)
Non-optimal measurement 0.0 < 0.0. Total: 4
Fitness changed from 0.0 to 0.0
Static Iteration Total: 0.1611; Orientation: 0.1170; Line Search: 0.0355
Iteration 6 failed. Error: 0.0
Previous Error: 0.0 -> 0.0
Optimization terminated 6
Final threshold in iteration 6: 0.0 (> 0.0) after 0.548s (< 30.000s)

Returns

    0.0

Training Converged

TrainingTester.java:432 executed in 0.15 seconds (0.000 gc):

    return TestUtil.compare(title + " vs Iteration", runs);
Logging
Plotting range=[1.0, -34.66420699191851], [5.0, 0.2892640768540044]; valueStats=DoubleSummaryStatistics{count=9, sum=5.010892, min=0.000000, average=0.556766, max=1.946543}
Plotting 5 points for GD
Plotting 2 points for CjGD
Plotting 5 points for LBFGS

Returns

Result

TrainingTester.java:435 executed in 0.02 seconds (0.000 gc):

    return TestUtil.compareTime(title + " vs Time", runs);
Logging
Plotting range=[0.0, -34.66420699191851], [0.327, 0.2892640768540044]; valueStats=DoubleSummaryStatistics{count=9, sum=5.010892, min=0.000000, average=0.556766, max=1.946543}
Plotting 5 points for GD
Plotting 2 points for CjGD
Plotting 5 points for LBFGS

Returns

Result

Results

TrainingTester.java:255 executed in 0.00 seconds (0.000 gc):

    return grid(inputLearning, modelLearning, completeLearning);

Returns

Result

TrainingTester.java:258 executed in 0.00 seconds (0.000 gc):

    return new ComponentResult(null == inputLearning ? null : inputLearning.value,
        null == modelLearning ? null : modelLearning.value, null == completeLearning ? null : completeLearning.value);

Returns

    {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}

LayerTests.java:425 executed in 0.00 seconds (0.000 gc):

    throwException(exceptions.addRef());

Results

details: {"input":{ "LBFGS": { "type": "Converged", "value": 0.0 }, "CjGD": { "type": "Converged", "value": 0.0 }, "GD": { "type": "Converged", "value": 0.0 } }, "model":null, "complete":null}
result: OK
  {
    "result": "OK",
    "performance": {
      "execution_time": "2.551",
      "gc_time": "0.338"
    },
    "created_on": 1586737016559,
    "file_name": "trainingTest",
    "report": {
      "simpleName": "Right",
      "canonicalName": "com.simiacryptus.mindseye.layers.cudnn.ImgCropLayerTest.Right",
      "link": "https://github.com/SimiaCryptus/mindseye-cudnn/tree/59d5b3318556370acb2d83ee6ec123ce0fc6974f/src/test/java/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayerTest.java",
      "javaDoc": ""
    },
    "training_analysis": {
      "input": {
        "LBFGS": {
          "type": "Converged",
          "value": 0.0
        },
        "CjGD": {
          "type": "Converged",
          "value": 0.0
        },
        "GD": {
          "type": "Converged",
          "value": 0.0
        }
      }
    },
    "archive": "s3://code.simiacrypt.us/tests/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayer/Right/trainingTest/202004131656",
    "id": "002f97e8-d967-421b-ab60-cf92d8214c2e",
    "report_type": "Components",
    "display_name": "Comparative Training",
    "target": {
      "simpleName": "ImgCropLayer",
      "canonicalName": "com.simiacryptus.mindseye.layers.cudnn.ImgCropLayer",
      "link": "https://github.com/SimiaCryptus/mindseye-cudnn/tree/59d5b3318556370acb2d83ee6ec123ce0fc6974f/src/main/java/com/simiacryptus/mindseye/layers/cudnn/ImgCropLayer.java",
      "javaDoc": ""
    }
  }