Execution Time: 0.19s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-fedora29-gcc8-dbg (root-fedora29-2.cern.ch) on 2019-11-15 09:29:21

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
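For orientation, the topology printed above is small enough to reproduce outside the test harness. Below is a minimal sketch of the forward pass, assuming plain std::vector arithmetic and hypothetical helper names (dense, batchNorm); TMVA's real layer classes are structured differently, and the eps default here is illustrative rather than taken from the log:

    #include <cmath>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;  // batch-major: [row][feature]

    // Dense layer, identity activation: out[r][j] = sum_k W[j][k] * in[r][k] + b[j].
    // W is stored width x inputs, matching the 2x4 and 1x2 weight dumps below.
    Matrix dense(const Matrix& in, const Matrix& W, const std::vector<double>& b) {
        Matrix out(in.size(), std::vector<double>(W.size()));
        for (size_t r = 0; r < in.size(); ++r)
            for (size_t j = 0; j < W.size(); ++j) {
                double s = b[j];
                for (size_t k = 0; k < in[r].size(); ++k) s += W[j][k] * in[r][k];
                out[r][j] = s;
            }
        return out;
    }

    // Batch normalization in training mode: per-feature standardization with the
    // biased (1/N) batch variance, then learned scale gamma and shift beta.
    Matrix batchNorm(const Matrix& in, const std::vector<double>& gamma,
                     const std::vector<double>& beta, double eps = 1e-4) {
        const size_t n = in.size(), d = in[0].size();
        Matrix out(n, std::vector<double>(d));
        for (size_t j = 0; j < d; ++j) {
            double mu = 0.0, var = 0.0;
            for (size_t r = 0; r < n; ++r) mu += in[r][j];
            mu /= n;
            for (size_t r = 0; r < n; ++r) var += (in[r][j] - mu) * (in[r][j] - mu);
            var /= n;
            for (size_t r = 0; r < n; ++r)
                out[r][j] = gamma[j] * (in[r][j] - mu) / std::sqrt(var + eps) + beta[j];
        }
        return out;
    }

The test output below drives exactly this chain: dense (4 -> 2), batch norm over the two features, dense (2 -> 1), with the batch-norm scale initialized to 1 and the shift to 0 (see the layer 1 weight dumps).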
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.357633
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.8171   -0.005034 
   1 |     -1.334     -0.6634 
   2 |      2.382       1.418 
   3 |     0.4199      0.3573 
   4 |       1.05      0.2034 
   5 |      1.576     0.02718 
   6 |    -0.6754      -1.677 
   7 |     -1.078      0.8331 
   8 |     0.5149     -0.7462 
   9 |   -0.09559    -0.04158 

output BN 
output DL feature 0 mean 0.357633	output DL std 1.17823
output DL feature 1 mean -0.0294551	output DL std 0.861445
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.411     0.02988 
   1 |     -1.513     -0.7757 
   2 |      1.811       1.771 
   3 |    0.05572      0.4732 
   4 |      0.619      0.2849 
   5 |       1.09      0.0693 
   6 |    -0.9241      -2.016 
   7 |     -1.284       1.055 
   8 |     0.1407      -0.877 
   9 |    -0.4055    -0.01483 

output BN feature 0 mean 1.66533e-17	output BN std 1.05405
output BN feature 1 mean 5.0307e-18	output BN std 1.05401
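The batch-norm output statistics deserve a note: the means are zero to round-off, but the printed stds are ~1.054 rather than 1. That is consistent with normalizing by the biased (1/N) batch variance (plus eps) while the printout uses the unbiased (1/(N-1)) sample std, which gives sqrt(N/(N-1)) = 1.054093 for N = 10, shrunk slightly by the eps term. A quick check, with eps = 1e-4 inferred because it reproduces both printed values:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double N = 10.0, eps = 1e-4;            // eps inferred, not shown in the log
        const double dlStd[2] = {1.17823, 0.861445};  // "output DL std" above, 1/(N-1) estimator
        for (int j = 0; j < 2; ++j) {
            const double var = dlStd[j] * dlStd[j] * (N - 1.0) / N;  // biased batch variance
            // expected sample std of the normalized feature:
            std::printf("%.5f\n",
                        std::sqrt(N / (N - 1.0)) * std::sqrt(var / (var + eps)));
        }
        // prints 1.05405 and 1.05401, matching "output BN std" above
    }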
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |  0.0008284   -0.001069   0.0003435      0.0017 
   1 |   0.004328   -0.002727   -0.009058     0.02725 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |      -0.42     -0.6927       1.179      -0.468 
   1 |    -0.3281      0.2005      0.4924     -0.8305 

 training batch 2 mu var0 = 0.357636
compute loss for weight  -0.420031  -0.420041 result 0.0100393
 training batch 3 mu var0 = 0.357633
compute loss for weight  -0.420051  -0.420041 result 0.0100393
 training batch 4 mu var0 = 0.357634
compute loss for weight  -0.420036  -0.420041 result 0.0100393
 training batch 5 mu var0 = 0.357633
compute loss for weight  -0.420046  -0.420041 result 0.0100393
   --dy = 0.000828377 dy_ref = 0.000828377
 training batch 6 mu var0 = 0.357632
compute loss for weight  -0.692692  -0.692702 result 0.0100393
 training batch 7 mu var0 = 0.357633
compute loss for weight  -0.692712  -0.692702 result 0.0100393
 training batch 8 mu var0 = 0.357633
compute loss for weight  -0.692697  -0.692702 result 0.0100393
 training batch 9 mu var0 = 0.357633
compute loss for weight  -0.692707  -0.692702 result 0.0100393
   --dy = -0.00106889 dy_ref = -0.00106889
 training batch 10 mu var0 = 0.357633
compute loss for weight  1.17862  1.17861 result 0.0100393
 training batch 11 mu var0 = 0.357633
compute loss for weight  1.1786  1.17861 result 0.0100393
 training batch 12 mu var0 = 0.357633
compute loss for weight  1.17861  1.17861 result 0.0100393
 training batch 13 mu var0 = 0.357633
compute loss for weight  1.1786  1.17861 result 0.0100393
   --dy = 0.000343518 dy_ref = 0.000343518
 training batch 14 mu var0 = 0.357633
compute loss for weight  -0.467949  -0.467959 result 0.0100393
 training batch 15 mu var0 = 0.357633
compute loss for weight  -0.467969  -0.467959 result 0.0100393
 training batch 16 mu var0 = 0.357633
compute loss for weight  -0.467954  -0.467959 result 0.0100393
 training batch 17 mu var0 = 0.357633
compute loss for weight  -0.467964  -0.467959 result 0.0100393
   --dy = 0.00170023 dy_ref = 0.00170023
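Each weight check in this log evaluates the loss at four perturbed points, w ± h and w ± h/2 with h = 1e-5, which is consistent with a five-point central difference; --dy is the numerical estimate and dy_ref the analytic gradient from backpropagation. A sketch under that assumption:

    #include <functional>

    // Five-point central difference, O(k^4) accurate. With k = h/2 the sampled
    // points are w + h, w - h, w + h/2, w - h/2, matching the four loss
    // evaluations per weight logged above.
    double numericalDerivative(const std::function<double(double)>& lossAt,
                               double w, double h = 1e-5) {
        const double k = h / 2.0;
        return (lossAt(w - 2*k) - 8*lossAt(w - k)
                + 8*lossAt(w + k) - lossAt(w + 2*k)) / (12*k);
    }

For layer 0, all four estimates agree with dy_ref to the six printed digits.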
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    0.02125   -0.001168 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.357633
compute loss for weight  1.00001  1 result 0.0100395
 training batch 19 mu var0 = 0.357633
compute loss for weight  0.99999  1 result 0.0100391
 training batch 20 mu var0 = 0.357633
compute loss for weight  1.00001  1 result 0.0100394
 training batch 21 mu var0 = 0.357633
compute loss for weight  0.999995  1 result 0.0100392
   --dy = 0.0212465 dy_ref = 0.0212465
 training batch 22 mu var0 = 0.357633
compute loss for weight  1.00001  1 result 0.0100393
 training batch 23 mu var0 = 0.357633
compute loss for weight  0.99999  1 result 0.0100393
 training batch 24 mu var0 = 0.357633
compute loss for weight  1.00001  1 result 0.0100393
 training batch 25 mu var0 = 0.357633
compute loss for weight  0.999995  1 result 0.0100393
   --dy = -0.00116783 dy_ref = -0.00116783
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -1.301e-18  -2.711e-20 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.357633
compute loss for weight  1e-05  0 result 0.0100393
 training batch 27 mu var0 = 0.357633
compute loss for weight  -1e-05  0 result 0.0100393
 training batch 28 mu var0 = 0.357633
compute loss for weight  5e-06  0 result 0.0100393
 training batch 29 mu var0 = 0.357633
compute loss for weight  -5e-06  0 result 0.0100393
   --dy = -4.91505e-13 dy_ref = -1.30104e-18
 training batch 30 mu var0 = 0.357633
compute loss for weight  1e-05  0 result 0.0100393
 training batch 31 mu var0 = 0.357633
compute loss for weight  -1e-05  0 result 0.0100393
 training batch 32 mu var0 = 0.357633
compute loss for weight  5e-06  0 result 0.0100393
 training batch 33 mu var0 = 0.357633
compute loss for weight  -5e-06  0 result 0.0100393
   --dy = 0 dy_ref = -2.71051e-20
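This second layer-1 pass checks the batch-norm shift (beta), initialized to 0 per the dump above. Its analytic gradients are zero to round-off (~1e-18), and a finite difference cannot confirm values that small: perturbing by h = 1e-5 changes a loss of ~0.01 by ~1e-23, far below the ~2e-18 resolution of a double at that magnitude, so the observed dy of 0 and -4.9e-13 are round-off noise rather than disagreement. The arithmetic:

    #include <cstdio>

    int main() {
        const double L = 0.0100393;        // loss value printed above
        const double h = 1e-5;             // finite-difference step
        const double dyRef = -1.30104e-18; // analytic gradient printed above
        std::printf("true loss change:  %g\n", dyRef * h);       // ~ -1.3e-23
        std::printf("double resolution: %g\n", L * 2.2e-16);     // ~ 2.2e-18
        std::printf("dy noise floor:    %g\n", L * 2.2e-16 / h); // ~ 2.2e-13
    }

The ~2e-13 noise floor matches the observed dy = -4.91505e-13, so the mismatch with dy_ref here is expected and harmless.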
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.1987    -0.07849 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.1069     0.01488 

 training batch 34 mu var0 = 0.357633
compute loss for weight  -0.106898  -0.106908 result 0.0100373
 training batch 35 mu var0 = 0.357633
compute loss for weight  -0.106918  -0.106908 result 0.0100413
 training batch 36 mu var0 = 0.357633
compute loss for weight  -0.106903  -0.106908 result 0.0100383
 training batch 37 mu var0 = 0.357633
compute loss for weight  -0.106913  -0.106908 result 0.0100403
   --dy = -0.198735 dy_ref = -0.198735
 training batch 38 mu var0 = 0.357633
compute loss for weight  0.0148892  0.0148792 result 0.0100385
 training batch 39 mu var0 = 0.357633
compute loss for weight  0.0148692  0.0148792 result 0.0100401
 training batch 40 mu var0 = 0.357633
compute loss for weight  0.0148842  0.0148792 result 0.0100389
 training batch 41 mu var0 = 0.357633
compute loss for weight  0.0148742  0.0148792 result 0.0100397
   --dy = -0.0784875 dy_ref = -0.0784875
Testing weight gradients: maximum relative error: 2.10968e-09
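The summary line compares every numerical estimate against its analytic counterpart; a maximum relative error of 2.1e-09 is comfortably within expectations for a five-point stencil with h = 1e-5 in double precision. The exact tolerance logic is not visible in this log; a hypothetical criterion with the shape the numbers above suggest (a relative check that skips round-off-level gradients such as the layer-1 beta entries):

    #include <algorithm>
    #include <cmath>

    // Hypothetical pass criterion, not the test's verbatim code.
    bool gradientOk(double dy, double dyRef,
                    double relTol = 1e-5, double zeroTol = 1e-10) {
        if (std::fabs(dy) < zeroTol && std::fabs(dyRef) < zeroTol)
            return true;  // both zero to round-off, e.g. the beta gradients above
        return std::fabs(dy - dyRef)
                   / std::max(std::fabs(dy), std::fabs(dyRef)) < relTol;
    }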